Title: Discrepancy and Domain Adaptation

Abstract: A standard assumption in learning theory and its applications is that training and test points are drawn according to the same distribution. In practice, however, this ideal assumption often does not hold: the learner receives labeled data from some source domain whose distribution differs from that of the target domain. This gives rise to the domain adaptation problem, which consists of using labeled data from the source domain, together with typically large amounts of unlabeled data from the target domain, to learn a hypothesis that performs well on the target domain. This talk presents a number of recent theoretical and algorithmic results for domain adaptation in regression using the notion of discrepancy. The material presented includes recent joint work with Corinna Cortes (Google) and previous work with Yishay Mansour and Afshin Rostamizadeh.
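To give a concrete sense of the discrepancy notion mentioned above: for a loss L and hypothesis set H, the discrepancy between two distributions P and Q is max over h, h' in H of |E_P[L(h, h')] - E_Q[L(h, h')]|. The sketch below estimates this quantity from two unlabeled samples for the squared loss; the small finite set of linear hypotheses and the sample-based estimate are simplifying assumptions made purely for illustration, not the algorithmic approach of the talk.

```python
import itertools
import numpy as np

def empirical_discrepancy(X_src, X_tgt, hypotheses):
    """Estimate disc(P, Q) = max_{h, h'} |E_P[(h - h')^2] - E_Q[(h - h')^2]|
    from unlabeled source and target samples, for linear hypotheses and
    the squared loss, over a finite hypothesis set (an illustrative assumption)."""
    worst = 0.0
    for h, h2 in itertools.combinations(hypotheses, 2):
        # Average squared disagreement between h and h2 on each sample.
        src_gap = np.mean((X_src @ h - X_src @ h2) ** 2)
        tgt_gap = np.mean((X_tgt @ h - X_tgt @ h2) ** 2)
        worst = max(worst, abs(src_gap - tgt_gap))
    return worst

rng = np.random.default_rng(0)
X_src = rng.normal(size=(200, 3))            # source sample
X_tgt = rng.normal(size=(200, 3)) + 0.5      # shifted target sample
hs = [rng.normal(size=3) for _ in range(5)]  # small finite hypothesis set
print(empirical_discrepancy(X_src, X_tgt, hs))
```

A larger discrepancy indicates that some pair of hypotheses is judged very differently by the two domains, so guarantees transferred from source to target degrade accordingly.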