Invited Talks

 
Learning with Output Kernels and Latent Kernels (Slides)

Prof. Dale Schuurmans

Department of Computing Science
University of Alberta

Abstract:
Kernels provide a finite representation of abstract feature mappings, which makes them an invaluable modeling tool in computational data analysis. Although kernel methods are frequently applied to input representations, particularly when learning functions, much less work has considered applying kernels to output representations or to representations of latent structure. In this talk, I will discuss some new applications of kernel methods to both output and latent representations. I will first discuss how losses beyond squared error can be exactly and efficiently accommodated within output kernels. Then I will discuss how kernel methods can be extended to partially supervised learning problems, such as latent representation discovery. These kernel-based extensions achieve state-of-the-art results in robust regression, multi-label classification, and hidden-layer network training.
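For background, the standard input-kernel setting that the abstract contrasts against is kernel ridge regression with squared-error loss, where the dual solution is alpha = (K + lambda*I)^(-1) y over the training Gram matrix K. The sketch below is only this classical baseline, not the speaker's output- or latent-kernel methods; the helper names (rbf_kernel, fit_kernel_ridge, predict) are illustrative choices, not from the talk.

    import numpy as np

    def rbf_kernel(X, Z, gamma=1.0):
        # Gram matrix K[i, j] = exp(-gamma * ||x_i - z_j||^2)
        sq_dists = (np.sum(X**2, axis=1)[:, None]
                    + np.sum(Z**2, axis=1)[None, :]
                    - 2.0 * X @ Z.T)
        return np.exp(-gamma * sq_dists)

    def fit_kernel_ridge(X, y, lam=0.1, gamma=1.0):
        # Dual solution for squared error: alpha = (K + lam*I)^(-1) y
        K = rbf_kernel(X, X, gamma)
        return np.linalg.solve(K + lam * np.eye(len(X)), y)

    def predict(X_train, alpha, X_test, gamma=1.0):
        # Predictions f(x) = sum_i alpha_i * k(x_i, x)
        return rbf_kernel(X_test, X_train, gamma) @ alpha

    # Tiny usage example on synthetic 1-D data.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(50, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
    alpha = fit_kernel_ridge(X, y)
    print(predict(X, alpha, np.array([[0.0], [1.0]])))

Note that everything here kernelizes the inputs only; the talk's point is that analogous Gram-matrix representations can also be placed on outputs and latent structure, and that losses beyond the squared error used above can still be handled exactly.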

Bio:
Dale Schuurmans is a Professor of Computing Science and Canada Research Chair in Machine Learning at the University of Alberta. He received his PhD in Computer Science from the University of Toronto, and has held positions at the National Research Council Canada, the University of Pennsylvania, the NEC Research Institute, and the University of Waterloo. He is an Associate Editor of JAIR and AIJ, and currently serves on the IMLS and NIPS Foundation boards. He previously served as Program Co-chair for NIPS-2008 and ICML-2004, and as an Associate Editor for IEEE TPAMI, JMLR, and MLJ. His research interests include machine learning, optimization, probability models, and search. He is the author of more than 130 refereed publications in these areas and has received paper awards at IJCAI, AAAI, ICML, IEEE ICAL, and IEEE ADPRL.