Seminar abstract

Convex Methods for Representation Learning

Dale Schuurmans
Professor
Department of Computing Science, University of Alberta


Abstract: Automated feature discovery is a fundamental problem in data analysis. Although classical feature discovery methods fail to guarantee optimal solutions in general, convex reformulations have been developed for a number of such problems. Most of these reformulations are based on one of two key strategies: relaxing pairwise representations, or exploiting induced matrix norms. Despite their use of relaxation, convex reformulations can demonstrate improvements in solution quality by eliminating local minima. In this talk, I will discuss some recent convex reformulations of representation learning problems, including dimensionality reduction, sparse coding, and semi-supervised extensions that accommodate multi-view learning. A comparison to existing feature discovery methods demonstrates improved generalization and in some cases even improved efficiency.
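As a minimal illustration of the induced-matrix-norm strategy mentioned above (not material from the talk itself), the nuclear norm is a convex surrogate for matrix rank, and the nuclear-norm-regularized denoising problem min_X 0.5*||X - Y||_F^2 + lam*||X||_* has a closed-form solution via singular value thresholding. The Python/NumPy sketch below is illustrative only; the function name svt and the parameter lam are assumptions, not notation from the seminar.

    import numpy as np

    def svt(Y, lam):
        # Singular value thresholding: closed-form minimizer of
        #   0.5 * ||X - Y||_F^2 + lam * ||X||_*
        # obtained by soft-thresholding the singular values of Y.
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s_thr = np.maximum(s - lam, 0.0)
        return (U * s_thr) @ Vt

    # Usage: recover low-rank structure from a noisy observation.
    rng = np.random.default_rng(0)
    L = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # rank-3 ground truth
    Y = L + 0.1 * rng.standard_normal(L.shape)                       # noisy observation
    X = svt(Y, lam=1.0)
    print(np.linalg.matrix_rank(X, tol=1e-8))  # small rank: the convex relaxation prunes singular values

Because the objective is convex, this global solution is found in a single SVD, with no risk of the local minima that affect classical alternating heuristics for low-rank factorization.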

Bio: Dale Schuurmans is a Professor of Computing Science and Canada Research Chair in Machine Learning at the University of Alberta. He received his PhD in Computer Science from the University of Toronto, and has been employed at the National Research Council Canada, the University of Pennsylvania, the NEC Research Institute and the University of Waterloo. He is an Associate Editor of JAIR and AIJ, and currently serves on the IMLS and NIPS Foundation boards. He has previously served as a Program Co-chair for NIPS-2008 and ICML-2004, and as an Associate Editor for IEEE TPAMI, JMLR and MLJ. His research interests include machine learning, optimization, probability models, and search. He is the author of more than 130 refereed publications in these areas and has received paper awards at IJCAI, AAAI, ICML, IEEE ICAL and IEEE ADPRL.

© LAMDA, 2022