Talk Information
Title: Meta-learning with Many Tasks
Speaker: Prof. James Kwok, Hong Kong University of Science and Technology
Abstract: In many machine learning applications, one has only a limited number of training samples. A successful approach to alleviating this problem is meta-learning, which extracts meta-knowledge from similar historical tasks. Naturally, the more tasks there are to learn from, the more meta-knowledge can be extracted. However, popular meta-learning algorithms such as MAML learn only a single, globally-shared meta-model. This can be problematic when the task environment is complex and a single meta-model is not sufficient to capture the diversity of the meta-knowledge. Moreover, the sampling of tasks in each iteration increases the variance of the stochastic gradient, resulting in slow convergence. In this talk, we propose to address these problems by (i) using multiple meta-models for initialization, and (ii) incorporating variance reduction into meta-learning for faster convergence. Experiments on various meta-learning tasks demonstrate the effectiveness of the proposed methods over state-of-the-art algorithms.
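To illustrate the limitation the abstract describes, here is a minimal sketch (not the speaker's actual algorithm) of MAML-style meta-learning on a toy bimodal task environment. The quadratic per-task loss, the cluster centres, and the learning rates are all illustrative assumptions; the point is that a single shared initialization sits between the two task clusters, while two meta-models fit the environment far better:

```python
import numpy as np

# Toy MAML-style sketch (illustrative assumptions throughout).
# Each task t has loss L_t(w) = (w - c_t)^2; the "meta-model" is the
# initialization w0, adapted to a task by one inner gradient step.

def inner_adapt(w0, c, alpha=0.1):
    # One inner-loop gradient step: w' = w0 - alpha * dL_t/dw
    return w0 - alpha * 2.0 * (w0 - c)

def post_adapt_loss(w0, c, alpha=0.1):
    # Task loss after adaptation from initialization w0
    return (inner_adapt(w0, c, alpha) - c) ** 2

def meta_grad(w0, c, alpha=0.1):
    # Gradient of the post-adaptation loss w.r.t. the initialization,
    # differentiating through the inner update (the MAML-style outer gradient)
    return 2.0 * (inner_adapt(w0, c, alpha) - c) * (1.0 - 2.0 * alpha)

# Complex (bimodal) task environment: task optima cluster around -2 and +2.
rng = np.random.default_rng(0)
tasks = np.concatenate([rng.normal(-2, 0.1, 50), rng.normal(2, 0.1, 50)])

# (a) Single globally-shared meta-model: one initialization for all tasks.
#     Note the task sampling in each outer iteration, which is the source
#     of stochastic-gradient variance mentioned in the abstract.
w0 = 0.0
for _ in range(200):
    batch = rng.choice(tasks, 10)
    w0 -= 0.05 * np.mean([meta_grad(w0, c) for c in batch])
single_loss = np.mean([post_adapt_loss(w0, c) for c in tasks])

# (b) Multiple meta-models: each task adapts from its best-matching
#     initialization (cluster centres assumed known here, for illustration).
inits = [-2.0, 2.0]
multi_loss = np.mean([min(post_adapt_loss(v, c) for v in inits) for c in tasks])

print(single_loss > multi_loss)  # prints True: two meta-models fit the bimodal environment better
```

Because the single initialization is pulled toward the mean of both clusters, one inner step cannot reach either mode, whereas each of the two initializations starts next to its own cluster.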