Seminar abstract

Learning through deterministic assignment of hidden parameters

林绍波
Ph.D. candidate
School of Mathematics and Statistics, Xi'an Jiaotong University


Abstract: Supervised learning boils down to determining the hidden and bright parameters of a parameterized hypothesis space from finite input-output samples. The hidden parameters determine the attributes of the hidden predictors, i.e., the nonlinear mechanism of an estimator, while the bright parameters characterize how the hidden predictors are linearly combined, i.e., the linear mechanism. In the traditional learning paradigm, hidden and bright parameters are not distinguished and are trained simultaneously in one learning process. Such one-stage learning (OSL) facilitates theoretical analysis but may suffer from a high computational burden. To overcome this difficulty, a two-stage learning (TSL) scheme, featuring learning through random assignment of hidden parameters (LRHP), has been developed. LRHP randomly assigns the hidden parameters in the first stage and determines the bright parameters by solving a linear least squares problem in the second stage. Although LRHP works well in many applications, it suffers from an uncertainty problem: its performance can only be guaranteed in a certain statistical expectation sense. Under this circumstance, we propose a new TSL scheme, learning through deterministic assignment of hidden parameters (LDHP), in which the hidden parameters are generated deterministically using the minimal Riesz energy points on a sphere and the best packing points in an interval. We show theoretically that with such a deterministic assignment of hidden parameters, LDHP shares almost the same generalization performance as OSL, i.e., it does not degrade the generalization capability of OSL. Thus LDHP provides an effective way to overcome both the high computational burden of OSL and the uncertainty problem of LRHP. We present a series of simulations and application examples demonstrating the advantages of LDHP over the typical OSL algorithm, Support Vector Regression (SVR), and the typical LRHP algorithm, the Extreme Learning Machine (ELM).
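To make the two-stage scheme concrete, the following is a minimal Python/NumPy sketch of a TSL learner for 1-D regression, assuming a hypothesis space of the form f(x) = sum_k a_k * sigma(w_k * x + b_k). The function names (fit_tsl, predict), the tanh activation, and the equispaced biases used as a stand-in for the best packing points in an interval are illustrative assumptions, not the talk's actual construction; in particular, the minimal Riesz energy construction on a sphere is not reproduced here.

import numpy as np

def sigma(t):
    return np.tanh(t)  # a smooth sigmoidal activation (illustrative choice)

def fit_tsl(x, y, n_hidden, deterministic=True, rng=None):
    """Stage 1: assign hidden parameters (w_k, b_k); Stage 2: least squares."""
    if deterministic:
        # LDHP-like stand-in: equispaced biases with fixed unit weights,
        # approximating "best packing points in an interval" in 1-D.
        w = np.ones(n_hidden)
        b = np.linspace(-1.0, 1.0, n_hidden)
    else:
        # LRHP / ELM-style random assignment of hidden parameters.
        rng = rng or np.random.default_rng(0)
        w = rng.uniform(-1.0, 1.0, n_hidden)
        b = rng.uniform(-1.0, 1.0, n_hidden)
    H = sigma(np.outer(x, w) + b)              # hidden-layer design matrix
    a, *_ = np.linalg.lstsq(H, y, rcond=None)  # bright (output) parameters
    return w, b, a

def predict(x, w, b, a):
    return sigma(np.outer(x, w) + b) @ a

# Usage: fit a noisy sine and compare the two assignment schemes.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-1, 1, 200))
y = np.sin(np.pi * x) + 0.05 * rng.standard_normal(200)
for det in (True, False):
    w, b, a = fit_tsl(x, y, n_hidden=20, deterministic=det, rng=rng)
    err = np.mean((predict(x, w, b, a) - y) ** 2)
    print(("LDHP-like" if det else "LRHP-like"), "train MSE:", err)

Note that the two schemes differ only in the first stage; the second stage is the same linear least squares problem in both cases, which is what makes TSL computationally cheaper than OSL.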

Biography: 林绍波 is a Ph.D. candidate at the School of Mathematics and Statistics, Xi'an Jiaotong University, supervised by Academician 徐宗本 (Zongben Xu). His research areas are function approximation theory and statistical learning theory; his current interest is applying the width theory of function approximation to supervised learning and, on that basis, proposing and developing new learning systems. He has published more than 20 papers in renowned journals at home and abroad, including Machine Learning, Journal of Approximation Theory, and 中国科学 (Science China), 20 of which are indexed by SCI.
