Yang Yu @ NJUCS


(中文简历 / Chinese CV)
Yang Yu (Y. Yu)
Can be pronounced as "young you"
Ph.D., Associate Professor
LAMDA Group
Department of Computer Science
National Key Laboratory for Novel Software Technology
Nanjing University

Office: 311, Computer Science Building, Xianlin Campus
email:

I received my Ph.D. degree in Computer Science from Nanjing University in 2011 (supervisor: Prof. Zhi-Hua Zhou). I then joined the LAMDA Group (LAMDA Publications), Department of Computer Science and Technology, Nanjing University, as an assistant researcher in 2011, and became an associate professor in 2014.

I am interested in artificial intelligence, particularly in applying theoretically grounded evolutionary optimization to solve machine learning problems. Our research has been published in international journals and conferences including Artificial Intelligence, IJCAI, KDD, etc., and has received several awards, such as the Grand Champion (with other LAMDA members) of the PAKDD'06 Data Mining Competition, the Best Paper Award of PAKDD'08, and the Best Theory Paper Award of GECCO'11. My dissertation received honors including the China National Outstanding Doctoral Dissertation Award in 2013 and the China Computer Federation Outstanding Doctoral Dissertation Award in 2011.

(Detailed CV)

Recent Update


Research

I am working towards the goal of artificial intelligence. As conceived by Alan Turing, one possible way of building an intelligent machine is to evolve learning machines; this idea now falls into multiple subfields of artificial intelligence, especially machine learning and evolutionary computation. Currently I focus on evolutionary machine learning, which applies theoretically grounded evolutionary optimization to solve complex, non-convex problems in machine learning and reinforcement learning.

Full Publication List >>>

Selected work:

(Evolutionary Machine Learning)
  • Pareto optimization for learning: (with Chao Qian and Zhi-Hua Zhou)
    The study of the approximation ability of evolutionary optimization discloses that a class of evolutionary algorithms are good approximate solvers, which we name Pareto optimization. (A short illustrative sketch follows this list.)
    • Pareto optimization for constrained optimization: For constrained optimization tasks, we proved, for a large class of P-solvable problems and a large class of NP-hard problems, that Pareto optimization has a better worst-case performance than a commonly employed penalty method (PDF-ijcai15).
    • Pareto ensemble pruning: We adapted Pareto optimization to the NP-hard ensemble pruning problem, which searches for the best subset of a set of base classifiers. We show both theoretically and empirically that the proposed Pareto ensemble pruning is superior to the state of the art (PDF-aaai15).
    • Pareto subset selection: We adapted Pareto optimization to the NP-hard subset selection problem. On sparse regression, a typical subset selection problem, we show both theoretically and empirically that the proposed Pareto subset selection is superior to the state of the art (PDF-nips15).
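
To make the idea concrete, here is a minimal Python sketch in the spirit of the Pareto optimization procedure above: the constrained problem is recast as a bi-objective one over (loss, subset size), and a simple evolutionary loop maintains an archive of non-dominated solutions. The function names, the infeasibility threshold, and the defaults are illustrative assumptions, not code from the papers.

    import random

    def pareto_subset_selection(f, n, k, iters=10000):
        """Illustrative Pareto optimization for subset selection.
        f: loss to minimize, taking a list of selected indices;
        n: number of items; k: size budget."""
        def objectives(x):
            size = sum(x)
            # Bi-objective reformulation: (loss, subset size), both minimized.
            # Solutions with 2*k or more items are treated as infeasible,
            # which also keeps the archive small.
            loss = float('inf') if size >= 2 * k else f([i for i in range(n) if x[i]])
            return (loss, size)

        def dominates(a, b):  # a is no worse in both objectives and not equal
            return a[0] <= b[0] and a[1] <= b[1] and a != b

        archive = {tuple([0] * n): objectives([0] * n)}  # start from the empty set
        for _ in range(iters):
            parent = random.choice(list(archive))
            # bit-wise mutation: flip each bit independently with probability 1/n
            child = tuple(b ^ (random.random() < 1.0 / n) for b in parent)
            obj = objectives(child)
            if not any(dominates(o, obj) for o in archive.values()):
                # keep the child; drop archived solutions it dominates
                archive = {x: o for x, o in archive.items() if not dominates(obj, o)}
                archive[child] = obj
        feasible = [(o[0], x) for x, o in archive.items() if o[1] <= k]
        return min(feasible)[1] if feasible else None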

(Evolutionary Optimization)
  • The sampling-and-learning (SAL) framework: (with Hong Qian)
    Most evolutionary algorithms share a common structure, which has been argued to consist of a sampling stage and a model-building stage. In (PDF-cec14), we developed the sampling-and-learning (SAL) framework to capture the essence of these algorithms, and analyzed its approximation performance under a given probability. Focusing on the sampling-and-classification (SAC) algorithms, which instantiate the learning subroutine of SAL as binary classification, we derived a general performance bound with the help of learning theory. We also disclosed conditions under which SAC algorithms achieve polynomial and super-polynomial speedup over uniform random search. (A small illustrative sketch follows.)
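
As a concrete, deliberately simplified instance of the SAC structure, the sketch below labels the best fraction of each sample as positive, "learns" an axis-aligned bounding box of those points as the binary classifier, and biases the next round of sampling towards it; the framework itself allows any binary classifier. All names and defaults are my own illustration.

    import random

    def sac_minimize(f, dim, bounds, n_iter=100, n_sample=100, top_ratio=0.2, explore=0.1):
        """Illustrative sampling-and-classification (SAC) style minimizer."""
        lo, hi = bounds
        box = [(lo, hi)] * dim                      # initial positive region: whole space
        best_x, best_val = None, float('inf')
        for _ in range(n_iter):
            pop = []
            for _ in range(n_sample):
                # sample from the learned positive region, mixed with uniform exploration
                region = [(lo, hi)] * dim if random.random() < explore else box
                pop.append([random.uniform(a, b) for a, b in region])
            pop.sort(key=f)
            if f(pop[0]) < best_val:
                best_x, best_val = pop[0], f(pop[0])
            good = pop[:max(1, int(top_ratio * n_sample))]  # positively labeled samples
            # "learning" step: fit the classifier, here a bounding box of the good points
            box = [(min(p[i] for p in good), max(p[i] for p in good)) for i in range(dim)]
        return best_x, best_val

    # e.g., minimize the sphere function on [-5, 5]^2
    x, v = sac_minimize(lambda x: sum(xi * xi for xi in x), dim=2, bounds=(-5.0, 5.0))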

  • Approximation ability of evolutionary optimization: (with Chao Qian, Xin Yao and Zhi-Hua Zhou)
    Evolutionary algorithms are most commonly used to obtain "good-enough" solutions in practice, which relates to their approximation ability.
    • SEIP analysis framework: We developed the SEIP framework (PDF-aij12) to characterize the approximation ability of evolutionary algorithms, leading to a general approximation guarantee. On the k-set cover problem, it achieves the currently best-achievable approximation ratio, revealing an advantage of evolutionary algorithms over the well-known greedy algorithm.
    • Crossover is helpful: We disclosed that the crossover operator can help fill the Pareto front in multi-objective optimization tasks. As a consequence, on the bi-objective minimum spanning tree problem, a multi-objective EA with a crossover operator is proven to reach a 2-approximate solution in less running time than without crossover (PDF-aij13).

  • General approaches to running time analysis of evolutionary optimization: (with Chao Qian and Zhi-Hua Zhou)
    The running time, or the computational complexity, of evolutionary algorithms and other metaheuristic algorithms is one of the most important theoretical issues for understanding these algorithms.
    • Switch analysis: We developed the switch analysis (PDF-tec15) for the running time analysis of evolutionary algorithms. It compares two algorithm processes (modeled as two Markov chains) and relates their expected running times.
      • The drift analysis and the fitness level method are reducible to switch analysis (PDF-tec15), and our convergence-based approach is also reducible to it (PDF-cec15). (A worked example of the fitness level method is given after this list.)
      • Using switch analysis to compare problem instances, we identified the hardest and the easiest problem instances for a metaheuristic algorithm in a problem class (PDF-tec15, PDF-ppsn12), demonstrating the use of switch analysis for problem class-wise analysis.
      • We applied switch analysis to analyzing crossover operators (tr11, PDF-ppsn10) that can lead to tight running time bounds.
    • Convergence-based approach: We developed the convergence-based approach (PDF-aij08), which can be applied to obtain running time bounds for a wide range of metaheuristic algorithms. We applied this tool to disclose that infeasible solutions in constrained optimization may not be useless (PDF-cec08).
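
For a flavor of what these tools compute, below is the textbook fitness level argument (one of the methods shown above to be reducible to switch analysis), worked out on a standard example; it is given for illustration and is not taken from the papers above.

    Let the search space be partitioned into levels $A_0, \dots, A_m$ of increasing
    fitness, and let $s_i$ lower-bound the probability of jumping from $A_i$ to a
    better level. The fitness level method then gives
        $E[T] \le \sum_{i=0}^{m-1} 1/s_i$.
    For the (1+1)-EA maximizing OneMax ($f(x) = \sum_j x_j$ over $x \in \{0,1\}^n$),
    a solution with $i$ one-bits improves whenever exactly one of its $n-i$ zero-bits
    flips, so
        $s_i \ge (n-i) \cdot \frac{1}{n} (1 - \frac{1}{n})^{n-1} \ge \frac{n-i}{e n}$,
    and therefore
        $E[T] \le \sum_{i=0}^{n-1} \frac{e n}{n-i} = e n H_n = O(n \log n)$.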

(Machine Learning)
  • Functional representation for nonlinear reinforcement learning: (with Yu Qian, Qing Da and Zhi-Hua Zhou)
    Reinforcement learning seeks a policy that receives the highest total reward from its environment. Real-world applications are often so complex that complex policies are appealing. Functional representation, known from the boosting techniques widely used in supervised learning, is a powerful tool for approximating complex functions, but it has been little investigated in reinforcement learning.
    • With functional representation, a policy can be a sum of many basis functions, which incurs a high time cost, especially in reinforcement learning. We tackle this time cost barrier with the napping mechanism, which results in a significant improvement in time as well as a potential improvement in total reward (PDF-aamas14 / code). (A loose sketch follows.)
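
The sketch below illustrates a functional (boosting-style) policy, where the action scorer is a sum of base functions grown one at a time; the "napping" shown, which evaluates only a random subset of the base functions per step, is a loose, hypothetical reading of the idea and not the mechanism from the paper.

    import random

    class FunctionalPolicy:
        """Illustrative boosting-style policy with a hypothetical napping step."""
        def __init__(self, nap_ratio=0.9):
            self.bases = []             # base functions h(state, action) -> float
            self.nap_ratio = nap_ratio  # fraction of bases put to sleep per step

        def add_base(self, h):
            self.bases.append(h)        # one functional-gradient (boosting) step

        def score(self, state, action, napping=True):
            if not napping or len(self.bases) < 10:
                return sum(h(state, action) for h in self.bases)
            # napping: evaluate a small random subset and rescale, trading a
            # little accuracy for a large cut in per-step evaluation time
            k = max(1, int((1 - self.nap_ratio) * len(self.bases)))
            subset = random.sample(self.bases, k)
            return sum(h(state, action) for h in subset) * len(self.bases) / k

        def act(self, state, actions):
            return max(actions, key=lambda a: self.score(state, a))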

  • The role of diversity in ensemble learning: (with Nan Li, Yu-Feng Li and Zhi-Hua Zhou)
    Ensemble learning is a machine learning paradigm that achieves state-of-the-art performance. Diversity has long been believed to be a key to the good performance of ensemble approaches; previously, however, it served only as a heuristic idea.
    • Diversity regularized machine: We showed that diversity plays the role of regularization, as in popular statistical learning approaches (PDF-ijcai11 / code).
    • Diversity regularized ensemble pruning: We proved that diversity defined on the hypothesis output space plays the role of regularization, and used this principle to prune Bagging classifiers (PDF-ecml12 / code). (A small illustrative sketch follows.)
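
To illustrate how diversity can act as a regularizer in pruning, here is a small hypothetical Python sketch: a greedy selector that trades off majority-vote error against mean pairwise output disagreement. Both the greedy scheme and the disagreement measure are illustrative choices, not the algorithms from the papers.

    import itertools

    def diversity_regularized_pruning(preds, y, k, lam=0.1):
        """Greedily pick k base classifiers, preferring accurate and diverse sets.
        preds: per-classifier lists of predictions; y: true labels."""
        n = len(y)

        def err(subset):   # majority-vote error of the selected classifiers
            wrong = 0
            for i in range(n):
                votes = [preds[c][i] for c in subset]
                wrong += max(set(votes), key=votes.count) != y[i]
            return wrong / n

        def div(subset):   # mean pairwise disagreement on the outputs
            pairs = list(itertools.combinations(subset, 2))
            if not pairs:
                return 0.0
            return sum(sum(preds[a][i] != preds[b][i] for i in range(n)) / n
                       for a, b in pairs) / len(pairs)

        chosen = []
        while len(chosen) < k:
            rest = [c for c in range(len(preds)) if c not in chosen]
            # regularized objective: error minus lam times diversity
            chosen.append(min(rest, key=lambda c: err(chosen + [c]) - lam * div(chosen + [c])))
        return chosen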

Codes


(My Google Scholar Citations)

Teaching

  • Artificial Intelligence. (for undergraduate students. Spring, 2015)
  • Data Mining. (for M.Sc. students. Fall, 2014, 2013, 2012)
  • Digital Image Processing. (for undergraduate students from Dept. Math., Spring, 2014, 2013)
  • Introduction to Data Mining. (for undergraduate students. Spring, 2013, 2012)

Students


Recruit >>>

  • Current Ph.D. students:
2015: Hong Qian

  • Current master's students:
2013: Jian Ma (Winner of the Alibaba TianMao 11.11 Competition One-Million Award)
2014: Peng-Fei Hou, Jing-Wen Yang
2015: Shi-Yong Chen, Yi-Qi Hu, Jing-Cheng Shi, Han Wang

  • Past master's students:
2012-2015: Yu Qian (now in Alibaba Co.)

  • Past co-supervised master students:
2011-2014: Xun-Li Guo (now in Alibaba Co.), Shuan Song (now in 360 Co.)

Mail:
National Key Laboratory for Novel Software Technology, Nanjing University, Xianlin Campus Mailbox 603, 163 Xianlin Avenue, Qixia District, Nanjing 210023, China
(In Chinese:) 南京市栖霞区仙林大道163号,南京大学仙林校区603信箱,软件新技术国家重点实验室,210023。
