Page Revision: 2018/04/28 16:06


Chinese name (中文简历 / CV in Chinese)
Yang Yu (Y. Yu)
Can be pronounced as "young you"
Ph.D., Associate Professor
LAMDA Group
Department of Computer Science
National Key Laboratory for Novel Software Technology
Nanjing University

Office: 311, Computer Science Building, Xianlin Campus
email: ,

I received my Ph.D. degree in Computer Science from Nanjing University in 2011 (supervisor: Prof. Zhi-Hua Zhou), and then joined the LAMDA Group (LAMDA Publications) in the Department of Computer Science and Technology of Nanjing University, as an assistant researcher from 2011 and as an associate professor from 2014.

My research interest is in machine learning, a sub-field of artificial intelligence. Currently, I am working on various aspects of reinforcement learning, including optimization, representation, and transfer. For more information, please see my CV. (Detailed CV | CV in PDF)

Recent Update

  • Neuron & Logic: Our recent paper connects neural perception and logical reasoning through abductive learning.
  • Tutorial: We will give a tutorial on Pareto Optimization for Subset Selection at WCCI 2018.
  • ZOOpt: A Python package for derivative-free optimization. Release 0.2.
  • AWRL: We held a successful 2nd Asian Workshop on Reinforcement Learning.

Research



Full Publication List >>>

Codes

LAMDA codes: http://lamda.nju.edu.cn/Data.ashx
GitHub: https://github.com/eyounx?tab=repositories

Selected Work


  • Pareto optimization (with Chao Qian and Zhi-Hua Zhou)
    Pareto optimization originates from evolutionary algorithms. It has been shown to be a powerful approximate solver for constrained optimization problems over finite discrete domains, particularly the subset selection problem. We apply Pareto optimization to learning tasks such as sparse regression, achieving both theoretical approximation guarantees and strong empirical results.
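The idea above can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the published implementation: it evolves a population of non-dominated binary masks under the bi-objective (objective value, subset size), and the toy max-coverage task, function names, and parameters below are all assumptions chosen for the example.

```python
import random

def poss(objective, n, k, budget, seed=0):
    """Minimize objective(mask) over binary masks with at most k ones,
    via Pareto optimization: treat (objective value, subset size) as a
    bi-objective problem and evolve a population of non-dominated masks."""
    rng = random.Random(seed)
    empty = (0,) * n
    pop = {empty: (objective(empty), 0)}      # mask -> (f, size)
    for _ in range(budget):
        parent = rng.choice(list(pop))
        # mutate: flip each bit independently with probability 1/n
        child = tuple(b ^ (rng.random() < 1.0 / n) for b in parent)
        size = sum(child)
        if size >= 2 * k:                     # prune clearly oversized masks
            continue
        fit = (objective(child), size)
        # skip the child if an existing mask is at least as good on both goals
        if any(f <= fit[0] and s <= fit[1] for f, s in pop.values()):
            continue
        # otherwise drop every mask the child weakly dominates, keep the child
        pop = {m: v for m, v in pop.items()
               if not (fit[0] <= v[0] and fit[1] <= v[1])}
        pop[child] = fit
    # best feasible mask: smallest objective among masks with size <= k
    return min((f, m) for m, (f, s) in pop.items() if s <= k)[1]

# Toy max-coverage task: pick at most 3 of these sets to cover {0,...,7};
# the objective counts uncovered elements (to be minimized).
universe = set(range(8))
candidates = [{0, 1, 2}, {2, 3}, {4, 5}, {5, 6, 7}, {0, 4}, {6}]

def uncovered(mask):
    covered = set().union(*(s for s, b in zip(candidates, mask) if b))
    return len(universe - covered)

best = poss(uncovered, n=len(candidates), k=3, budget=1000)
```

The subset-size objective is what distinguishes this from a plain constrained search: keeping smaller masks alive on the Pareto front gives the search stepping stones toward good larger subsets.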





  • The role of diversity in ensemble learning (with Nan Li, Yu-Feng Li and Zhi-Hua Zhou)
    Ensemble learning is a machine learning paradigm that achieves state-of-the-art performance. Diversity has long been believed to be key to the success of an ensemble, yet it previously served only as a heuristic. We show that diversity can play the role of regularization.
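The regularization result itself is beyond a snippet, but as a rough illustration of the quantity involved, pairwise disagreement is one common way to measure ensemble diversity. The function name and the toy predictions below are hypothetical, not from the paper:

```python
def pairwise_disagreement(predictions):
    """Average fraction of examples on which two ensemble members
    predict different labels; 0 means all members agree everywhere."""
    m = len(predictions)
    total, pairs = 0.0, 0
    for i in range(m):
        for j in range(i + 1, m):
            diff = sum(a != b for a, b in zip(predictions[i], predictions[j]))
            total += diff / len(predictions[i])
            pairs += 1
    return total / pairs

# Toy labels from three hypothetical classifiers on five examples
ensemble = [
    [1, 0, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 1],
]
div = pairwise_disagreement(ensemble)
```

Reading diversity as regularization says, informally, that encouraging such disagreement constrains the combined hypothesis in the same way a regularizer constrains model complexity.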



(My Google Scholar Citations)

Teaching

  • Artificial Intelligence. (for undergraduate students. Spring, 2018) >>>Course Page>>>
  • Advanced Machine Learning. (for graduate students. Fall, 2017)
  • Artificial Intelligence. (for undergraduate students. Spring, 2015, 2016, 2017)
  • Data Mining. (for M.Sc. students. Fall, 2014, 2013, 2012)
  • Digital Image Processing. (for undergraduate students from Dept. Math., Spring, 2014, 2013)
  • Introduction to Data Mining. (for undergraduate students. Spring, 2013, 2012)

Students



Mail:
National Key Laboratory for Novel Software Technology, Nanjing University, Xianlin Campus Mailbox 603, 163 Xianlin Avenue, Qixia District, Nanjing 210023, China
(In Chinese:) 南京市栖霞区仙林大道163号,南京大学仙林校区603信箱,软件新技术国家重点实验室,210023。
