Lu Han (韩路)
I am currently a Ph.D. candidate in the School of Artificial Intelligence at Nanjing University and a member of the LAMDA Group (LAMDA Publications), led by Professor Zhi-Hua Zhou. My supervisor is Associate Professor De-Chuan Zhan.
I received my B.Sc. degree from Wuhan University in June 2019. In the same year, I was admitted to the M.Sc. program at Nanjing University, exempt from the entrance examination.
I received my M.Sc. degree from the School of Artificial Intelligence, Nanjing University, in June 2022. In the same year, I was admitted to the Ph.D. program in the School of Artificial Intelligence at Nanjing University, also within the LAMDA Group led by Prof. Zhi-Hua Zhou, under the supervision of Prof. De-Chuan Zhan and Prof. Han-Jia Ye. My research mainly focuses on machine learning, especially:
Meta-Learning
Meta-learning, or learning to learn, aims at extracting meta-knowledge from previous tasks and reusing it in new tasks. It can be applied to few-shot learning, federated learning, hyperparameter setting, and other related areas.
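To make the inner/outer loop concrete, here is a minimal first-order MAML-style sketch in PyTorch; the toy regression tasks, network architecture, and learning rates are illustrative assumptions rather than code from any of my papers.

```python
import copy
import torch
import torch.nn as nn

def sample_task():
    # Hypothetical toy regression task: y = a * x + b with random (a, b) per task.
    a, b = torch.randn(2)
    x_support, x_query = torch.randn(10, 1), torch.randn(10, 1)
    return (x_support, a * x_support + b), (x_query, a * x_query + b)

meta_model = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
meta_opt = torch.optim.Adam(meta_model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
inner_lr = 1e-2

for step in range(1000):
    (xs, ys), (xq, yq) = sample_task()

    # Inner loop: adapt a task-specific copy of the model on the support set.
    learner = copy.deepcopy(meta_model)
    inner_opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
    inner_opt.zero_grad()
    loss_fn(learner(xs), ys).backward()
    inner_opt.step()

    # Outer loop (first-order approximation): evaluate the adapted copy on the
    # query set and apply its gradients to the meta-parameters.
    query_loss = loss_fn(learner(xq), yq)
    grads = torch.autograd.grad(query_loss, learner.parameters())
    meta_opt.zero_grad()
    for p, g in zip(meta_model.parameters(), grads):
        p.grad = g
    meta_opt.step()
```

The meta-parameters are trained so that a single adaptation step on a new task's support set already yields low error on its query set, which is the sense in which meta-knowledge is extracted and reused.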
Contrastive Learning
Contrastive learning methods maximize the agreement between positive pairs and minimize the agreement between negative pairs. It has become one of the most popular approaches to self-supervised learning and representation learning, and is a fundamental technique behind many pre-trained models such as CLIP.
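As a concrete example, here is a minimal InfoNCE (NT-Xent) loss sketch in PyTorch, where the two inputs are embeddings of two augmented views of the same batch; the batch size, embedding dimension, and temperature below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.5):
    # z1[i] and z2[i] are embeddings of two augmented views of sample i;
    # every other sample in the batch serves as a negative.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)            # (2N, d)
    sim = z @ z.t() / temperature             # scaled cosine similarities
    sim.fill_diagonal_(float('-inf'))         # a view is never its own negative
    n = z1.size(0)
    # The positive for view i sits in the other half of the concatenated batch.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage: encode two augmentations of the same batch, then minimize the loss.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = info_nce_loss(z1, z2)
```

Minimizing this loss maximizes agreement between the two views of each sample (the positive pair) while pushing apart all other pairs in the batch (the negatives).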
Semi-Supervised Learning
Semi-supervised learning is a broad category of machine learning that uses labeled data to ground predictions and unlabeled data to learn the shape of the underlying data distribution. Practitioners can achieve strong results with only a fraction of the labels, saving valuable annotation time and money.
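For illustration, here is a minimal pseudo-labeling sketch in PyTorch that adds a loss on confidently pseudo-labeled unlabeled data to the usual supervised loss; the confidence threshold, toy model, and data shapes are assumptions made for the example and do not describe any specific published method.

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, x_labeled, y_labeled, x_unlabeled, threshold=0.95):
    # Supervised loss on the labeled data.
    sup_loss = F.cross_entropy(model(x_labeled), y_labeled)

    # Pseudo-labels: the model's own confident predictions on unlabeled data.
    with torch.no_grad():
        probs = F.softmax(model(x_unlabeled), dim=1)
        conf, pseudo_y = probs.max(dim=1)
        mask = (conf >= threshold).float()   # keep only confident predictions

    logits_u = model(x_unlabeled)
    unsup_loss = (F.cross_entropy(logits_u, pseudo_y, reduction='none') * mask).mean()
    return sup_loss + unsup_loss

# Usage with a toy linear classifier on random data.
model = torch.nn.Linear(20, 10)
x_l, y_l = torch.randn(16, 20), torch.randint(0, 10, (16,))
x_u = torch.randn(64, 20)
loss = pseudo_label_loss(model, x_l, y_l, x_u)
```

The unlabeled term only contributes where the model is confident, so the unlabeled data gradually shapes the decision boundary toward low-density regions of the distribution.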
Traditional contrastive learning methods pull different views of the same sample together and push different samples apart, which exploits the semantic invariance of augmentations but ignores the relationships between samples. In this paper, we propose Augmentation Component Analysis (ACA), which theoretically preserves the similarity of augmentation distributions between samples and helps learn semantically meaningful embeddings.
In this paper, we empirically analyze Pseudo-Labeling (PL) in class-mismatched semi-supervised learning (SSL), identify an imbalance problem, and show better strategies for pseudo-labeling. Based on these findings, we propose to improve PL in class-mismatched SSL with two components: Re-balanced Pseudo-Labeling (RPL) and Semantic Exploration Clustering (SEC).
Unsupervised meta-learning for few-shot classification without any base-class labels. We propose strong unsupervised baselines that outperform some supervised counterparts.
Email:
hanlu@lamda.nju.edu.cn, hanlu198004@163.com
Office:
Room A201, Shaoyifu Building, Xianlin Campus of Nanjing University
Address:
Lu Han, National Key Laboratory for Novel Software Technology, Nanjing University, Xianlin Campus Mailbox 603, 163 Xianlin Avenue, Qixia District, Nanjing 210023, China
(南京市栖霞区仙林大道163号, 南京大学仙林校区603信箱, 软件新技术国家重点实验室, 210023.)