Tian Xu

Ph.D. student,
School of Artificial Intelligence,
Nanjing University
[Google Scholar] [CV]

Email: xut@lamda.nju.edu.cn

About me

Welcome to my homepage! I am a fifth-year Ph.D. student at Nanjing University (NJU), advised by Prof. Yang Yu.

My research interests lie in reinforcement learning theory and algorithms.

I am currently visiting the LIONS lab led by Prof. Volkan Cevher at EPFL.

I visited The Chinese University of Hong Kong, Shenzhen (CUHKSZ) from June to September 2021, where I was fortunate to be supervised by Prof. Zhi-Quan (Tom) Luo.

Selected Work

*: equal contribution.

  • Imitation Learning from Imperfection: Theoretical Justifications and Algorithms.
    Ziniu Li*, Tian Xu*, Zeyu Qin, Yang Yu, Zhi-Quan Luo
    In Advances in Neural Information Processing Systems 36 (NeurIPS, spotlight), 2023.
    (This work proposes ISW-BC, a data-selection-based method that addresses the distribution shift issue in imitation learning (IL) with imperfect demonstrations.
    We prove that ISW-BC is robust to out-of-distribution (OOD) samples and enjoys a small imitation gap bound.)

  • Rethinking ValueDice: Does It Really Improve Performance?
    Ziniu Li*, Tian Xu*, Yang Yu, Zhi-Quan Luo
    In Proceedings of the 10th International Conference on Learning Representations (ICLR, Blog Track), 2022.
    (This work establishes the first reduction of offline adversarial imitation learning (AIL) to behavioral cloning (BC), implying that AIL does not outperform BC in the offline setting.)

  • Error Bounds of Imitating Policies and Environments
    Tian Xu*, Ziniu Li*, Yang Yu
    In Advances in Neural Information Processing Systems 33 (NeurIPS), 2020.
    (This work presents a general analysis framework for imitation learning algorithms, based on which we prove for the first time that
    GAIL-type methods can overcome the compounding errors issue of BC in both imitating policies and imitating environments.)

Service

Reviewer

NeurIPS (2022, 2023), ICML (2022, 2023), ICLR (2024), AISTATS (2024), UAI (2021, 2022, 2023).

Awards

  • [2020-10] National Scholarship for Graduates.

  • [2020-8] Second place in the KDD Cup 2020 Reinforcement Learning Competition Track.

  • [2016-9, 2017-9] National Scholarship for Undergraduates.