Wei Jiang @ LAMDA, NJU-CS


姜伟
Wei Jiang
Ph.D. student, LAMDA Group
Department of Computer Science and Technology
National Key Laboratory for Novel Software Technology
Nanjing University, Nanjing 210023, China

Google Scholar


Supervisor

      Professor Lijun Zhang

Education

Research Interests

      My research interests include Machine Learning, Stochastic Optimization, and Online Learning.

Publications

  1. Adaptive Variance Reduction for Stochastic Optimization under Weaker Assumptions [arXiv]
    W. Jiang, S. Yang, Y. Wang, and L. Zhang
    In Advances in Neural Information Processing Systems 37 (NeurIPS 2024), to appear, 2024.

  2. Efficient Sign-Based Optimization: Accelerating Convergence via Variance Reduction [arXiv]
    W. Jiang, S. Yang, W. Yang, and L. Zhang
    In Advances in Neural Information Processing Systems 37 (NeurIPS 2024), to appear, 2024.

  3. Online Composite Optimization Between Stochastic and Adversarial Environments
    Y. Wang, S. Chen, W. Jiang, W. Yang, Y. Wan, and L. Zhang
    In Advances in Neural Information Processing Systems 37 (NeurIPS 2024), to appear, 2024.

  4. Projection-Free Variance Reduction Methods for Stochastic Constrained Multi-Level Compositional Optimization [PDF]
    W. Jiang, S. Yang, W. Yang, Y. Wang, Y. Wan, and L. Zhang
    In Proceedings of the 41st International Conference on Machine Learning (ICML 2024), pages 21962–21987, 2024.

  5. Small-loss Adaptive Regret for Online Convex Optimization [PDF]
    W. Yang, W. Jiang, Y. Wang, P. Yang, Y. Hu, and L. Zhang
    In Proceedings of the 41st International Conference on Machine Learning (ICML 2024), pages 56156–56195, 2024.

  6. Efficient Algorithms for Empirical Group Distributionally Robust Optimization and Beyond [PDF]
    D. Yu, Y. Cai, W. Jiang, and L. Zhang
    In Proceedings of the 41st International Conference on Machine Learning (ICML 2024), pages 57384–57414, 2024.

  7. Non-stationary Projection-free Online Learning with Dynamic and Adaptive Regret Guarantees [PDF, arXiv]
    Y. Wang, W. Yang, W. Jiang, S. Lu, B. Wang, H. Tang, Y. Wan, and L. Zhang
    In Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI 2024), pages 15671–15679, 2024.

  8. Learning Unnormalized Statistical Models via Compositional Optimization [PDF]
    W. Jiang, J. Qin, L. Wu, C. Chen, T. Yang, and L. Zhang
    In Proceedings of the 40th International Conference on Machine Learning (ICML 2023), pages 15105–15124, 2023.

  9. Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization [PDF, Supplementary]
    W. Jiang, G. Li, Y. Wang, L. Zhang, and T. Yang
    In Advances in Neural Information Processing Systems 35 (NeurIPS 2022), pages 32499–32511, 2022.

  10. Smoothed Online Convex Optimization Based on Discounted-Normal-Predictor [PDF, Supplementary]
    L. Zhang, W. Jiang, J. Yi, and T. Yang
    In Advances in Neural Information Processing Systems 35 (NeurIPS 2022), pages 4928–4942, 2022.

  11. Optimal Algorithms for Stochastic Multi-Level Compositional Optimization [PDF]
    W. Jiang, B. Wang, Y. Wang, L. Zhang, and T. Yang
    In Proceedings of the 39th International Conference on Machine Learning (ICML 2022), pages 10195–10216, 2022.

  12. Revisiting Smoothed Online Learning [PDF, Supplementary]
    L. Zhang, W. Jiang, S. Lu, and T. Yang
    In Advances in Neural Information Processing Systems 34 (NeurIPS 2021), pages 13599–13612, 2021.

  13. Dual Adaptivity: A Universal Algorithm for Minimizing the Adaptive Regret of Convex Functions [PDF, Supplementary]
    L. Zhang, G. Wang, W.-W. Tu, W. Jiang, and Z.-H. Zhou
    In Advances in Neural Information Processing Systems 34 (NeurIPS 2021), pages 24968–24980, 2021.

Honors and Awards

Funding

Distributed Optimization of Compositional Loss Functions, Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX24_0231), 2024.05–2025.05.

Academic Service

Teaching Assistant

Correspondence