Ziniu Li


Ph.D. student,
School of Data Science,
The Chinese University of Hong Kong, Shenzhen

Email: ziniuli@link.cuhk.edu.cn

[Twitter] [Zhihu]

About me

I am a Ph.D. student at The Chinese University of Hong Kong, Shenzhen (CUHKSZ), advised by Prof. Zhi-Quan (Tom) Luo.

I am interested in artificial intelligence, especially reinforcement learning and large language models.

I have worked or interned at Tencent, Nanjing University, and Cardinal Operations, among others.

My curriculum vitae can be downloaded from here.

Feel free to contact me if you would like to discuss ideas.

Recent Highlights

*: equal contribution or alphabetical ordering.

Adam-mini: Use Fewer Learning Rates To Gain More
Yushun Zhang, Congliang Chen, Ziniu Li, Tian Ding, Chenwei Wu, Yinyu Ye, Zhi-Quan Luo, Ruoyu Sun

TL;DR: This work develops a mini-version of Adam that cuts more than 90% of Adam's learning rates by exploiting the Hessian structure of LLMs.

On the Algorithmic Bias of Aligning Large Language Models with RLHF: Preference Collapse and Matching Regularization
Jiancong Xiao, Ziniu Li, Xingyu Xie, Emily Getzen, Cong Fang, Qi Long, Weijie J. Su

TL;DR: This work identifies the preference collapse phenomenon in RLHF and addresses it with a matching regularization.

ReMax: A Simple, Effective, and Efficient Reinforcement Learning Method for Aligning Large Language Models
Ziniu Li, Tian Xu, Yushun Zhang, Zhihang Lin, Yang Yu, Ruoyu Sun, Zhi-Quan Luo
The 41st International Conference on Machine Learning (ICML), 2024

TL;DR: This work develops an RL method called ReMax for RLHF in LLMs, which is simple (about 6 lines of code) and efficient (lower memory usage and faster training).
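The core idea behind ReMax, REINFORCE with the reward of the greedy response as a baseline, can be sketched on a toy one-step policy. The names below (logits, rewards) are illustrative stand-ins, not the paper's actual code:

```python
import math
import random

# Toy sketch of a ReMax-style update on a one-step policy.
# Names (logits, rewards) are illustrative, not the paper's actual code.
random.seed(0)
logits = [0.0, 0.0, 0.0]   # policy logits over 3 candidate responses
rewards = [0.2, 0.9, 0.4]  # stand-in for reward-model scores

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

probs = softmax(logits)
sampled = random.choices(range(len(probs)), weights=probs)[0]  # sampled response
greedy = max(range(len(probs)), key=lambda i: probs[i])        # greedy response
# The greedy response's reward serves as a baseline, reducing gradient variance
advantage = rewards[sampled] - rewards[greedy]
# REINFORCE gradient of advantage * log pi(sampled) with respect to the logits
grad = [advantage * ((1.0 if i == sampled else 0.0) - probs[i])
        for i in range(len(probs))]
```

Because the baseline requires only one extra greedy rollout and no learned value network, the method avoids the critic model that PPO-style RLHF pipelines maintain.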

When is RL better than DPO in RLHF? A Representation and Optimization Perspective
Ziniu Li*, Tian Xu*, Yang Yu
Oral Presentation, The 12th International Conference on Learning Representations (ICLR) (Tiny Paper Track), 2024

TL;DR: This work analyzes reward-modeling quality from a representation perspective and examines the sources of optimization error.

Why Transformers Need Adam: A Hessian Perspective
Yushun Zhang, Congliang Chen, Tian Ding, Ziniu Li, Ruoyu Sun, Zhi-Quan Luo

TL;DR: This work studies the Hessian spectrum of LLMs and shows that its heterogeneity is the key reason adaptive learning rates in Adam are needed.

Imitation Learning from Imperfection: Theoretical Justifications and Algorithms
Ziniu Li*, Tian Xu*, Zeyu Qin, Yang Yu, Zhi-Quan Luo
Spotlight Presentation (acceptance rate < 5%), Advances in Neural Information Processing Systems (NeurIPS) 37, 2023

TL;DR: This work shows that importance sampling is effective for data selection when leveraging multiple imperfect (out-of-distribution or low-quality) data sources.



Academic Service

Reviewer: NeurIPS (Top Reviewer), ICML (Outstanding Reviewer), ICLR (Highlighted Reviewer).

Teaching Assistant

  • DDA6111: Discrete Optimization. 2022 Spring @ CUHKSZ

  • DDA6060: Machine Learning. 2023 Spring @ CUHKSZ

  • FTE4560: Basic Machine Learning. 2021 Spring @ CUHKSZ

  • CSC4120: Design and Analysis of Algorithms. 2022 Fall, 2021 Fall @ CUHKSZ

  • MAT3007: Introduction to Optimization. 2020 Fall @ CUHKSZ


  • Machine Learning (Summer Course for Senior High School Students) @ X ACADEMY 2022 TechX


Honors and Awards

  • [2024-01] Runner-up, poster presentation award at the third doctoral and postdoctoral forum of Shenzhen Research Institute of Big Data (5,000 CNY)

  • [2023-12] Guo-Tai-Jun-An Scholarship (20,000 CNY)

  • [2021-04] Best oral presentation award at the first doctoral and postdoctoral forum of Shenzhen Research Institute of Big Data (5,000 CNY)