Ziniu Li
About me

I am a Ph.D. student at The Chinese University of Hong Kong, Shenzhen (CUHKSZ), advised by Prof. Zhi-Quan (Tom) Luo. I am interested in artificial intelligence, especially reinforcement learning and large language models. I have worked or interned at Tencent, Nanjing University, Cardinal Operations, etc. My curriculum vitae can be downloaded from here. Feel free to contact me if you want to discuss some ideas.

Recent Highlights

*: indicates equal contribution or alphabetical ordering.

Policy Optimization in RLHF: The Impact of Out-of-preference Data
TL;DR: This work analyzes policy optimization errors in RLHF and shows that out-of-preference data is important for RL algorithms such as PPO and ReMax.

ReMax: A Simple, Effective, and Efficient Reinforcement Learning Method for Aligning Large Language Models
TL;DR: This work develops an RL method called ReMax, which is simpler (6 lines of code) and more efficient (less memory and faster training) than PPO when used in RLHF.

Imitation Learning from Imperfection: Theoretical Justifications and Algorithms
TL;DR: This work validates that importance sampling is effective for data selection when leveraging multiple imperfect (out-of-distribution and low-quality) data sources.

Provably Efficient Adversarial Imitation Learning with Unknown Transitions
TL;DR: This work addresses the online sample efficiency issue of adversarial imitation learning by bridging it with reward-free exploration.

Understanding Adversarial Imitation Learning in Small Sample Regime: A Stage-coupled Analysis
TL;DR: This work presents the first horizon-free sample complexity bound for adversarial imitation learning (AIL), providing an answer to the open question "why does AIL outperform BC by a wide margin, particularly in the low-data regime?" (raised in the CoRL 2019 best paper by Ghasemipour et al.).

Service

Reviewer: NeurIPS (Top Reviewer), ICML (Outstanding Reviewer), ICLR (Highlighted Reviewer).

Teaching Assistant
Lecturer
Award