


Han Zhong 0001
Person information
- unicode name: 钟涵
- affiliation: Peking University, Center for Data Science, Beijing, China
Other persons with the same name
- Han Zhong — disambiguation page
- 2025
[j2]Shenao Zhang, Donghan Yu, Hiteshi Sharma, Han Zhong, Zhihan Liu, Ziyi Yang, Shuohang Wang, Hany Hassan Awadalla, Zhaoran Wang:
Self-Exploring Language Models: Active Preference Elicitation for Online Alignment. Trans. Mach. Learn. Res. 2025 (2025)
[i29]Han Zhong, Yutong Yin, Shenao Zhang, Xiaojun Xu, Yuanxin Liu, Yifei Zuo, Zhihan Liu, Boyi Liu, Sirui Zheng, Hongyi Guo, Liwei Wang, Mingyi Hong, Zhaoran Wang:
BRiTE: Bootstrapping Reinforced Thinking Process to Enhance Language Model Reasoning. CoRR abs/2501.18858 (2025)
[i28]Yuxuan Han, Han Zhong, Miao Lu, Jose H. Blanchet, Zhengyuan Zhou:
Learning an Optimal Assortment Policy under Observational Data. CoRR abs/2502.06777 (2025)
[i27]Jiachen Hu, Rui Ai, Han Zhong, Xiaoyu Chen, Liwei Wang, Zhaoran Wang, Zhuoran Yang:
The Sample Complexity of Online Strategic Decision Making with Information Asymmetry and Knowledge Transportability. CoRR abs/2506.09940 (2025)
- 2024
[c23]Jiayi Huang, Han Zhong, Liwei Wang, Lin Yang:
Horizon-Free and Instance-Dependent Regret Bounds for Reinforcement Learning with General Function Approximation. AISTATS 2024: 3673-3681
[c22]Rui Yang, Han Zhong, Jiawei Xu, Amy Zhang, Chongjie Zhang, Lei Han, Tong Zhang:
Towards Robust Offline Reinforcement Learning under Diverse Data Corruption. ICLR 2024
[c21]Han Zhong, Jiachen Hu, Yecheng Xue, Tongyang Li, Liwei Wang:
Provably Efficient Exploration in Quantum Reinforcement Learning with Logarithmic Worst-Case Regret. ICML 2024
[c20]Rui Yang, Xiaoman Pan, Feng Luo, Shuang Qiu, Han Zhong, Dong Yu, Jianshu Chen:
Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment. ICML 2024
[c19]Wei Xiong, Hanze Dong, Chenlu Ye, Ziqi Wang, Han Zhong, Heng Ji, Nan Jiang, Tong Zhang:
Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-constraint. ICML 2024
[c18]Guhao Feng, Han Zhong:
Rethinking Model-based, Policy-based, and Value-based Reinforcement Learning via the Lens of Representation Complexity. NeurIPS 2024
[c17]Miao Lu, Han Zhong, Tong Zhang, Jose H. Blanchet:
Distributionally Robust Reinforcement Learning with Interactive Data Collection: Fundamental Hardness and Near-Optimal Algorithms. NeurIPS 2024
[c16]Jiachen Hu, Tongyang Li, Xinzhao Wang, Yecheng Xue, Chenyi Zhang, Han Zhong:
Quantum Non-Identical Mean Estimation: Efficient Algorithms and Fundamental Limits. TQC 2024: 9:1-9:21
[i26]Rui Yang, Xiaoman Pan, Feng Luo, Shuang Qiu, Han Zhong, Dong Yu, Jianshu Chen:
Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment. CoRR abs/2402.10207 (2024)
[i25]Miao Lu, Han Zhong, Tong Zhang, Jose H. Blanchet:
Distributionally Robust Reinforcement Learning with Interactive Data Collection: Fundamental Hardness and Near-Optimal Algorithm. CoRR abs/2404.03578 (2024)
[i24]Han Zhong, Guhao Feng, Wei Xiong, Li Zhao, Di He, Jiang Bian, Liwei Wang:
DPO Meets PPO: Reinforced Token Optimization for RLHF. CoRR abs/2404.18922 (2024)
- 2023
[j1]Han Zhong, Zhuoran Yang, Zhaoran Wang, Michael I. Jordan:
Can Reinforcement Learning Find Stackelberg-Nash Equilibria in General-Sum Markov Games with Myopically Rational Followers? J. Mach. Learn. Res. 24: 35:1-35:52 (2023)
[c15]Wei Xiong, Han Zhong, Chengshuai Shi, Cong Shen, Liwei Wang, Tong Zhang:
Nearly Minimax Optimal Offline Reinforcement Learning with Linear Function Approximation: Single-Agent MDP and Markov Game. ICLR 2023
[c14]Jiachen Hu, Han Zhong, Chi Jin, Liwei Wang:
Provable Sim-to-real Transfer in Continuous Domain with Partial Observations. ICLR 2023
[c13]Han Zhong, Tong Zhang:
A Theoretical Analysis of Optimistic Proximal Policy Optimization in Linear Markov Decision Processes. NeurIPS 2023
[c12]Jose H. Blanchet, Miao Lu, Tong Zhang, Han Zhong:
Double Pessimism is Provably Efficient for Distributionally Robust Offline Reinforcement Learning: Generic Algorithm and Robust Partial Coverage. NeurIPS 2023
[c11]Jiayi Huang, Han Zhong, Liwei Wang, Lin Yang:
Tackling Heavy-Tailed Rewards in Reinforcement Learning with Function Approximation: Minimax Optimal and Instance-Dependent Regret Bounds. NeurIPS 2023
[c10]Zhihan Liu, Miao Lu, Wei Xiong, Han Zhong, Hao Hu, Shenao Zhang, Sirui Zheng, Zhuoran Yang, Zhaoran Wang:
Maximize to Explore: One Objective Function Fusing Estimation, Planning, and Exploration. NeurIPS 2023
[c9]Shuang Qiu, Ziyu Dai, Han Zhong, Zhaoran Wang, Zhuoran Yang, Tong Zhang:
Posterior Sampling for Competitive RL: Function Approximation and Partial Observation. NeurIPS 2023
[c8]Yunchang Yang, Han Zhong, Tianhao Wu, Bin Liu, Liwei Wang, Simon S. Du:
A Reduction-based Framework for Sequential Decision Making with Delayed Feedback. NeurIPS 2023
[i23]Yunchang Yang, Han Zhong, Tianhao Wu, Bin Liu, Liwei Wang, Simon S. Du:
A Reduction-based Framework for Sequential Decision Making with Delayed Feedback. CoRR abs/2302.01477 (2023)
[i22]Han Zhong, Jiachen Hu, Yecheng Xue, Tongyang Li, Liwei Wang:
Provably Efficient Exploration in Quantum Reinforcement Learning with Logarithmic Worst-Case Regret. CoRR abs/2302.10796 (2023)
[i21]Han Zhong, Tong Zhang:
A Theoretical Analysis of Optimistic Proximal Policy Optimization in Linear Markov Decision Processes. CoRR abs/2305.08841 (2023)
[i20]Jose H. Blanchet, Miao Lu, Tong Zhang, Han Zhong:
Double Pessimism is Provably Efficient for Distributionally Robust Offline Reinforcement Learning: Generic Algorithm and Robust Partial Coverage. CoRR abs/2305.09659 (2023)
[i19]Zhihan Liu, Miao Lu, Wei Xiong, Han Zhong, Hao Hu, Shenao Zhang, Sirui Zheng, Zhuoran Yang, Zhaoran Wang:
One Objective to Rule Them All: A Maximization Objective Fusing Estimation and Planning for Exploration. CoRR abs/2305.18258 (2023)
[i18]Jiayi Huang, Han Zhong, Liwei Wang, Lin F. Yang:
Tackling Heavy-Tailed Rewards in Reinforcement Learning with Function Approximation: Minimax Optimal and Instance-Dependent Regret Bounds. CoRR abs/2306.06836 (2023)
[i17]Rui Yang, Han Zhong, Jiawei Xu, Amy Zhang, Chongjie Zhang, Lei Han, Tong Zhang:
Towards Robust Offline Reinforcement Learning under Diverse Data Corruption. CoRR abs/2310.12955 (2023)
[i16]Shuang Qiu, Ziyu Dai, Han Zhong, Zhaoran Wang, Zhuoran Yang, Tong Zhang:
Posterior Sampling for Competitive RL: Function Approximation and Partial Observation. CoRR abs/2310.19861 (2023)
[i15]Jiayi Huang, Han Zhong, Liwei Wang, Lin F. Yang:
Horizon-Free and Instance-Dependent Regret Bounds for Reinforcement Learning with General Function Approximation. CoRR abs/2312.04464 (2023)
[i14]Wei Xiong, Hanze Dong, Chenlu Ye, Han Zhong, Nan Jiang, Tong Zhang:
Gibbs Sampling from Human Feedback: A Provable KL-constrained Framework for RLHF. CoRR abs/2312.11456 (2023)
- 2022
[c7]Yunchang Yang, Tianhao Wu, Han Zhong, Evrard Garcelon, Matteo Pirotta, Alessandro Lazaric, Liwei Wang, Simon Shaolei Du:
A Reduction-Based Framework for Conservative Bandits and Reinforcement Learning. ICLR 2022
[c6]Xiaoyu Chen, Han Zhong, Zhuoran Yang, Zhaoran Wang, Liwei Wang:
Human-in-the-loop: Provably Efficient Preference-based Reinforcement Learning with General Function Approximation. ICML 2022: 3773-3793
[c5]Tianhao Wu, Yunchang Yang, Han Zhong, Liwei Wang, Simon S. Du, Jiantao Jiao:
Nearly Optimal Policy Optimization with Stable at Any Time Guarantee. ICML 2022: 24243-24265
[c4]Wei Xiong, Han Zhong, Chengshuai Shi, Cong Shen, Tong Zhang:
A Self-Play Posterior Sampling Algorithm for Zero-Sum Markov Games. ICML 2022: 24496-24523
[c3]Han Zhong, Wei Xiong, Jiyuan Tan, Liwei Wang, Tong Zhang, Zhaoran Wang, Zhuoran Yang:
Pessimistic Minimax Value Iteration: Provably Efficient Equilibrium Learning from Offline Datasets. ICML 2022: 27117-27142
[c2]Binghui Li, Jikai Jin, Han Zhong, John E. Hopcroft, Liwei Wang:
Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power. NeurIPS 2022
[i13]Han Zhong, Wei Xiong, Jiyuan Tan, Liwei Wang, Tong Zhang, Zhaoran Wang, Zhuoran Yang:
Pessimistic Minimax Value Iteration: Provably Efficient Equilibrium Learning from Offline Datasets. CoRR abs/2202.07511 (2022)
[i12]Xiaoyu Chen, Han Zhong, Zhuoran Yang, Zhaoran Wang, Liwei Wang:
Human-in-the-loop: Provably Efficient Preference-based Reinforcement Learning with General Function Approximation. CoRR abs/2205.11140 (2022)
[i11]Binghui Li, Jikai Jin, Han Zhong, John E. Hopcroft, Liwei Wang:
Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power. CoRR abs/2205.13863 (2022)
[i10]Wei Xiong, Han Zhong, Chengshuai Shi, Cong Shen, Liwei Wang, Tong Zhang:
Nearly Minimax Optimal Offline Reinforcement Learning with Linear Function Approximation: Single-Agent MDP and Markov Game. CoRR abs/2205.15512 (2022)
[i9]Wei Xiong, Han Zhong, Chengshuai Shi, Cong Shen, Tong Zhang:
A Self-Play Posterior Sampling Algorithm for Zero-Sum Markov Games. CoRR abs/2210.01907 (2022)
[i8]Jiachen Hu, Han Zhong, Chi Jin, Liwei Wang:
Provable Sim-to-real Transfer in Continuous Domain with Partial Observations. CoRR abs/2210.15598 (2022)
[i7]Han Zhong, Wei Xiong, Sirui Zheng, Liwei Wang, Zhaoran Wang, Zhuoran Yang, Tong Zhang:
GEC: A Unified Framework for Interactive Decision Making in MDP, POMDP, and Beyond. CoRR abs/2211.01962 (2022)
- 2021
[c1]Han Zhong, Jiayi Huang, Lin Yang, Liwei Wang:
Breaking the Moments Condition Barrier: No-Regret Algorithm for Bandits with Super Heavy-Tailed Payoffs. NeurIPS 2021: 15710-15720
[i6]Yunchang Yang, Tianhao Wu, Han Zhong, Evrard Garcelon, Matteo Pirotta, Alessandro Lazaric, Liwei Wang, Simon S. Du:
A Unified Framework for Conservative Exploration. CoRR abs/2106.11692 (2021)
[i5]Han Zhong, Zhuoran Yang, Zhaoran Wang, Csaba Szepesvári:
Optimistic Policy Optimization is Provably Efficient in Non-stationary MDPs. CoRR abs/2110.08984 (2021)
[i4]Han Zhong, Jiayi Huang, Lin F. Yang, Liwei Wang:
Breaking the Moments Condition Barrier: No-Regret Algorithm for Bandits with Super Heavy-Tailed Payoffs. CoRR abs/2110.13876 (2021)
[i3]Tianhao Wu, Yunchang Yang, Han Zhong, Liwei Wang, Simon S. Du, Jiantao Jiao:
Nearly Optimal Policy Optimization with Stable at Any Time Guarantee. CoRR abs/2112.10935 (2021)
[i2]Han Zhong, Zhuoran Yang, Zhaoran Wang, Michael I. Jordan:
Can Reinforcement Learning Find Stackelberg-Nash Equilibria in General-Sum Markov Games with Myopic Followers? CoRR abs/2112.13521 (2021)
- 2020
[i1]Han Zhong, Ethan X. Fang, Zhuoran Yang, Zhaoran Wang:
Risk-Sensitive Deep RL: Variance-Constrained Actor-Critic Provably Finds Globally Optimal Policy. CoRR abs/2012.14098 (2020)
last updated on 2025-11-28 04:37 CET by the dblp team
all metadata released as open data under CC0 1.0 license