


Zhaoran Wang

2020 – today

2023
- [j9] Chenjia Bai, Lingxiao Wang, Yixin Wang, Zhaoran Wang, Rui Zhao, Chenyao Bai, Peng Liu: Addressing Hindsight Bias in Multigoal Reinforcement Learning. IEEE Trans. Cybern. 53(1): 392-405 (2023)

2022
- [c101] Zehao Dou, Zhuoran Yang, Zhaoran Wang, Simon S. Du: Gap-Dependent Bounds for Two-Player Markov Games. AISTATS 2022: 432-455
- [c100] Yixuan Wang, Chao Huang, Zhaoran Wang, Zhilu Wang, Qi Zhu: Design-while-verify: correct-by-construction control learning with verification in the loop. DAC 2022: 925-930
- [c99] Chenjia Bai, Lingxiao Wang, Zhuoran Yang, Zhi-Hong Deng, Animesh Garg, Peng Liu, Zhaoran Wang: Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning. ICLR 2022
- [c98] Baihe Huang, Jason D. Lee, Zhaoran Wang, Zhuoran Yang: Towards General Function Approximation in Zero-Sum Markov Games. ICLR 2022
- [c97] Qi Cai, Zhuoran Yang, Zhaoran Wang: Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency. ICML 2022: 2485-2522
- [c96] Siyu Chen, Donglin Yang, Jiayang Li, Senmiao Wang, Zhuoran Yang, Zhaoran Wang: Adaptive Model Design for Markov Decision Process. ICML 2022: 3679-3700
- [c95] Xiaoyu Chen, Han Zhong, Zhuoran Yang, Zhaoran Wang, Liwei Wang: Human-in-the-loop: Provably Efficient Preference-based Reinforcement Learning with General Function Approximation. ICML 2022: 3773-3793
- [c94] Hongyi Guo, Qi Cai, Yufeng Zhang, Zhuoran Yang, Zhaoran Wang: Provably Efficient Offline Reinforcement Learning for Partially Observable Markov Decision Processes. ICML 2022: 8016-8038
- [c93] Zhihan Liu, Miao Lu, Zhaoran Wang, Michael I. Jordan, Zhuoran Yang: Welfare Maximization in Competitive Equilibrium: Reinforcement Learning for Markov Exchange Economy. ICML 2022: 13870-13911
- [c92] Zhihan Liu, Yufeng Zhang, Zuyue Fu, Zhuoran Yang, Zhaoran Wang: Learning from Demonstration: Provably Efficient Adversarial Policy Imitation with Linear Function Approximation. ICML 2022: 14094-14138
- [c91] Boxiang Lyu, Zhaoran Wang, Mladen Kolar, Zhuoran Yang: Pessimism meets VCG: Learning Dynamic Mechanism Design via Offline Reinforcement Learning. ICML 2022: 14601-14638
- [c90] Shuang Qiu, Lingxiao Wang, Chenjia Bai, Zhuoran Yang, Zhaoran Wang: Contrastive UCB: Provably Efficient Contrastive Self-Supervised Learning in Online Reinforcement Learning. ICML 2022: 18168-18210
- [c89] Han Zhong, Wei Xiong, Jiyuan Tan, Liwei Wang, Tong Zhang, Zhaoran Wang, Zhuoran Yang: Pessimistic Minimax Value Iteration: Provably Efficient Equilibrium Learning from Offline Datasets. ICML 2022: 27117-27142
- [c88] Shichao Xu, Yangyang Fu, Yixuan Wang, Zhuoran Yang, Zheng O'Neill, Zhaoran Wang, Qi Zhu: Accelerate online reinforcement learning for building HVAC control with heterogeneous expert guidances. BuildSys@SenSys 2022: 89-98
- [c87] Jibang Wu, Zixuan Zhang, Zhe Feng, Zhaoran Wang, Zhuoran Yang, Michael I. Jordan, Haifeng Xu: Sequential Information Design: Markov Persuasion Process and Its Efficient Reinforcement Learning. EC 2022: 471-472
- [i111] Yixuan Wang, Chao Huang, Zhaoran Wang, Zhuoran Yang, Qi Zhu: Joint Differentiable Optimization and Verification for Certified Reinforcement Learning. CoRR abs/2201.12243 (2022)
- [i110] Han Zhong, Wei Xiong, Jiyuan Tan, Liwei Wang, Tong Zhang, Zhaoran Wang, Zhuoran Yang: Pessimistic Minimax Value Iteration: Provably Efficient Equilibrium Learning from Offline Datasets. CoRR abs/2202.07511 (2022)
- [i109] Jibang Wu, Zixuan Zhang, Zhe Feng, Zhaoran Wang, Zhuoran Yang, Michael I. Jordan, Haifeng Xu: Sequential Information Design: Markov Persuasion Process and Its Efficient Reinforcement Learning. CoRR abs/2202.10678 (2022)
- [i108] Chenjia Bai, Lingxiao Wang, Zhuoran Yang, Zhihong Deng, Animesh Garg, Peng Liu, Zhaoran Wang: Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning. CoRR abs/2202.11566 (2022)
- [i107] Boxiang Lyu, Qinglin Meng, Shuang Qiu, Zhaoran Wang, Zhuoran Yang, Michael I. Jordan: Learning Dynamic Mechanisms in Unknown Environments: A Reinforcement Learning Approach. CoRR abs/2202.12797 (2022)
- [i106] Yifei Min, Tianhao Wang, Ruitu Xu, Zhaoran Wang, Michael I. Jordan, Zhuoran Yang: Learn to Match with No Regret: Reinforcement Learning in Markov Matching Markets. CoRR abs/2203.03684 (2022)
- [i105] Qi Cai, Zhuoran Yang, Zhaoran Wang: Sample-Efficient Reinforcement Learning for POMDPs with Linear Function Approximations. CoRR abs/2204.09787 (2022)
- [i104] Boxiang Lyu, Zhaoran Wang, Mladen Kolar, Zhuoran Yang: Pessimism meets VCG: Learning Dynamic Mechanism Design via Offline Reinforcement Learning. CoRR abs/2205.02450 (2022)
- [i103] Xiaoyu Chen, Han Zhong, Zhuoran Yang, Zhaoran Wang, Liwei Wang: Human-in-the-loop: Provably Efficient Preference-based Reinforcement Learning with General Function Approximation. CoRR abs/2205.11140 (2022)
- [i102] Lingxiao Wang, Qi Cai, Zhuoran Yang, Zhaoran Wang: Embed to Control Partially Observed Systems: Representation Learning with Provable Sample Efficiency. CoRR abs/2205.13476 (2022)
- [i101] Miao Lu, Yifei Min, Zhaoran Wang, Zhuoran Yang: Pessimism in the Face of Confounders: Provably Efficient Offline Reinforcement Learning in Partially Observable Markov Decision Processes. CoRR abs/2205.13589 (2022)
- [i100] Rui Yang, Chenjia Bai, Xiaoteng Ma, Zhaoran Wang, Chongjie Zhang, Lei Han: RORL: Robust Offline Reinforcement Learning via Conservative Smoothing. CoRR abs/2206.02829 (2022)
- [i99] Doudou Zhou, Yufeng Zhang, Aaron Sonabend W., Zhaoran Wang, Junwei Lu, Tianxi Cai: Federated Offline Reinforcement Learning. CoRR abs/2206.05581 (2022)
- [i98] Shuang Qiu, Xiaohan Wei, Jieping Ye, Zhaoran Wang, Zhuoran Yang: Provably Efficient Fictitious Play Policy Optimization for Zero-Sum Markov Games with Structured Transitions. CoRR abs/2207.12463 (2022)
- [i97] Shuang Qiu, Lingxiao Wang, Chenjia Bai, Zhuoran Yang, Zhaoran Wang: Contrastive UCB: Provably Efficient Contrastive Self-Supervised Learning in Online Reinforcement Learning. CoRR abs/2207.14800 (2022)
- [i96] Jiayang Li, Jing Yu, Qianni Wang, Boyi Liu, Zhaoran Wang, Yu Marco Nie: Differentiable Bilevel Programming for Stackelberg Congestion Games. CoRR abs/2209.07618 (2022)
- [i95] Zuyue Fu, Zhengling Qi, Zhaoran Wang, Zhuoran Yang, Yanxun Xu, Michael R. Kosorok: Offline Reinforcement Learning with Instrumental Variables in Confounded Markov Decision Processes. CoRR abs/2209.08666 (2022)
- [i94] Fengzhuo Zhang, Boyi Liu, Kaixin Wang, Vincent Y. F. Tan, Zhuoran Yang, Zhaoran Wang: Relational Reasoning via Set Transformers: Provable Efficiency and Applications to MARL. CoRR abs/2209.09845 (2022)
- [i93] Yixuan Wang, Simon Sinong Zhan, Ruochen Jiao, Zhilu Wang, Wanxin Jin, Zhuoran Yang, Zhaoran Wang, Chao Huang, Qi Zhu: Enforcing Hard Constraints with Soft Barriers: Safe Reinforcement Learning in Unknown Stochastic Environments. CoRR abs/2209.15090 (2022)
- [i92] Rui Ai, Boxiang Lyu, Zhaoran Wang, Zhuoran Yang, Michael I. Jordan: A Reinforcement Learning Approach in Multi-Phase Second-Price Auction Design. CoRR abs/2210.10278 (2022)
- [i91] Han Zhong, Wei Xiong, Sirui Zheng, Liwei Wang, Zhaoran Wang, Zhuoran Yang, Tong Zhang: GEC: A Unified Framework for Interactive Decision Making in MDP, POMDP, and Beyond. CoRR abs/2211.01962 (2022)
- [i90] Tongzheng Ren, Chenjun Xiao, Tianjun Zhang, Na Li, Zhaoran Wang, Sujay Sanghavi, Dale Schuurmans, Bo Dai: Latent Variable Representation for Reinforcement Learning. CoRR abs/2212.08765 (2022)
- [i89] Ying Jin, Zhimei Ren, Zhuoran Yang, Zhaoran Wang: Policy learning "without" overlap: Pessimism and generalized empirical Bernstein's inequality. CoRR abs/2212.09900 (2022)
- [i88] Zuyue Fu, Zhengling Qi, Zhuoran Yang, Zhaoran Wang, Lan Wang: Offline Reinforcement Learning for Human-Guided Human-Machine Interaction with Private Information. CoRR abs/2212.12167 (2022)
- [i87] Riashat Islam, Samarth Sinha, Homanga Bharadhwaj, Samin Yeasar Arnob, Zhuoran Yang, Animesh Garg, Zhaoran Wang, Lihong Li, Doina Precup: Offline Policy Optimization in RL with Variance Regularizaton. CoRR abs/2212.14405 (2022)
- [i86] Yufeng Zhang, Boyi Liu, Qi Cai, Lingxiao Wang, Zhaoran Wang: An Analysis of Attention via the Lens of Exchangeability and Latent Variable Models. CoRR abs/2212.14852 (2022)

2021
- [j8] Shuang Qiu, Zhuoran Yang, Jieping Ye, Zhaoran Wang: On Finite-Time Convergence of Actor-Critic Algorithm. IEEE J. Sel. Areas Inf. Theory 2(2): 652-664 (2021)
- [j7] Lewis Liu, Songtao Lu, Tuo Zhao, Zhaoran Wang: Spectrum Truncation Power Iteration for Agnostic Matrix Phase Retrieval. IEEE Trans. Signal Process. 69: 3991-4006 (2021)
- [c86] Jiaheng Wei, Zuyue Fu, Yang Liu, Xingyu Li, Zhuoran Yang, Zhaoran Wang: Sample Elicitation. AISTATS 2021: 2692-2700
- [c85] Yufeng Zhang, Zhuoran Yang, Zhaoran Wang: Provably Efficient Actor-Critic for Risk-Sensitive and Robust Adversarial RL: A Linear-Quadratic Case. AISTATS 2021: 2764-2772
- [c84] Dongsheng Ding, Xiaohan Wei, Zhuoran Yang, Zhaoran Wang, Mihailo R. Jovanovic: Provably Efficient Safe Exploration via Primal-Dual Policy Optimization. AISTATS 2021: 3304-3312
- [c83] Yixuan Wang, Chao Huang, Zhilu Wang, Shichao Xu, Zhaoran Wang, Qi Zhu: Cocktail: Learn a Better Neural Network Controller from Multiple Experts via Adaptive Mixing and Robust Distillation. DAC 2021: 397-402
- [c82] Zechu Li, Xiao-Yang Liu, Jiahao Zheng, Zhaoran Wang, Anwar Walid, Jian Guo: FinRL-podracer: high performance and scalable deep reinforcement learning for quantitative finance. ICAIF 2021: 48:1-48:9
- [c81] Zuyue Fu, Zhuoran Yang, Zhaoran Wang: Single-Timescale Actor-Critic Provably Finds Globally Optimal Policy. ICLR 2021
- [c80] Chenjia Bai, Lingxiao Wang, Lei Han, Jianye Hao, Animesh Garg, Peng Liu, Zhaoran Wang: Principled Exploration via Optimistic Bootstrapping and Backward Induction. ICML 2021: 577-587
- [c79] Yingjie Fei, Zhuoran Yang, Zhaoran Wang: Risk-Sensitive Reinforcement Learning with Function Approximation: A Debiasing Approach. ICML 2021: 3198-3207
- [c78] Hongyi Guo, Zuyue Fu, Zhuoran Yang, Zhaoran Wang: Decentralized Single-Timescale Actor-Critic on Zero-Sum Two-Player Stochastic Games. ICML 2021: 3899-3909
- [c77] Haque Ishfaq, Qiwen Cui, Viet Nguyen, Alex Ayoub, Zhuoran Yang, Zhaoran Wang, Doina Precup, Lin Yang: Randomized Exploration in Reinforcement Learning with General Value Function Approximation. ICML 2021: 4607-4616
- [c76] Ying Jin, Zhuoran Yang, Zhaoran Wang: Is Pessimism Provably Efficient for Offline RL? ICML 2021: 5084-5096
- [c75] Lewis Liu, Yufeng Zhang, Zhuoran Yang, Reza Babanezhad, Zhaoran Wang: Infinite-Dimensional Optimization for Zero-Sum Games via Variational Transport. ICML 2021: 7033-7044
- [c74] Shuang Qiu, Xiaohan Wei, Jieping Ye, Zhaoran Wang, Zhuoran Yang: Provably Efficient Fictitious Play Policy Optimization for Zero-Sum Markov Games with Structured Transitions. ICML 2021: 8715-8725
- [c73] Shuang Qiu, Jieping Ye, Zhaoran Wang, Zhuoran Yang: On Reward-Free RL with Kernel and Neural Function Approximations: Single-Agent MDP and Markov Game. ICML 2021: 8737-8747
- [c72] Weichen Wang, Jiequn Han, Zhuoran Yang, Zhaoran Wang: Global Convergence of Policy Gradient for Linear-Quadratic Mean-Field Control/Game in Continuous Time. ICML 2021: 10772-10782
- [c71] Qiaomin Xie, Zhuoran Yang, Zhaoran Wang, Andreea Minca: Learning While Playing in Mean-Field Games: Convergence and Optimality. ICML 2021: 11436-11447
- [c70] Tengyu Xu, Zhuoran Yang, Zhaoran Wang, Yingbin Liang: Doubly Robust Off-Policy Actor-Critic: Convergence and Optimality. ICML 2021: 11581-11591
- [c69] Yaqiong Ma, Xiangyu Bai, Zhaoran Wang: Trajectory Privacy Protection Method based on Shadow vehicles. ISPA/BDCloud/SocialCom/SustainCom 2021: 668-673
- [c68] Jingwei Zhang, Zhuoran Yang, Zhengyuan Zhou, Zhaoran Wang: Provably Sample Efficient Reinforcement Learning in Competitive Linear Quadratic Systems. L4DC 2021: 597-598
- [c67] Boyi Liu, Qi Cai, Zhuoran Yang, Zhaoran Wang: BooVI: Provably Efficient Bootstrapped Value Iteration. NeurIPS 2021: 7041-7053
- [c66] Yufeng Zhang, Siyu Chen, Zhuoran Yang, Michael I. Jordan, Zhaoran Wang: Wasserstein Flow Meets Replicator Dynamics: A Mean-Field Analysis of Representation Learning in Actor-Critic. NeurIPS 2021: 15993-16006
- [c65] Chenjia Bai, Lingxiao Wang, Lei Han, Animesh Garg, Jianye Hao, Peng Liu, Zhaoran Wang: Dynamic Bottleneck for Robust Self-Supervised Exploration. NeurIPS 2021: 17007-17020
- [c64] Minshuo Chen, Yan Li, Ethan Wang, Zhuoran Yang, Zhaoran Wang, Tuo Zhao: Pessimism Meets Invariance: Provably Efficient Offline Mean-Field Multi-Agent RL. NeurIPS 2021: 17913-17926
- [c63] Yingjie Fei, Zhuoran Yang, Yudong Chen, Zhaoran Wang: Exponential Bellman Equation and Improved Regret Bounds for Risk-Sensitive Reinforcement Learning. NeurIPS 2021: 20436-20446
- [c62] Lingxiao Wang, Zhuoran Yang, Zhaoran Wang: Provably Efficient Causal Reinforcement Learning with Confounded Observational Data. NeurIPS 2021: 21164-21175
- [c61] Runzhe Wu, Yufeng Zhang, Zhuoran Yang, Zhaoran Wang: Offline Constrained Multi-Objective Reinforcement Learning via Pessimistic Dual Value Iteration. NeurIPS 2021: 25439-25451
- [c60] Prashant Khanduri, Siliang Zeng, Mingyi Hong, Hoi-To Wai, Zhaoran Wang, Zhuoran Yang: A Near-Optimal Algorithm for Stochastic Bilevel Optimization via Double-Momentum. NeurIPS 2021: 30271-30283
- [i85] Prashant Khanduri, Siliang Zeng, Mingyi Hong, Hoi-To Wai, Zhaoran Wang, Zhuoran Yang: A Momentum-Assisted Single-Timescale Stochastic Approximation Algorithm for Bilevel Optimization. CoRR abs/2102.07367 (2021)
- [i84] Luofeng Liao, Zuyue Fu, Zhuoran Yang, Mladen Kolar, Zhaoran Wang: Instrumental Variable Value Iteration for Causal Offline Reinforcement Learning. CoRR abs/2102.09907 (2021)
- [i83] Tengyu Xu, Zhuoran Yang, Zhaoran Wang, Yingbin Liang: Doubly Robust Off-Policy Actor-Critic: Convergence and Optimality. CoRR abs/2102.11866 (2021)
- [i82] Yixuan Wang, Chao Huang, Zhilu Wang, Shichao Xu, Zhaoran Wang, Qi Zhu: Cocktail: Learn a Better Neural Network Controller from Multiple Experts via Adaptive Mixing and Robust Distillation. CoRR abs/2103.05046 (2021)
- [i81] Chenjia Bai, Lingxiao Wang, Lei Han, Jianye Hao, Animesh Garg, Peng Liu, Zhaoran Wang: Principled Exploration via Optimistic Bootstrapping and Backward Induction. CoRR abs/2105.06022 (2021)
- [i80] Yan Li, Lingxiao Wang, Jiachen Yang, Ethan Wang, Zhaoran Wang, Tuo Zhao, Hongyuan Zha: Permutation Invariant Policy Optimization for Mean-Field Multi-Agent Reinforcement Learning: A Principled Approach. CoRR abs/2105.08268 (2021)
- [i79] Yixuan Wang, Chao Huang, Zhaoran Wang, Zhilu Wang, Qi Zhu: Verification in the Loop: Correct-by-Construction Control Learning with Reach-avoid Guarantees. CoRR abs/2106.03245 (2021)
- [i78] Haque Ishfaq, Qiwen Cui, Viet Nguyen, Alex Ayoub, Zhuoran Yang, Zhaoran Wang, Doina Precup, Lin F. Yang: Randomized Exploration for Reinforcement Learning with General Value Function Approximation. CoRR abs/2106.07841 (2021)
- [i77] Zehao Dou, Zhuoran Yang, Zhaoran Wang, Simon S. Du: Gap-Dependent Bounds for Two-Player Markov Games. CoRR abs/2107.00685 (2021)
- [i76] Tengyu Xu, Zhuoran Yang, Zhaoran Wang, Yingbin Liang: A Unified Off-Policy Evaluation Approach for General Value Function. CoRR abs/2107.02711 (2021)
- [i75] Baihe Huang, Jason D. Lee, Zhaoran Wang, Zhuoran Yang: Towards General Function Approximation in Zero-Sum Markov Games. CoRR abs/2107.14702 (2021)
- [i74] Pratik Ramprasad, Yuantong Li, Zhuoran Yang, Zhaoran Wang, Will Wei Sun, Guang Cheng: Online Bootstrap Inference For Policy Evaluation in Reinforcement Learning. CoRR abs/2108.03706 (2021)
- [i73] Zhihan Liu, Yufeng Zhang, Zuyue Fu, Zhuoran Yang, Zhaoran Wang: Provably Efficient Generative Adversarial Imitation Learning for Online and Offline Setting with Linear Function Approximation. CoRR abs/2108.08765 (2021)
- [i72] Boyi Liu, Jiayang Li, Zhuoran Yang, Hoi-To Wai, Mingyi Hong, Yu Marco Nie, Zhaoran Wang: Inducing Equilibria via Incentives: Simultaneous Design-and-Play Finds Global Optima. CoRR abs/2110.01212 (2021)
- [i71] Han Zhong, Zhuoran Yang, Zhaoran Wang, Csaba Szepesvári: Optimistic Policy Optimization is Provably Efficient in Non-stationary MDPs. CoRR abs/2110.08984 (2021)
- [i70] Shuang Qiu, Jieping Ye, Zhaoran Wang, Zhuoran Yang: On Reward-Free RL with Kernel and Neural Function Approximations: Single-Agent MDP and Markov Game. CoRR abs/2110.09771 (2021)
- [i69] Chenjia Bai, Lingxiao Wang, Lei Han, Animesh Garg, Jianye Hao, Peng Liu, Zhaoran Wang: Dynamic Bottleneck for Robust Self-Supervised Exploration. CoRR abs/2110.10735 (2021)
- [i68] Zhihong Deng, Zuyue Fu, Lingxiao Wang, Zhuoran Yang, Chenjia Bai, Zhaoran Wang, Jing Jiang: SCORE: Spurious COrrelation REduction for Offline Reinforcement Learning. CoRR abs/2110.12468 (2021)
- [i67] Yingjie Fei, Zhuoran Yang, Yudong Chen, Zhaoran Wang: Exponential Bellman Equation and Improved Regret Bounds for Risk-Sensitive Reinforcement Learning. CoRR abs/2111.03947 (2021)
- [i66] Zechu Li, Xiao-Yang Liu, Jiahao Zheng, Zhaoran Wang, Anwar Walid, Jian Guo: FinRL-Podracer: High Performance and Scalable Deep Reinforcement Learning for Quantitative Finance. CoRR abs/2111.05188 (2021)
- [i65] Xiao-Yang Liu, Zechu Li, Zhuoran Yang, Jiahao Zheng, Zhaoran Wang, Anwar Walid, Jian Guo, Michael I. Jordan: ElegantRL-Podracer: Scalable and Elastic Library for Cloud-Native Deep Reinforcement Learning. CoRR abs/2112.05923 (2021)
- [i64] Xiao-Yang Liu, Jingyang Rui, Jiechao Gao, Liuqing Yang, Hongyang Yang, Zhaoran Wang, Christina Dan Wang, Jian Guo: FinRL-Meta: A Universe of Near-Real Market Environments for Data-Driven Deep Reinforcement Learning in Quantitative Finance. CoRR abs/2112.06753 (2021)
- [i63] Han Zhong, Zhuoran Yang, Zhaoran Wang, Michael I. Jordan: Can Reinforcement Learning Find Stackelberg-Nash Equilibria in General-Sum Markov Games with Myopic Followers? CoRR abs/2112.13521 (2021)
- [i62] Yufeng Zhang, Siyu Chen, Zhuoran Yang, Michael I. Jordan, Zhaoran Wang: Wasserstein Flow Meets Replicator Dynamics: A Mean-Field Analysis of Representation Learning in Actor-Critic. CoRR abs/2112.13530 (2021)
- [i61] Gene Li, Junbo Li, Nathan Srebro, Zhaoran Wang, Zhuoran Yang: Exponential Family Model-Based Reinforcement Learning via Score Matching. CoRR abs/2112.14195 (2021)

2020
- [j6] Matey Neykov, Zhaoran Wang, Han Liu: Agnostic Estimation for Phase Retrieval. J. Mach. Learn. Res. 21: 121:1-121:39 (2020)
- [j5] Xiang Lyu, Will Wei Sun, Zhaoran Wang, Han Liu, Jian Yang, Guang Cheng: Tensor Graphical Model: Non-Convex Optimization and Statistical Inference. IEEE Trans. Pattern Anal. Mach. Intell. 42(8): 2024-2037 (2020)
- [c59] Chi Jin, Zhuoran Yang, Zhaoran Wang, Michael I. Jordan: Provably efficient reinforcement learning with linear function approximation. COLT 2020: 2137-2143
- [c58] Qiaomin Xie, Yudong Chen, Zhaoran Wang, Zhuoran Yang: Learning Zero-Sum Simultaneous-Move Markov Games Using Function Approximation and Correlated Equilibrium. COLT 2020: 3674-3682
- [c57] Minshuo Chen, Yizhou Wang, Tianyi Liu, Zhuoran Yang, Xingguo Li, Zhaoran Wang, Tuo Zhao: On Computation and Generalization of Generative Adversarial Imitation Learning. ICLR 2020
- [c56] Zuyue Fu, Zhuoran Yang, Yongxin Chen, Zhaoran Wang: Actor-Critic Provably Finds Nash Equilibria of Linear-Quadratic Mean-Field Games. ICLR 2020
- [c55] Lingxiao Wang, Qi Cai, Zhuoran Yang, Zhaoran Wang: Neural Policy Gradient Methods: Global Optimality and Rates of Convergence. ICLR 2020
- [c54] Qi Cai, Zhuoran Yang, Chi Jin, Zhaoran Wang: Provably Efficient Exploration in Policy Optimization. ICML 2020: 1283-1294
- [c53] Ying Jin, Zhaoran Wang, Junwei Lu: Computational and Statistical Tradeoffs in Inferring Combinatorial Structures of Ising Model. ICML 2020: 4901-4910
- [c52] Sen Na, Yuwei Luo, Zhuoran Yang, Zhaoran Wang, Mladen Kolar: Semiparametric Nonlinear Bipartite Graph Representation Learning with Provable Guarantees. ICML 2020: 7141-7152
- [c51] Qianli Shen, Yan Li, Haoming Jiang, Zhaoran Wang, Tuo Zhao: Deep Reinforcement Learning with Robust and Smooth Policy. ICML 2020: 8707-8718
- [c50] Lingxiao Wang, Qi Cai, Zhuoran Yang, Zhaoran Wang: On the Global Optimality of Model-Agnostic Meta-Learning. ICML 2020: 9837-9846
- [c49] Lingxiao Wang, Zhuoran Yang, Zhaoran Wang: Breaking the Curse of Many Agents: Provable Mean Embedding Q-Iteration for Mean-Field Reinforcement Learning. ICML 2020: 10092-10103
- [c48] Yufeng Zhang, Qi Cai, Zhuoran Yang, Zhaoran Wang: Generative Adversarial Imitation Learning with Neural Network Parameterization: Global Optimality and Convergence Rate. ICML 2020: 11044-11054
- [c47] Jianqing Fan, Zhaoran Wang, Yuchen Xie, Zhuoran Yang: A Theoretical Analysis of Deep Q-Learning. L4DC 2020: 486-489
- [c46] Yingjie Fei, Zhuoran Yang, Yudong Chen, Zhaoran Wang, Qiaomin Xie: Risk-Sensitive Reinforcement Learning: Near-Optimal Risk-Sample Tradeoff in Regret. NeurIPS 2020
- [c45] Yingjie Fei, Zhuoran Yang, Zhaoran Wang, Qiaomin Xie: Dynamic Regret of Policy Optimization in Non-Stationary Environments. NeurIPS 2020
- [c44] Wanxin Jin, Zhaoran Wang, Zhuoran Yang, Shaoshuai Mou: Pontryagin Differentiable Programming: An End-to-End Learning and Control Framework. NeurIPS 2020
- [c43] Jiayang Li, Jing Yu, Yu Marco Nie, Zhaoran Wang: End-to-End Learning and Intervention in Games. NeurIPS 2020
- [c42] Luofeng Liao, You-Lin Chen, Zhuoran Yang, Bo Dai, Mladen Kolar, Zhaoran Wang: Provably Efficient Neural Estimation of Structural Equation Models: An Adversarial Approach. NeurIPS 2020
- [c41] Shuang Qiu, Xiaohan Wei, Zhuoran Yang, Jieping Ye, Zhaoran Wang: Upper Confidence Primal-Dual Reinforcement Learning for CMDP with Adversarial Loss. NeurIPS 2020
- [c40]