Jiafan He
2020 – today
- 2024
- [j3] Jie Wang, Jie Yang, Jiafan He, Dongliang Peng: Multi-Augmentation-Based Contrastive Learning for Semi-Supervised Learning. Algorithms 17(3): 91 (2024)
- [c28] Qiwei Di, Heyang Zhao, Jiafan He, Quanquan Gu: Pessimistic Nonlinear Least-Squares Value Iteration for Offline Reinforcement Learning. ICLR 2024
- [c27] Kaixuan Ji, Qingyue Zhao, Jiafan He, Weitong Zhang, Quanquan Gu: Horizon-free Reinforcement Learning in Adversarial Linear Mixture MDPs. ICLR 2024
- [c26] Chenlu Ye, Jiafan He, Quanquan Gu, Tong Zhang: Towards Robust Model-Based Reinforcement Learning Against Adversarial Corruption. ICML 2024
- [c25] Zhihao Zhu, Jiafan He, Luyang Hou, Lianming Xu, Wendi Zhu, Li Wang: Emergency Localization for Mobile Ground Users: An Adaptive UAV Trajectory Planning Method. INFOCOM (Workshops) 2024: 1-6
- [i25] Zhihao Zhu, Jiafan He, Luyang Hou, Lianming Xu, Wendi Zhu, Li Wang: Emergency Localization for Mobile Ground Users: An Adaptive UAV Trajectory Planning Method. CoRR abs/2401.07256 (2024)
- [i24] Chenlu Ye, Jiafan He, Quanquan Gu, Tong Zhang: Towards Robust Model-Based Reinforcement Learning Against Adversarial Corruption. CoRR abs/2402.08991 (2024)
- [i23] Qiwei Di, Jiafan He, Dongruo Zhou, Quanquan Gu: Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic Shortest Path. CoRR abs/2402.08998 (2024)
- [i22] Kaixuan Ji, Jiafan He, Quanquan Gu: Reinforcement Learning from Human Feedback with Active Queries. CoRR abs/2402.09401 (2024)
- [i21] Weitong Zhang, Zhiyuan Fan, Jiafan He, Quanquan Gu: Settling Constant Regrets in Linear Markov Decision Processes. CoRR abs/2404.10745 (2024)
- [i20] Qiwei Di, Jiafan He, Quanquan Gu: Nearly Optimal Algorithms for Contextual Dueling Bandits from Adversarial Feedback. CoRR abs/2404.10776 (2024)
- 2023
- [j2] Jiafan He, Aiguo Fei, Qingwei Li, Feng Fang: Attitude Synchronization of Heterogenous Flexible Spacecrafts by Measurement-Based Feedback With Disturbance Suppression. IEEE Access 11: 84453-84467 (2023)
- [c24] Heyang Zhao, Jiafan He, Dongruo Zhou, Tong Zhang, Quanquan Gu: Variance-Dependent Regret Bounds for Linear Bandits and Reinforcement Learning: Adaptivity and Computational Efficiency. COLT 2023: 4977-5020
- [c23] Qiwei Di, Jiafan He, Dongruo Zhou, Quanquan Gu: Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic Shortest Path. ICML 2023: 7837-7864
- [c22] Jiafan He, Heyang Zhao, Dongruo Zhou, Quanquan Gu: Nearly Minimax Optimal Reinforcement Learning for Linear Markov Decision Processes. ICML 2023: 12790-12822
- [c21] Yifei Min, Jiafan He, Tianhao Wang, Quanquan Gu: Cooperative Multi-Agent Reinforcement Learning: Asynchronous Communication and Linear Function Approximation. ICML 2023: 24785-24811
- [c20] Weitong Zhang, Jiafan He, Zhiyuan Fan, Quanquan Gu: On the Interplay Between Misspecification and Sub-optimality Gap in Linear Contextual Bandits. ICML 2023: 41111-41132
- [c19] Heyang Zhao, Dongruo Zhou, Jiafan He, Quanquan Gu: Optimal Online Generalized Linear Regression with Stochastic Noise and Its Application to Heteroscedastic Bandits. ICML 2023: 42259-42279
- [c18] Yue Wu, Jiafan He, Quanquan Gu: Uniform-PAC Guarantees for Model-Based RL with Bounded Eluder Dimension. UAI 2023: 2304-2313
- [c17] Weitong Zhang, Jiafan He, Dongruo Zhou, Amy Zhang, Quanquan Gu: Provably efficient representation selection in Low-rank Markov Decision Processes: from online to offline RL. UAI 2023: 2488-2497
- [i19] Heyang Zhao, Jiafan He, Dongruo Zhou, Tong Zhang, Quanquan Gu: Variance-Dependent Regret Bounds for Linear Bandits and Reinforcement Learning: Adaptivity and Computational Efficiency. CoRR abs/2302.10371 (2023)
- [i18] Weitong Zhang, Jiafan He, Zhiyuan Fan, Quanquan Gu: On the Interplay Between Misspecification and Sub-optimality Gap in Linear Contextual Bandits. CoRR abs/2303.09390 (2023)
- [i17] Yifei Min, Jiafan He, Tianhao Wang, Quanquan Gu: Cooperative Multi-Agent Reinforcement Learning: Asynchronous Communication and Linear Function Approximation. CoRR abs/2305.06446 (2023)
- [i16] Yue Wu, Jiafan He, Quanquan Gu: Uniform-PAC Guarantees for Model-Based RL with Bounded Eluder Dimension. CoRR abs/2305.08350 (2023)
- [i15] Kaixuan Ji, Qingyue Zhao, Jiafan He, Weitong Zhang, Quanquan Gu: Horizon-free Reinforcement Learning in Adversarial Linear Mixture MDPs. CoRR abs/2305.08359 (2023)
- [i14] Qiwei Di, Heyang Zhao, Jiafan He, Quanquan Gu: Pessimistic Nonlinear Least-Squares Value Iteration for Offline Reinforcement Learning. CoRR abs/2310.01380 (2023)
- [i13] Heyang Zhao, Jiafan He, Quanquan Gu: A Nearly Optimal and Low-Switching Algorithm for Reinforcement Learning with General Function Approximation. CoRR abs/2311.15238 (2023)
- 2022
- [j1] Jiaqi Wang, Wei Xing Zheng, Andong Sheng, Jiafan He: Cooperative Global Robust Practical Output Regulation of Nonlinear Lower Triangular Multiagent Systems via Event-Triggered Control. IEEE Trans. Cybern. 52(7): 5708-5719 (2022)
- [c16] Chonghua Liao, Jiafan He, Quanquan Gu: Locally Differentially Private Reinforcement Learning for Linear Mixture Markov Decision Processes. ACML 2022: 627-642
- [c15] Jiafan He, Dongruo Zhou, Quanquan Gu: Near-optimal Policy Optimization Algorithms for Learning Adversarial Linear Mixture MDPs. AISTATS 2022: 4259-4280
- [c14] Yiming Mao, Zhijie Xia, Qingwei Li, Jiafan He, Aiguo Fei: Accurate Decision-Making Method for Air Combat Pilots Based on Data-Driven. DMBD (2) 2022: 439-448
- [c13] Yuanzhou Chen, Jiafan He, Quanquan Gu: On the Sample Complexity of Learning Infinite-horizon Discounted Linear Kernel MDPs. ICML 2022: 3149-3183
- [c12] Yifei Min, Jiafan He, Tianhao Wang, Quanquan Gu: Learning Stochastic Shortest Path with Linear Function Approximation. ICML 2022: 15584-15629
- [c11] Jiafan He, Tianhao Wang, Yifei Min, Quanquan Gu: A Simple and Provably Efficient Algorithm for Asynchronous Federated Contextual Linear Bandits. NeurIPS 2022
- [c10] Jiafan He, Dongruo Zhou, Tong Zhang, Quanquan Gu: Nearly Optimal Algorithms for Linear Contextual Bandits with Adversarial Corruptions. NeurIPS 2022
- [i12] Heyang Zhao, Dongruo Zhou, Jiafan He, Quanquan Gu: Bandit Learning with General Function Classes: Heteroscedastic Noise and Variance-dependent Regret Bounds. CoRR abs/2202.13603 (2022)
- [i11] Jiafan He, Dongruo Zhou, Tong Zhang, Quanquan Gu: Nearly Optimal Algorithms for Linear Contextual Bandits with Adversarial Corruptions. CoRR abs/2205.06811 (2022)
- [i10] Jiafan He, Tianhao Wang, Yifei Min, Quanquan Gu: A Simple and Provably Efficient Algorithm for Asynchronous Federated Contextual Linear Bandits. CoRR abs/2207.03106 (2022)
- [i9] Jiafan He, Heyang Zhao, Dongruo Zhou, Quanquan Gu: Nearly Minimax Optimal Reinforcement Learning for Linear Markov Decision Processes. CoRR abs/2212.06132 (2022)
- 2021
- [c9] Jiafan He, Dongruo Zhou, Quanquan Gu: Logarithmic Regret for Reinforcement Learning with Linear Function Approximation. ICML 2021: 4171-4180
- [c8] Dongruo Zhou, Jiafan He, Quanquan Gu: Provably Efficient Reinforcement Learning for Discounted MDPs with Feature Mapping. ICML 2021: 12793-12802
- [c7] Jiafan He, Dongruo Zhou, Quanquan Gu: Uniform-PAC Bounds for Reinforcement Learning with Linear Function Approximation. NeurIPS 2021: 14188-14199
- [c6] Jiafan He, Dongruo Zhou, Quanquan Gu: Nearly Minimax Optimal Reinforcement Learning for Discounted MDPs. NeurIPS 2021: 22288-22300
- [i8] Jiafan He, Dongruo Zhou, Quanquan Gu: Nearly Optimal Regret for Learning Adversarial MDPs with Linear Function Approximation. CoRR abs/2102.08940 (2021)
- [i7] Jiafan He, Dongruo Zhou, Quanquan Gu: Uniform-PAC Bounds for Reinforcement Learning with Linear Function Approximation. CoRR abs/2106.11612 (2021)
- [i6] Weitong Zhang, Jiafan He, Dongruo Zhou, Amy Zhang, Quanquan Gu: Provably Efficient Representation Learning in Low-rank Markov Decision Processes. CoRR abs/2106.11935 (2021)
- [i5] Chonghua Liao, Jiafan He, Quanquan Gu: Locally Differentially Private Reinforcement Learning for Linear Mixture Markov Decision Processes. CoRR abs/2110.10133 (2021)
- [i4] Yifei Min, Jiafan He, Tianhao Wang, Quanquan Gu: Learning Stochastic Shortest Path with Linear Function Approximation. CoRR abs/2110.12727 (2021)
- 2020
- [i3] Dongruo Zhou, Jiafan He, Quanquan Gu: Provably Efficient Reinforcement Learning for Discounted MDPs with Feature Mapping. CoRR abs/2006.13165 (2020)
- [i2] Jiafan He, Dongruo Zhou, Quanquan Gu: Minimax Optimal Reinforcement Learning for Discounted MDPs. CoRR abs/2010.00587 (2020)
- [i1] Jiafan He, Dongruo Zhou, Quanquan Gu: Logarithmic Regret for Reinforcement Learning with Linear Function Approximation. CoRR abs/2011.11566 (2020)
2010 – 2019
- 2019
- [c5] Pengpeng Ye, Jiafan He, Yinya Li, Guoqing Qi, Andong Sheng: Rectangular Impulsive Consensus of Multi-agent Systems with Heterogeneous Control Widths. ASCC 2019: 913-918
- [c4] Jiafan He, Youfeng Su, Dabo Xu, Andong Sheng: Event-Triggered Attitude Regulation of Rigid Spacecraft with Uncertain Inertia Matrix. ASCC 2019: 1661-1665
- [c3] Jiafan He, Andong Sheng, Dabo Xu: Robust Attitude Regulation of Uncertain Spacecraft with Flexible Appendages. ICNSC 2019: 442-447
- [c2] Jiafan He, Ariel D. Procaccia, Alexandros Psomas, David Zeng: Achieving a Fairer Future by Changing the Past. IJCAI 2019: 343-349
- 2017
- [c1] Dabo Xu, Jiafan He, Andong Sheng, Zhiyong Chen, Dan Wang: Robust attitude tracking control of a rigid spacecraft based on nonlinearly controlled quaternions. ASCC 2017: 853-858
last updated on 2024-10-31 21:12 CET by the dblp team
all metadata released as open data under CC0 1.0 license