Ayush Sekhari
2020 – today
- 2024
  - [c24] Zeyu Jia, Alexander Rakhlin, Ayush Sekhari, Chen-Yu Wei: Offline Reinforcement Learning: Role of State Aggregation and Trajectory Data. COLT 2024: 2644-2719
  - [c23] Philip Amortila, Dylan J. Foster, Nan Jiang, Ayush Sekhari, Tengyang Xie: Harnessing Density Ratios for Online Reinforcement Learning. ICLR 2024
  - [c22] Yifei Zhou, Ayush Sekhari, Yuda Song, Wen Sun: Offline Data Enhanced On-Policy Policy Gradient with Provable Guarantees. ICLR 2024
  - [c21] Srinath Mahankali, Zhang-Wei Hong, Ayush Sekhari, Alexander Rakhlin, Pulkit Agrawal: Random Latent Exploration for Deep Reinforcement Learning. ICML 2024
  - [i28] Philip Amortila, Dylan J. Foster, Nan Jiang, Ayush Sekhari, Tengyang Xie: Harnessing Density Ratios for Online Reinforcement Learning. CoRR abs/2401.09681 (2024)
  - [i27] Zeyu Jia, Alexander Rakhlin, Ayush Sekhari, Chen-Yu Wei: Offline Reinforcement Learning: Role of State Aggregation and Trajectory Data. CoRR abs/2403.17091 (2024)
  - [i26] Runzhe Wu, Ayush Sekhari, Akshay Krishnamurthy, Wen Sun: Computationally Efficient RL under Linear Bellman Completeness for Deterministic Dynamics. CoRR abs/2406.11810 (2024)
  - [i25] Martin Pawelczyk, Jimmy Z. Di, Yiwei Lu, Gautam Kamath, Ayush Sekhari, Seth Neel: Machine Unlearning Fails to Remove Data Poisoning Attacks. CoRR abs/2406.17216 (2024)
  - [i24] August Y. Chen, Ayush Sekhari, Karthik Sridharan: Langevin Dynamics: A Unified Perspective on Optimization via Lyapunov Potentials. CoRR abs/2407.04264 (2024)
  - [i23] Srinath Mahankali, Zhang-Wei Hong, Ayush Sekhari, Alexander Rakhlin, Pulkit Agrawal: Random Latent Exploration for Deep Reinforcement Learning. CoRR abs/2407.13755 (2024)
- 2023
  - [c20] Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Ayush Sekhari, Chiyuan Zhang: Ticketed Learning-Unlearning Schemes. COLT 2023: 5110-5139
  - [c19] Yuda Song, Yifei Zhou, Ayush Sekhari, Drew Bagnell, Akshay Krishnamurthy, Wen Sun: Hybrid RL: Using both offline and online data can make RL efficient. ICLR 2023
  - [c18] Masatoshi Uehara, Ayush Sekhari, Jason D. Lee, Nathan Kallus, Wen Sun: Computationally Efficient PAC RL in POMDPs with Latent Determinism and Conditional Embeddings. ICML 2023: 34615-34641
  - [c17] Jimmy Z. Di, Jack Douglas, Jayadev Acharya, Gautam Kamath, Ayush Sekhari: Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks. NeurIPS 2023
  - [c16] Dylan J. Foster, Noah Golowich, Jian Qian, Alexander Rakhlin, Ayush Sekhari: Model-Free Reinforcement Learning with the Decision-Estimation Coefficient. NeurIPS 2023
  - [c15] Zeyu Jia, Gene Li, Alexander Rakhlin, Ayush Sekhari, Nati Srebro: When is Agnostic Reinforcement Learning Statistically Tractable? NeurIPS 2023
  - [c14] Ayush Sekhari, Karthik Sridharan, Wen Sun, Runzhe Wu: Contextual Bandits and Imitation Learning with Preference-Based Active Queries. NeurIPS 2023
  - [c13] Ayush Sekhari, Karthik Sridharan, Wen Sun, Runzhe Wu: Selective Sampling and Imitation Learning via Online Regression. NeurIPS 2023
  - [i22] Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Ayush Sekhari, Chiyuan Zhang: Ticketed Learning-Unlearning Schemes. CoRR abs/2306.15744 (2023)
  - [i21] Ayush Sekhari, Karthik Sridharan, Wen Sun, Runzhe Wu: Selective Sampling and Imitation Learning via Online Regression. CoRR abs/2307.04998 (2023)
  - [i20] Ayush Sekhari, Karthik Sridharan, Wen Sun, Runzhe Wu: Contextual Bandits and Imitation Learning via Preference-Based Active Queries. CoRR abs/2307.12926 (2023)
  - [i19] Zeyu Jia, Gene Li, Alexander Rakhlin, Ayush Sekhari, Nathan Srebro: When is Agnostic Reinforcement Learning Statistically Tractable? CoRR abs/2310.06113 (2023)
  - [i18] Yifei Zhou, Ayush Sekhari, Yuda Song, Wen Sun: Offline Data Enhanced On-Policy Policy Gradient with Provable Guarantees. CoRR abs/2311.08384 (2023)
- 2022
  - [c12] Christoph Dann, Yishay Mansour, Mehryar Mohri, Ayush Sekhari, Karthik Sridharan: Guarantees for Epsilon-Greedy Reinforcement Learning with Function Approximation. ICML 2022: 4666-4689
  - [c11] Dylan J. Foster, Alexander Rakhlin, Ayush Sekhari, Karthik Sridharan: On the Complexity of Adversarial Decision Making. NeurIPS 2022
  - [c10] Christopher De Sa, Satyen Kale, Jason D. Lee, Ayush Sekhari, Karthik Sridharan: From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent. NeurIPS 2022
  - [c9] Masatoshi Uehara, Ayush Sekhari, Jason D. Lee, Nathan Kallus, Wen Sun: Provably Efficient Reinforcement Learning in Partially Observable Dynamical Systems. NeurIPS 2022
  - [i17] Christoph Dann, Yishay Mansour, Mehryar Mohri, Ayush Sekhari, Karthik Sridharan: Guarantees for Epsilon-Greedy Reinforcement Learning with Function Approximation. CoRR abs/2206.09421 (2022)
  - [i16] Masatoshi Uehara, Ayush Sekhari, Jason D. Lee, Nathan Kallus, Wen Sun: Provably Efficient Reinforcement Learning in Partially Observable Dynamical Systems. CoRR abs/2206.12020 (2022)
  - [i15] Masatoshi Uehara, Ayush Sekhari, Jason D. Lee, Nathan Kallus, Wen Sun: Computationally Efficient PAC RL in POMDPs with Latent Determinism and Conditional Embeddings. CoRR abs/2206.12081 (2022)
  - [i14] Dylan J. Foster, Alexander Rakhlin, Ayush Sekhari, Karthik Sridharan: On the Complexity of Adversarial Decision Making. CoRR abs/2206.13063 (2022)
  - [i13] Satyen Kale, Jason D. Lee, Chris De Sa, Ayush Sekhari, Karthik Sridharan: From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent. CoRR abs/2210.06705 (2022)
  - [i12] Yuda Song, Yifei Zhou, Ayush Sekhari, J. Andrew Bagnell, Akshay Krishnamurthy, Wen Sun: Hybrid RL: Using Both Offline and Online Data Can Make RL Efficient. CoRR abs/2210.06718 (2022)
  - [i11] Dylan J. Foster, Noah Golowich, Jian Qian, Alexander Rakhlin, Ayush Sekhari: A Note on Model-Free Reinforcement Learning with the Decision-Estimation Coefficient. CoRR abs/2211.14250 (2022)
  - [i10] Jimmy Z. Di, Jack Douglas, Jayadev Acharya, Gautam Kamath, Ayush Sekhari: Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks. CoRR abs/2212.10717 (2022)
- 2021
  - [c8] Zhilei Wang, Pranjal Awasthi, Christoph Dann, Ayush Sekhari, Claudio Gentile: Neural Active Learning with Performance Guarantees. NeurIPS 2021: 7510-7521
  - [c7] Ayush Sekhari, Jayadev Acharya, Gautam Kamath, Ananda Theertha Suresh: Remember What You Want to Forget: Algorithms for Machine Unlearning. NeurIPS 2021: 18075-18086
  - [c6] Ayush Sekhari, Christoph Dann, Mehryar Mohri, Yishay Mansour, Karthik Sridharan: Agnostic Reinforcement Learning with Low-Rank MDPs and Rich Observations. NeurIPS 2021: 19033-19045
  - [c5] Ayush Sekhari, Karthik Sridharan, Satyen Kale: SGD: The Role of Implicit Regularization, Batch-size and Multiple-epochs. NeurIPS 2021: 27422-27433
  - [i9] Ayush Sekhari, Jayadev Acharya, Gautam Kamath, Ananda Theertha Suresh: Remember What You Want to Forget: Algorithms for Machine Unlearning. CoRR abs/2103.03279 (2021)
  - [i8] Pranjal Awasthi, Christoph Dann, Claudio Gentile, Ayush Sekhari, Zhilei Wang: Neural Active Learning with Performance Guarantees. CoRR abs/2106.03243 (2021)
  - [i7] Christoph Dann, Yishay Mansour, Mehryar Mohri, Ayush Sekhari, Karthik Sridharan: Agnostic Reinforcement Learning with Low-Rank MDPs and Rich Observations. CoRR abs/2106.11519 (2021)
  - [i6] Satyen Kale, Ayush Sekhari, Karthik Sridharan: SGD: The Role of Implicit Regularization, Batch-size and Multiple-epochs. CoRR abs/2107.05074 (2021)
- 2020
  - [c4] Yossi Arjevani, Yair Carmon, John C. Duchi, Dylan J. Foster, Ayush Sekhari, Karthik Sridharan: Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations. COLT 2020: 242-299
  - [c3] Christoph Dann, Yishay Mansour, Mehryar Mohri, Ayush Sekhari, Karthik Sridharan: Reinforcement Learning with Feedback Graphs. NeurIPS 2020
  - [i5] Christoph Dann, Yishay Mansour, Mehryar Mohri, Ayush Sekhari, Karthik Sridharan: Reinforcement Learning with Feedback Graphs. CoRR abs/2005.03789 (2020)
  - [i4] Yossi Arjevani, Yair Carmon, John C. Duchi, Dylan J. Foster, Ayush Sekhari, Karthik Sridharan: Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations. CoRR abs/2006.13476 (2020)
2010 – 2019
- 2019
  - [c2] Dylan J. Foster, Ayush Sekhari, Ohad Shamir, Nathan Srebro, Karthik Sridharan, Blake E. Woodworth: The Complexity of Making the Gradient Small in Stochastic Convex Optimization. COLT 2019: 1319-1345
  - [i3] Dylan J. Foster, Ayush Sekhari, Ohad Shamir, Nathan Srebro, Karthik Sridharan, Blake E. Woodworth: The Complexity of Making the Gradient Small in Stochastic Convex Optimization. CoRR abs/1902.04686 (2019)
- 2018
  - [c1] Dylan J. Foster, Ayush Sekhari, Karthik Sridharan: Uniform Convergence of Gradients for Non-Convex Learning and Optimization. NeurIPS 2018: 8759-8770
  - [i2] Dylan J. Foster, Ayush Sekhari, Karthik Sridharan: Uniform Convergence of Gradients for Non-Convex Learning and Optimization. CoRR abs/1810.11059 (2018)
- 2017
  - [i1] Marc Pickett, Ayush Sekhari, James Davidson: A Brief Study of In-Domain Transfer and Learning from Fewer Samples using A Few Simple Priors. CoRR abs/1707.03979 (2017)