Satyen Kale
2020 – today
- 2024
- [j14] Jianyu Wang, Rudrajit Das, Gauri Joshi, Satyen Kale, Zheng Xu, Tong Zhang: On the Unreasonable Effectiveness of Federated Averaging with Heterogeneous Data. Trans. Mach. Learn. Res. 2024 (2024)
- [c79] Pranjal Awasthi, Satyen Kale, Ankit Pensia: Semi-supervised Group DRO: Combating Sparsity with Unlabeled Data. ALT 2024: 125-160
- [c78] Naman Agarwal, Satyen Kale, Karan Singh, Abhradeep Guha Thakurta: Improved Differentially Private and Lazy Online Convex Optimization: Lower Regret without Smoothness Requirements. ICML 2024
- [i46] Bo Liu, Rachita Chhaparia, Arthur Douillard, Satyen Kale, Andrei A. Rusu, Jiajun Shen, Arthur Szlam, Marc'Aurelio Ranzato: Asynchronous Local-SGD Training for Language Modeling. CoRR abs/2401.09135 (2024)
- [i45] Abhishek Panigrahi, Nikunj Saunshi, Kaifeng Lyu, Sobhan Miryoosefi, Sashank J. Reddi, Satyen Kale, Sanjiv Kumar: Efficient Stagewise Pretraining via Progressive Subnetworks. CoRR abs/2402.05913 (2024)
- [i44] Naman Agarwal, Pranjal Awasthi, Satyen Kale, Eric Zhao: Stacking as Accelerated Gradient Descent. CoRR abs/2403.04978 (2024)
- 2023
- [c77] Naman Agarwal, Satyen Kale, Karan Singh, Abhradeep Thakurta: Differentially Private and Lazy Online Convex Optimization. COLT 2023: 4599-4632
- [c76] Yae Jee Cho, Pranay Sharma, Gauri Joshi, Zheng Xu, Satyen Kale, Tong Zhang: On the Convergence of Federated Averaging with Cyclic Client Participation. ICML 2023: 5677-5721
- [c75] Rudrajit Das, Satyen Kale, Zheng Xu, Tong Zhang, Sujay Sanghavi: Beyond Uniform Lipschitz Condition in Differentially Private Optimization. ICML 2023: 7066-7101
- [c74] Sashank J. Reddi, Sobhan Miryoosefi, Stefani Karp, Shankar Krishnan, Satyen Kale, Seungyeon Kim, Sanjiv Kumar: Efficient Training of Language Models using Few-Shot Learning. ICML 2023: 14553-14568
- [i43] Yae Jee Cho, Pranay Sharma, Gauri Joshi, Zheng Xu, Satyen Kale, Tong Zhang: On the Convergence of Federated Averaging with Cyclic Client Participation. CoRR abs/2302.03109 (2023)
- [i42] Michael Dinitz, Satyen Kale, Silvio Lattanzi, Sergei Vassilvitskii: Improved Differentially Private Densest Subgraph: Local and Purely Additive. CoRR abs/2308.10316 (2023)
- [i41] Naman Agarwal, Satyen Kale, Karan Singh, Abhradeep Guha Thakurta: Improved Differentially Private and Lazy Online Convex Optimization. CoRR abs/2312.11534 (2023)
- 2022
- [c73] Zebang Shen, Hamed Hassani, Satyen Kale, Amin Karbasi: Federated Functional Gradient Boosting. AISTATS 2022: 7814-7840
- [c72] Naman Agarwal, Satyen Kale, Julian Zimmert: Efficient Methods for Online Multiclass Logistic Regression. ALT 2022: 3-33
- [c71] Julian Zimmert, Naman Agarwal, Satyen Kale: Pushing the Efficiency-Regret Pareto Frontier for Online Learning of Portfolios and Quantum States. COLT 2022: 182-226
- [c70] Zebang Shen, Zhenfu Wang, Satyen Kale, Alejandro Ribeiro, Amin Karbasi, Hamed Hassani: Self-Consistency of the Fokker Planck Equation. COLT 2022: 817-841
- [c69] Oren Mangoubi, Yikai Wu, Satyen Kale, Abhradeep Thakurta, Nisheeth K. Vishnoi: Private Matrix Approximation and Geometry of Unitary Orbits. COLT 2022: 3547-3588
- [c68] Ziwei Ji, Kwangjun Ahn, Pranjal Awasthi, Satyen Kale, Stefani Karp: Agnostic Learnability of Halfspaces via Logistic Loss. ICML 2022: 10068-10103
- [c67] Kwangjun Ahn, Prateek Jain, Ziwei Ji, Satyen Kale, Praneeth Netrapalli, Gil I. Shamir: Reproducibility in Optimization: Theoretical Framework and Limits. NeurIPS 2022
- [c66] Christopher De Sa, Satyen Kale, Jason D. Lee, Ayush Sekhari, Karthik Sridharan: From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent. NeurIPS 2022
- [i40] Ziwei Ji, Kwangjun Ahn, Pranjal Awasthi, Satyen Kale, Stefani Karp: Agnostic Learnability of Halfspaces via Logistic Loss. CoRR abs/2201.13419 (2022)
- [i39] Julian Zimmert, Naman Agarwal, Satyen Kale: Pushing the Efficiency-Regret Pareto Frontier for Online Learning of Portfolios and Quantum States. CoRR abs/2202.02765 (2022)
- [i38] Kwangjun Ahn, Prateek Jain, Ziwei Ji, Satyen Kale, Praneeth Netrapalli, Gil I. Shamir: Reproducibility in Optimization: Theoretical Framework and Limits. CoRR abs/2202.04598 (2022)
- [i37] Sean Augenstein, Andrew Hard, Lin Ning, Karan Singhal, Satyen Kale, Kurt Partridge, Rajiv Mathews: Mixed Federated Learning: Joint Decentralized and Centralized Learning. CoRR abs/2205.13655 (2022)
- [i36] Zebang Shen, Zhenfu Wang, Satyen Kale, Alejandro Ribeiro, Amin Karbasi, Hamed Hassani: Self-Consistency of the Fokker-Planck Equation. CoRR abs/2206.00860 (2022)
- [i35] Jianyu Wang, Rudrajit Das, Gauri Joshi, Satyen Kale, Zheng Xu, Tong Zhang: On the Unreasonable Effectiveness of Federated Averaging with Heterogeneous Data. CoRR abs/2206.04723 (2022)
- [i34] Rudrajit Das, Satyen Kale, Zheng Xu, Tong Zhang, Sujay Sanghavi: Beyond Uniform Lipschitz Condition in Differentially Private Optimization. CoRR abs/2206.10713 (2022)
- [i33] Oren Mangoubi, Yikai Wu, Satyen Kale, Abhradeep Guha Thakurta, Nisheeth K. Vishnoi: Private Matrix Approximation and Geometry of Unitary Orbits. CoRR abs/2207.02794 (2022)
- [i32] Satyen Kale, Jason D. Lee, Chris De Sa, Ayush Sekhari, Karthik Sridharan: From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent. CoRR abs/2210.06705 (2022)
- 2021
- [c65] Naman Agarwal, Pranjal Awasthi, Satyen Kale: A Deep Conditioning Treatment of Neural Networks. ALT 2021: 249-305
- [c64] Daniel Levy, Ziteng Sun, Kareem Amin, Satyen Kale, Alex Kulesza, Mehryar Mohri, Ananda Theertha Suresh: Learning with User-Level Privacy. NeurIPS 2021: 12466-12479
- [c63] Ayush Sekhari, Karthik Sridharan, Satyen Kale: SGD: The Role of Implicit Regularization, Batch-size and Multiple-epochs. NeurIPS 2021: 27422-27433
- [c62] Sai Praneeth Karimireddy, Martin Jaggi, Satyen Kale, Mehryar Mohri, Sashank J. Reddi, Sebastian U. Stich, Ananda Theertha Suresh: Breaking the centralized barrier for cross-device federated learning. NeurIPS 2021: 28663-28676
- [i31] Daniel Levy, Ziteng Sun, Kareem Amin, Satyen Kale, Alex Kulesza, Mehryar Mohri, Ananda Theertha Suresh: Learning with User-Level Privacy. CoRR abs/2102.11845 (2021)
- [i30] Jacob D. Abernethy, Pranjal Awasthi, Satyen Kale: A Multiclass Boosting Framework for Achieving Fast and Provable Adversarial Robustness. CoRR abs/2103.01276 (2021)
- [i29] Zebang Shen, Hamed Hassani, Satyen Kale, Amin Karbasi: Federated Functional Gradient Boosting. CoRR abs/2103.06972 (2021)
- [i28] Satyen Kale, Ayush Sekhari, Karthik Sridharan: SGD: The Role of Implicit Regularization, Batch-size and Multiple-epochs. CoRR abs/2107.05074 (2021)
- [i27] Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H. Brendan McMahan, Blaise Agüera y Arcas, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, Suhas N. Diggavi, Hubert Eichner, Advait Gadhikar, Zachary Garrett, Antonious M. Girgis, Filip Hanzely, Andrew Hard, Chaoyang He, Samuel Horváth, Zhouyuan Huo, Alex Ingerman, Martin Jaggi, Tara Javidi, Peter Kairouz, Satyen Kale, Sai Praneeth Karimireddy, Jakub Konecný, Sanmi Koyejo, Tian Li, Luyang Liu, Mehryar Mohri, Hang Qi, Sashank J. Reddi, Peter Richtárik, Karan Singhal, Virginia Smith, Mahdi Soltanolkotabi, Weikang Song, Ananda Theertha Suresh, Sebastian U. Stich, Ameet Talwalkar, Hongyi Wang, Blake E. Woodworth, Shanshan Wu, Felix X. Yu, Honglin Yuan, Manzil Zaheer, Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu: A Field Guide to Federated Optimization. CoRR abs/2107.06917 (2021)
- [i26] Naman Agarwal, Satyen Kale, Julian Zimmert: Efficient Methods for Online Multiclass Logistic Regression. CoRR abs/2110.03020 (2021)
- 2020
- [c61] Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank J. Reddi, Sebastian U. Stich, Ananda Theertha Suresh: SCAFFOLD: Stochastic Controlled Averaging for Federated Learning. ICML 2020: 5132-5143
- [c60] Pranjal Awasthi, Satyen Kale, Stefani Karp, Mehryar Mohri: PAC-Bayes Learning Bounds for Sample-Dependent Priors. NeurIPS 2020
- [c59] Garima Pruthi, Frederick Liu, Satyen Kale, Mukund Sundararajan: Estimating Training Data Influence by Tracing Gradient Descent. NeurIPS 2020
- [i25] Naman Agarwal, Pranjal Awasthi, Satyen Kale: A Deep Conditioning Treatment of Neural Networks. CoRR abs/2002.01523 (2020)
- [i24] Garima Pruthi, Frederick Liu, Mukund Sundararajan, Satyen Kale: Estimating Training Data Influence by Tracking Gradient Descent. CoRR abs/2002.08484 (2020)
- [i23] Sai Praneeth Karimireddy, Martin Jaggi, Satyen Kale, Mehryar Mohri, Sashank J. Reddi, Sebastian U. Stich, Ananda Theertha Suresh: Mime: Mimicking Centralized Stochastic Algorithms in Federated Learning. CoRR abs/2008.03606 (2020)
2010 – 2019
- 2019
- [c58] Sashank J. Reddi, Satyen Kale, Felix X. Yu, Daniel Niels Holtmann-Rice, Jiecao Chen, Sanjiv Kumar: Stochastic Negative Mining for Learning with Large Output Spaces. AISTATS 2019: 1940-1949
- [c57] Aurélien Garivier, Satyen Kale: Algorithmic Learning Theory 2019: Preface. ALT 2019: 1-2
- [c56] Matthew Staib, Sashank J. Reddi, Satyen Kale, Sanjiv Kumar, Suvrit Sra: Escaping Saddle Points with Adaptive Gradient Methods. ICML 2019: 5956-5965
- [c55] Chuan Guo, Ali Mousavi, Xiang Wu, Daniel Niels Holtmann-Rice, Satyen Kale, Sashank J. Reddi, Sanjiv Kumar: Breaking the Glass Ceiling for Embedding-Based Classifiers for Large Output Spaces. NeurIPS 2019: 4944-4954
- [c54] Dylan J. Foster, Spencer Greenberg, Satyen Kale, Haipeng Luo, Mehryar Mohri, Karthik Sridharan: Hypothesis Set Stability and Generalization. NeurIPS 2019: 6726-6736
- [e3] Aurélien Garivier, Satyen Kale: Algorithmic Learning Theory, ALT 2019, 22-24 March 2019, Chicago, Illinois, USA. Proceedings of Machine Learning Research 98, PMLR 2019 [contents]
- [i22] Matthew Staib, Sashank J. Reddi, Satyen Kale, Sanjiv Kumar, Suvrit Sra: Escaping Saddle Points with Adaptive Gradient Methods. CoRR abs/1901.09149 (2019)
- [i21] Dylan J. Foster, Spencer Greenberg, Satyen Kale, Haipeng Luo, Mehryar Mohri, Karthik Sridharan: Hypothesis Set Stability and Generalization. CoRR abs/1904.04755 (2019)
- [i20] Sashank J. Reddi, Satyen Kale, Sanjiv Kumar: On the Convergence of Adam and Beyond. CoRR abs/1904.09237 (2019)
- [i19] Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank J. Reddi, Sebastian U. Stich, Ananda Theertha Suresh: SCAFFOLD: Stochastic Controlled Averaging for On-Device Federated Learning. CoRR abs/1910.06378 (2019)
- 2018
- [c53] Dylan J. Foster, Satyen Kale, Haipeng Luo, Mehryar Mohri, Karthik Sridharan: Logistic Regression: The Importance of Being Improper. COLT 2018: 167-208
- [c52] Sashank J. Reddi, Satyen Kale, Sanjiv Kumar: On the Convergence of Adam and Beyond. ICLR 2018
- [c51] Ian En-Hsu Yen, Satyen Kale, Felix X. Yu, Daniel Niels Holtmann-Rice, Sanjiv Kumar, Pradeep Ravikumar: Loss Decomposition for Fast Learning in Large Output Spaces. ICML 2018: 5626-5635
- [c50] Scott Aaronson, Xinyi Chen, Elad Hazan, Satyen Kale, Ashwin Nayak: Online Learning of Quantum States. NeurIPS 2018: 8976-8986
- [c49] Manzil Zaheer, Sashank J. Reddi, Devendra Singh Sachan, Satyen Kale, Sanjiv Kumar: Adaptive Methods for Nonconvex Optimization. NeurIPS 2018: 9815-9825
- [i18] Dylan J. Foster, Satyen Kale, Mehryar Mohri, Karthik Sridharan: Parameter-free online learning via model selection. CoRR abs/1801.00101 (2018)
- [i17] Dylan J. Foster, Satyen Kale, Haipeng Luo, Mehryar Mohri, Karthik Sridharan: Logistic Regression: The Importance of Being Improper. CoRR abs/1803.09349 (2018)
- [i16] Sashank J. Reddi, Satyen Kale, Felix X. Yu, Daniel N. Holtmann-Rice, Jiecao Chen, Sanjiv Kumar: Stochastic Negative Mining for Learning with Large Output Spaces. CoRR abs/1810.07076 (2018)
- 2017
- [j13] Elad Hazan, Satyen Kale, Shai Shalev-Shwartz: Near-Optimal Algorithms for Online Matrix Prediction. SIAM J. Comput. 46(2): 744-773 (2017)
- [c48] Satyen Kale, Ohad Shamir: Preface: Conference on Learning Theory (COLT), 2017. COLT 2017: 1-3
- [c47] Satyen Kale, Zohar S. Karnin, Tengyuan Liang, Dávid Pál: Adaptive Feature Selection: Computationally Efficient Online Sparse Linear Regression under RIP. ICML 2017: 1780-1788
- [c46] Dylan J. Foster, Satyen Kale, Mehryar Mohri, Karthik Sridharan: Parameter-Free Online Learning via Model Selection. NIPS 2017: 6020-6030
- [e2] Satyen Kale, Ohad Shamir: Proceedings of the 30th Conference on Learning Theory, COLT 2017, Amsterdam, The Netherlands, 7-10 July 2017. Proceedings of Machine Learning Research 65, PMLR 2017 [contents]
- [i15] Satyen Kale, Zohar S. Karnin, Tengyuan Liang, Dávid Pál: Adaptive Feature Selection: Computationally Efficient Online Sparse Linear Regression under RIP. CoRR abs/1706.04690 (2017)
- 2016
- [j12] Sanjeev Arora, Satyen Kale: A Combinatorial, Primal-Dual Approach to Semidefinite Programs. J. ACM 63(2): 12:1-12:35 (2016)
- [j11] Elad Hazan, Satyen Kale, Manfred K. Warmuth: Learning rotations with little regret. Mach. Learn. 104(1): 129-148 (2016)
- [c45] Dean P. Foster, Satyen Kale, Howard J. Karloff: Online Sparse Linear Regression. COLT 2016: 960-970
- [c44] Noa Elad, Satyen Kale, Joseph (Seffi) Naor: Online Semidefinite Programming. ICALP 2016: 40:1-40:13
- [c43] Alina Beygelzimer, Satyen Kale, Haipeng Luo: Optimal and Adaptive Algorithms for Online Boosting. IJCAI 2016: 4120-4124
- [c42] Satyen Kale, Chansoo Lee, Dávid Pál: Hardness of Online Sleeping Combinatorial Optimization Problems. NIPS 2016: 2181-2189
- [i14] Dean P. Foster, Satyen Kale, Howard J. Karloff: Online Sparse Linear Regression. CoRR abs/1603.02250 (2016)
- 2015
- [c41] Kareem Amin, Satyen Kale, Gerald Tesauro, Deepak S. Turaga: Budgeted Prediction with Expert Advice. AAAI 2015: 2490-2496
- [c40] Alina Beygelzimer, Satyen Kale, Haipeng Luo: Optimal and Adaptive Algorithms for Online Boosting. ICML 2015: 2323-2331
- [c39] Alina Beygelzimer, Elad Hazan, Satyen Kale, Haipeng Luo: Online Gradient Boosting. NIPS 2015: 2458-2466
- [e1] Peter Grünwald, Elad Hazan, Satyen Kale: Proceedings of The 28th Conference on Learning Theory, COLT 2015, Paris, France, July 3-6, 2015. JMLR Workshop and Conference Proceedings 40, JMLR.org 2015 [contents]
- [i13] Alina Beygelzimer, Satyen Kale, Haipeng Luo: Optimal and Adaptive Algorithms for Online Boosting. CoRR abs/1502.02651 (2015)
- [i12] Alina Beygelzimer, Elad Hazan, Satyen Kale, Haipeng Luo: Online Gradient Boosting. CoRR abs/1506.04820 (2015)
- [i11] Satyen Kale, Chansoo Lee, Dávid Pál: Hardness of Online Sleeping Combinatorial Optimization Problems. CoRR abs/1509.03600 (2015)
- 2014
- [j10] Elad Hazan, Satyen Kale: Beyond the regret minimization barrier: optimal algorithms for stochastic strongly-convex optimization. J. Mach. Learn. Res. 15(1): 2489-2512 (2014)
- [c38] Satyen Kale: Multiarmed Bandits With Limited Expert Advice. COLT 2014: 107-122
- [c37] Satyen Kale: Open Problem: Efficient Online Sparse Regression. COLT 2014: 1299-1301
- [c36] Alekh Agarwal, Daniel J. Hsu, Satyen Kale, John Langford, Lihong Li, Robert E. Schapire: Taming the Monster: A Fast and Simple Algorithm for Contextual Bandits. ICML 2014: 1638-1646
- [i10] Alekh Agarwal, Daniel J. Hsu, Satyen Kale, John Langford, Lihong Li, Robert E. Schapire: Taming the Monster: A Fast and Simple Algorithm for Contextual Bandits. CoRR abs/1402.0555 (2014)
- 2013
- [j9] Satyen Kale, Yuval Peres, C. Seshadhri: Noise Tolerance of Expanders and Sublinear Expansion Reconstruction. SIAM J. Comput. 42(1): 305-323 (2013)
- [c35] Anupam Gupta, Satyen Kale, Viswanath Nagarajan, Rishi Saket, Baruch Schieber: The Approximability of the Binary Paintshop Problem. APPROX-RANDOM 2013: 205-217
- [c34] Arpita Ghosh, Satyen Kale, Kevin J. Lang, Benjamin Moseley: Bargaining for Revenue Shares on Tree Trading Networks. IJCAI 2013: 129-135
- [c33] Jacob D. Abernethy, Satyen Kale: Adaptive Market Making via Online Learning. NIPS 2013: 2058-2066
- [i9] Arpita Ghosh, Satyen Kale, Kevin J. Lang, Benjamin Moseley: Bargaining for Revenue Shares on Tree Trading Networks. CoRR abs/1304.5822 (2013)
- [i8] Satyen Kale: Multiarmed Bandits With Limited Expert Advice. CoRR abs/1306.4653 (2013)
- 2012
- [j8] Elad Hazan, Satyen Kale: Online submodular minimization. J. Mach. Learn. Res. 13: 2903-2922 (2012)
- [j7] Sanjeev Arora, Elad Hazan, Satyen Kale: The Multiplicative Weights Update Method: a Meta-Algorithm and Applications. Theory Comput. 8(1): 121-164 (2012)
- [c32] Haim Avron, Satyen Kale, Shiva Prasad Kasiviswanathan, Vikas Sindhwani: Efficient and Practical Stochastic Subgradient Descent for Nuclear Norm Regularization. ICML 2012
- [c31] Elad Hazan, Satyen Kale: Projection-free Online Learning. ICML 2012
- [c30] Satyen Kale: Commentary on "Online Optimization with Gradual Variations". COLT 2012: 6.21-6.24
- [c29] Alekh Agarwal, Miroslav Dudík, Satyen Kale, John Langford, Robert E. Schapire: Contextual Bandit Learning with Predictable Rewards. AISTATS 2012: 19-26
- [c28] Elad Hazan, Satyen Kale, Shai Shalev-Shwartz: Near-Optimal Algorithms for Online Matrix Prediction. COLT 2012: 38.1-38.13
- [i7] Alekh Agarwal, Miroslav Dudík, Satyen Kale, John Langford, Robert E. Schapire: Contextual Bandit Learning with Predictable Rewards. CoRR abs/1202.1334 (2012)
- [i6] Elad Hazan, Satyen Kale, Shai Shalev-Shwartz: Near-Optimal Algorithms for Online Matrix Prediction. CoRR abs/1204.0136 (2012)
- [i5] Elad Hazan, Satyen Kale: Projection-free Online Learning. CoRR abs/1206.4657 (2012)
- 2011
- [j6] Elad Hazan, Satyen Kale: Better Algorithms for Benign Bandits. J. Mach. Learn. Res. 12: 1287-1311 (2011)
- [j5] Satyen Kale, C. Seshadhri: An Expansion Tester for Bounded Degree Graphs. SIAM J. Comput. 40(3): 709-720 (2011)
- [c27] Satyen Kale, C. Seshadhri: Combinatorial Approximation Algorithms for MaxCut using Random Walks. ICS 2011: 367-388
- [c26] Satyen Kale, Ravi Kumar, Sergei Vassilvitskii: Cross-Validation and Mean-Square Stability. ICS 2011: 487-495
- [c25] Elad Hazan, Satyen Kale: Newtron: an Efficient Bandit algorithm for Online Multiclass Prediction. NIPS 2011: 891-899
- [c24] Arpita Ghosh, Satyen Kale, R. Preston McAfee: Who moderates the moderators?: crowdsourcing abuse detection in user-generated content. EC 2011: 167-176
- [c23] Miroslav Dudík, Daniel J. Hsu, Satyen Kale, Nikos Karampatziakis, John Langford, Lev Reyzin, Tong Zhang: Efficient Optimal Learning for Contextual Bandits. UAI 2011: 169-178
- [c22] Elad Hazan, Satyen Kale: Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization. COLT 2011: 421-436
- [c21] Elad Hazan, Satyen Kale: A simple multi-armed bandit algorithm with optimal variation-bounded regret. COLT 2011: 817-820
- [i4] Miroslav Dudík, Daniel J. Hsu, Satyen Kale, Nikos Karampatziakis, John Langford, Lev Reyzin, Tong Zhang: Efficient Optimal Learning for Contextual Bandits. CoRR abs/1106.2369 (2011)
- 2010
- [j4] Elad Hazan, Satyen Kale: Extracting certainty from uncertainty: regret bounded by variation in costs. Mach. Learn. 80(2-3): 165-188 (2010)
- [j3] Sanjeev Arora, Elad Hazan, Satyen Kale: O(sqrt(log n)) Approximation to SPARSEST CUT in Õ(n^2) Time. SIAM J. Comput. 39(5): 1748-1771 (2010)
- [c20] Elad Hazan, Satyen Kale, Manfred K. Warmuth: Learning Rotations with Little Regret. COLT 2010: 144-154
- [c19] Elad Hazan, Satyen Kale, Manfred K. Warmuth: On-line Variance Minimization in O(n^2) per Trial? COLT 2010: 314-315
- [c18] Satyen Kale, Lev Reyzin, Robert E. Schapire: Non-Stochastic Bandit Slate Problems. NIPS 2010: 1054-1062
- [i3] Satyen Kale, C. Seshadhri: Combinatorial Approximation Algorithms for MaxCut using Random Walks. CoRR abs/1008.3938 (2010)
2000 – 2009
- 2009
- [c17] Elad Hazan, Satyen Kale: Beyond Convexity: Online Submodular Minimization. NIPS 2009: 700-708