


Ohad Shamir
2020 – today
- 2023
- [j19] Ohad Shamir: The Implicit Bias of Benign Overfitting. J. Mach. Learn. Res. 24: 113:1-113:40 (2023)
- [c106] Nadav Timor, Gal Vardi, Ohad Shamir: Implicit Regularization Towards Rank Minimization in ReLU Networks. ALT 2023: 1429-1459
- [c105] Michael I. Jordan, Guy Kornowski, Tianyi Lin, Ohad Shamir, Manolis Zampetakis: Deterministic Nonsmooth Nonconvex Optimization. COLT 2023: 4570-4597
- [i98] Michael I. Jordan, Guy Kornowski, Tianyi Lin, Ohad Shamir, Manolis Zampetakis: Deterministic Nonsmooth Nonconvex Optimization. CoRR abs/2302.08300 (2023)
- [i97] Guy Kornowski, Gilad Yehudai, Ohad Shamir: From Tempered to Benign Overfitting in ReLU Neural Networks. CoRR abs/2305.15141 (2023)
- [i96] Roey Magen, Ohad Shamir: Initialization-Dependent Sample Complexity of Linear Predictors and Neural Networks. CoRR abs/2305.16475 (2023)
- [i95] Guy Kornowski, Ohad Shamir: An Algorithm with Optimal Dimension-Dependence for Zero-Order Nonsmooth Nonconvex Stochastic Optimization. CoRR abs/2307.04504 (2023)
- 2022
- [j18] Guy Kornowski, Ohad Shamir: Oracle Complexity in Nonsmooth Nonconvex Optimization. J. Mach. Learn. Res. 23: 314:1-314:44 (2022)
- [c104] Ohad Shamir: The Implicit Bias of Benign Overfitting. COLT 2022: 448-478
- [c103] Gal Vardi, Gilad Yehudai, Ohad Shamir: Width is Less Important than Depth in ReLU Neural Networks. COLT 2022: 1249-1281
- [c102] Gal Vardi, Gilad Yehudai, Ohad Shamir: On the Optimal Memorization Power of ReLU Neural Networks. ICLR 2022
- [c101] Blake E. Woodworth, Brian Bullins, Ohad Shamir, Nathan Srebro: The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication (Extended Abstract). IJCAI 2022: 5359-5363
- [c100] Ohad Shamir: Elephant in the Room: Non-Smooth Non-Convex Optimization. ISAIM 2022
- [c99] Niv Haim, Gal Vardi, Gilad Yehudai, Ohad Shamir, Michal Irani: Reconstructing Training Data From Trained Neural Networks. NeurIPS 2022
- [c98] Gal Vardi, Ohad Shamir, Nati Srebro: The Sample Complexity of One-Hidden-Layer Neural Networks. NeurIPS 2022
- [c97] Gal Vardi, Ohad Shamir, Nati Srebro: On Margin Maximization in Linear and ReLU Networks. NeurIPS 2022
- [c96] Gal Vardi, Gilad Yehudai, Ohad Shamir: Gradient Methods Provably Converge to Non-Robust Networks. NeurIPS 2022
- [i94] Ohad Shamir: The Implicit Bias of Benign Overfitting. CoRR abs/2201.11489 (2022)
- [i93] Nadav Timor, Gal Vardi, Ohad Shamir: Implicit Regularization Towards Rank Minimization in ReLU Networks. CoRR abs/2201.12760 (2022)
- [i92] Gal Vardi, Gilad Yehudai, Ohad Shamir: Width is Less Important than Depth in ReLU Neural Networks. CoRR abs/2202.03841 (2022)
- [i91] Gal Vardi, Gilad Yehudai, Ohad Shamir: Gradient Methods Provably Converge to Non-Robust Networks. CoRR abs/2202.04347 (2022)
- [i90] Gal Vardi, Ohad Shamir, Nathan Srebro: The Sample Complexity of One-Hidden-Layer Neural Networks. CoRR abs/2202.06233 (2022)
- [i89] Niv Haim, Gal Vardi, Gilad Yehudai, Ohad Shamir, Michal Irani: Reconstructing Training Data from Trained Neural Networks. CoRR abs/2206.07758 (2022)
- [i88] Guy Kornowski, Ohad Shamir: On the Complexity of Finding Small Subgradients in Nonsmooth Optimization. CoRR abs/2209.10346 (2022)
- 2021
- [j17] Ohad Shamir: Gradient Methods Never Overfit On Separable Data. J. Mach. Learn. Res. 22: 85:1-85:20 (2021)
- [c95] Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir: The Connection Between Approximation, Depth Separation and Learnability in Neural Networks. COLT 2021: 3265-3295
- [c94] Itay Safran, Gilad Yehudai, Ohad Shamir: The Effects of Mild Over-parameterization on the Optimization Landscape of Shallow ReLU Neural Networks. COLT 2021: 3889-3934
- [c93] Gal Vardi, Daniel Reichman, Toniann Pitassi, Ohad Shamir: Size and Depth Separation in Approximating Benign Functions with Neural Networks. COLT 2021: 4195-4223
- [c92] Gal Vardi, Ohad Shamir: Implicit Regularization in ReLU Networks with the Square Loss. COLT 2021: 4224-4258
- [c91] Blake E. Woodworth, Brian Bullins, Ohad Shamir, Nathan Srebro: The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication. COLT 2021: 4386-4437
- [c90] Guy Kornowski, Ohad Shamir: Oracle Complexity in Nonsmooth Nonconvex Optimization. NeurIPS 2021: 324-334
- [c89] Itay Safran, Ohad Shamir: Random Shuffling Beats SGD Only After Many Epochs on Ill-Conditioned Problems. NeurIPS 2021: 15151-15161
- [c88] Brian Bullins, Kumar Kshitij Patel, Ohad Shamir, Nathan Srebro, Blake E. Woodworth: A Stochastic Newton Algorithm for Distributed Convex Optimization. NeurIPS 2021: 26818-26830
- [c87] Gal Vardi, Gilad Yehudai, Ohad Shamir: Learning a Single Neuron with Bias Using Gradient Descent. NeurIPS 2021: 28690-28700
- [i87] Gal Vardi, Daniel Reichman, Toniann Pitassi, Ohad Shamir: Size and Depth Separation in Approximating Natural Functions with Neural Networks. CoRR abs/2102.00314 (2021)
- [i86] Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir: The Connection Between Approximation, Depth Separation and Learnability in Neural Networks. CoRR abs/2102.00434 (2021)
- [i85] Blake E. Woodworth, Brian Bullins, Ohad Shamir, Nathan Srebro: The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication. CoRR abs/2102.01583 (2021)
- [i84] Guy Kornowski, Ohad Shamir: Oracle Complexity in Nonsmooth Nonconvex Optimization. CoRR abs/2104.06763 (2021)
- [i83] Gal Vardi, Gilad Yehudai, Ohad Shamir: Learning a Single Neuron with Bias Using Gradient Descent. CoRR abs/2106.01101 (2021)
- [i82] Itay Safran, Ohad Shamir: Random Shuffling Beats SGD Only After Many Epochs on Ill-Conditioned Problems. CoRR abs/2106.06880 (2021)
- [i81] Gal Vardi, Ohad Shamir, Nathan Srebro: On Margin Maximization in Linear and ReLU Networks. CoRR abs/2110.02732 (2021)
- [i80] Brian Bullins, Kumar Kshitij Patel, Ohad Shamir, Nathan Srebro, Blake E. Woodworth: A Stochastic Newton Algorithm for Distributed Convex Optimization. CoRR abs/2110.02954 (2021)
- [i79] Gal Vardi, Gilad Yehudai, Ohad Shamir: On the Optimal Memorization Power of ReLU Neural Networks. CoRR abs/2110.03187 (2021)
- [i78] Liran Szlak, Ohad Shamir: Convergence Results For Q-Learning With Experience Replay. CoRR abs/2112.04213 (2021)
- [i77] Liran Szlak, Ohad Shamir: Replay For Safety. CoRR abs/2112.04229 (2021)
- 2020
- [c86] Yossi Arjevani, Ohad Shamir, Nathan Srebro: A Tight Convergence Analysis for Stochastic Gradient Descent with Delayed Updates. ALT 2020: 111-132
- [c85] Itay Safran, Ohad Shamir: How Good is SGD with Random Shuffling? COLT 2020: 3250-3284
- [c84] Gilad Yehudai, Ohad Shamir: Learning a Single Neuron with Gradient Methods. COLT 2020: 3756-3786
- [c83] Yoel Drori, Ohad Shamir: The Complexity of Finding Stationary Points with Stochastic Gradient Descent. ICML 2020: 2658-2667
- [c82] Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir: Proving the Lottery Ticket Hypothesis: Pruning is All You Need. ICML 2020: 6682-6691
- [c81] Blake E. Woodworth, Kumar Kshitij Patel, Sebastian U. Stich, Zhen Dai, Brian Bullins, H. Brendan McMahan, Ohad Shamir, Nathan Srebro: Is Local SGD Better than Minibatch SGD? ICML 2020: 10334-10343
- [c80] Gal Vardi, Ohad Shamir: Neural Networks with Small Weights and Depth-Separation Barriers. NeurIPS 2020
- [i76] Gilad Yehudai, Ohad Shamir: Learning a Single Neuron with Gradient Methods. CoRR abs/2001.05205 (2020)
- [i75] Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir: Proving the Lottery Ticket Hypothesis: Pruning is All You Need. CoRR abs/2002.00585 (2020)
- [i74] Blake E. Woodworth, Kumar Kshitij Patel, Sebastian U. Stich, Zhen Dai, Brian Bullins, H. Brendan McMahan, Ohad Shamir, Nathan Srebro: Is Local SGD Better than Minibatch SGD? CoRR abs/2002.07839 (2020)
- [i73] Ohad Shamir: Can We Find Near-Approximately-Stationary Points of Nonsmooth Nonconvex Functions? CoRR abs/2002.11962 (2020)
- [i72] Gal Vardi, Ohad Shamir: Neural Networks with Small Weights and Depth-Separation Barriers. CoRR abs/2006.00625 (2020)
- [i71] Itay Safran, Gilad Yehudai, Ohad Shamir: The Effects of Mild Over-parameterization on the Optimization Landscape of Shallow ReLU Neural Networks. CoRR abs/2006.01005 (2020)
- [i70] Ohad Shamir: Gradient Methods Never Overfit On Separable Data. CoRR abs/2007.00028 (2020)
- [i69] Guy Kornowski, Ohad Shamir: High-Order Oracle Complexity of Smooth and Strongly Convex Optimization. CoRR abs/2010.06642 (2020)
- [i68] Gal Vardi, Ohad Shamir: Implicit Regularization in ReLU Networks with the Square Loss. CoRR abs/2012.05156 (2020)
- [i67] Gal Vardi, Ohad Shamir: Neural Networks with Small Weights and Depth-Separation Barriers. Electron. Colloquium Comput. Complex. TR20 (2020)
2010 – 2019
- 2019
- [j16] Yossi Arjevani, Ohad Shamir, Ron Shiff: Oracle complexity of second-order methods for smooth convex optimization. Math. Program. 178(1-2): 327-360 (2019)
- [c79] Yuval Dagan, Gil Kur, Ohad Shamir: Space lower bounds for linear prediction in the streaming model. COLT 2019: 929-954
- [c78] Dylan J. Foster, Ayush Sekhari, Ohad Shamir, Nathan Srebro, Karthik Sridharan, Blake E. Woodworth: The Complexity of Making the Gradient Small in Stochastic Convex Optimization. COLT 2019: 1319-1345
- [c77] Itay Safran, Ronen Eldan, Ohad Shamir: Depth Separations in Neural Networks: What is Actually Being Separated? COLT 2019: 2664-2666
- [c76] Ohad Shamir: Exponential Convergence Time of Gradient Descent for One-Dimensional Deep Linear Neural Networks. COLT 2019: 2691-2713
- [c75] Gilad Yehudai, Ohad Shamir: On the Power and Limitations of Random Features for Understanding Neural Networks. NeurIPS 2019: 6594-6604
- [i66] Yuval Dagan, Gil Kur, Ohad Shamir: Space lower bounds for linear prediction. CoRR abs/1902.03498 (2019)
- [i65] Dylan J. Foster, Ayush Sekhari, Ohad Shamir, Nathan Srebro, Karthik Sridharan, Blake E. Woodworth: The Complexity of Making the Gradient Small in Stochastic Convex Optimization. CoRR abs/1902.04686 (2019)
- [i64] Gilad Yehudai, Ohad Shamir: On the Power and Limitations of Random Features for Understanding Neural Networks. CoRR abs/1904.00687 (2019)
- [i63] Itay Safran, Ronen Eldan, Ohad Shamir: Depth Separations in Neural Networks: What is Actually Being Separated? CoRR abs/1904.06984 (2019)
- [i62] Itay Safran, Ohad Shamir: How Good is SGD with Random Shuffling? CoRR abs/1908.00045 (2019)
- [i61] Yoel Drori, Ohad Shamir: The Complexity of Finding Stationary Points with Stochastic Gradient Descent. CoRR abs/1910.01845 (2019)
- 2018
- [j15] Ohad Shamir: Distribution-Specific Hardness of Learning Neural Networks. J. Mach. Learn. Res. 19: 32:1-32:29 (2018)
- [c74] Nicolò Cesa-Bianchi, Ohad Shamir: Bandit Regret Scaling with the Effective Loss Range. ALT 2018: 128-151
- [c73] Noah Golowich, Alexander Rakhlin, Ohad Shamir: Size-Independent Sample Complexity of Neural Networks. COLT 2018: 297-299
- [c72] Yuval Dagan, Ohad Shamir: Detecting Correlations with Little Memory and Communication. COLT 2018: 1145-1198
- [c71] Itay Safran, Ohad Shamir: Spurious Local Minima are Common in Two-Layer ReLU Neural Networks. ICML 2018: 4430-4438
- [c70] Ohad Shamir: Are ResNets Provably Better than Linear Predictors? NeurIPS 2018: 505-514
- [c69] Murat A. Erdogdu, Lester Mackey, Ohad Shamir: Global Non-convex Optimization with Discretized Diffusions. NeurIPS 2018: 9694-9703
- [i60] Yuval Dagan, Ohad Shamir: Detecting Correlations with Little Memory and Communication. CoRR abs/1803.01420 (2018)
- [i59] Ohad Shamir: Are ResNets Provably Better than Linear Predictors? CoRR abs/1804.06739 (2018)
- [i58] Yossi Arjevani, Ohad Shamir, Nathan Srebro: A Tight Convergence Analysis for Stochastic Gradient Descent with Delayed Updates. CoRR abs/1806.10188 (2018)
- [i57] Ohad Shamir: Exponential Convergence Time of Gradient Descent for One-Dimensional Deep Linear Neural Networks. CoRR abs/1809.08587 (2018)
- [i56] Murat A. Erdogdu, Lester Mackey, Ohad Shamir: Global Non-convex Optimization with Discretized Diffusions. CoRR abs/1810.12361 (2018)
- 2017
- [j14] Ohad Shamir: An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback. J. Mach. Learn. Res. 18: 52:1-52:11 (2017)
- [j13] Noga Alon, Nicolò Cesa-Bianchi, Claudio Gentile, Shie Mannor, Yishay Mansour, Ohad Shamir: Nonstochastic Multi-Armed Bandits with Graph-Structured Feedback. SIAM J. Comput. 46(6): 1785-1826 (2017)
- [c68] Satyen Kale, Ohad Shamir: Preface: Conference on Learning Theory (COLT), 2017. COLT 2017: 1-3
- [c67] Yossi Arjevani, Ohad Shamir: Oracle Complexity of Second-Order Methods for Finite-Sum Problems. ICML 2017: 205-213
- [c66] Dan Garber, Ohad Shamir, Nathan Srebro: Communication-efficient Algorithms for Distributed Stochastic Principal Component Analysis. ICML 2017: 1203-1212
- [c65] Itay Safran, Ohad Shamir: Depth-Width Tradeoffs in Approximating Natural Functions with Neural Networks. ICML 2017: 2979-2987
- [c64] Shai Shalev-Shwartz, Ohad Shamir, Shaked Shammah: Failures of Gradient-Based Deep Learning. ICML 2017: 3067-3075
- [c63] Ohad Shamir, Liran Szlak: Online Learning with Local Permutations and Delayed Feedback. ICML 2017: 3086-3094
- [e2] Satyen Kale, Ohad Shamir: Proceedings of the 30th Conference on Learning Theory, COLT 2017, Amsterdam, The Netherlands, 7-10 July 2017. Proceedings of Machine Learning Research 65, PMLR 2017 [contents]
- [i55] Dan Garber, Ohad Shamir, Nathan Srebro: Communication-efficient Algorithms for Distributed Stochastic Principal Component Analysis. CoRR abs/1702.08169 (2017)
- [i54] Ohad Shamir, Liran Szlak: Online Learning with Local Permutations and Delayed Feedback. CoRR abs/1703.04274 (2017)
- [i53] Shai Shalev-Shwartz, Ohad Shamir, Shaked Shammah: Failures of Deep Learning. CoRR abs/1703.07950 (2017)
- [i52] Nicolò Cesa-Bianchi, Ohad Shamir: Bandit Regret Scaling with the Effective Loss Range. CoRR abs/1705.05091 (2017)
- [i51] Shai Shalev-Shwartz, Ohad Shamir, Shaked Shammah: Weight Sharing is Crucial to Succesful Optimization. CoRR abs/1706.00687 (2017)
- [i50] Noah Golowich, Alexander Rakhlin, Ohad Shamir: Size-Independent Sample Complexity of Neural Networks. CoRR abs/1712.06541 (2017)
- [i49] Itay Safran, Ohad Shamir: Spurious Local Minima are Common in Two-Layer ReLU Neural Networks. CoRR abs/1712.08968 (2017)
- 2016
- [j12] Yossi Arjevani, Shai Shalev-Shwartz, Ohad Shamir: On Lower and Upper Bounds in Smooth and Strongly Convex Optimization. J. Mach. Learn. Res. 17: 126:1-126:51 (2016)
- [j11] Niv Buchbinder, Shahar Chen, Joseph Naor, Ohad Shamir: Unified Algorithms for Online Learning and Competitive Analysis. Math. Oper. Res. 41(2): 612-625 (2016)
- [c62] Ronen Eldan, Ohad Shamir: The Power of Depth for Feedforward Neural Networks. COLT 2016: 907-940
- [c61] Jonathan Rosenski, Ohad Shamir, Liran Szlak: Multi-Player Bandits - a Musical Chairs Approach. ICML 2016: 155-163
- [c60] Ohad Shamir: Fast Stochastic Algorithms for SVD and PCA: Convergence Properties and Convexity. ICML 2016: 248-256
- [c59] Ohad Shamir: Convergence of Stochastic Gradient Descent for PCA. ICML 2016: 257-265
- [c58] Itay Safran, Ohad Shamir: On the Quality of the Initial Basin in Overspecified Neural Networks. ICML 2016: 774-782
- [c57] Yossi Arjevani, Ohad Shamir: On the Iteration Complexity of Oblivious First-Order Optimization Algorithms. ICML 2016: 908-916
- [c56] Ohad Shamir: Without-Replacement Sampling for Stochastic Gradient Methods. NIPS 2016: 46-54
- [c55] Yossi Arjevani, Ohad Shamir: Dimension-Free Iteration Complexity of Finite Sum Optimization Problems. NIPS 2016: 3540-3548
- [e1] Vitaly Feldman, Alexander Rakhlin, Ohad Shamir: Proceedings of the 29th Conference on Learning Theory, COLT 2016, New York, USA, June 23-26, 2016. JMLR Workshop and Conference Proceedings 49, JMLR.org 2016 [contents]
- [i48] Ohad Shamir: Without-Replacement Sampling for Stochastic Gradient Methods: Convergence Results and Application to Distributed Optimization. CoRR abs/1603.00570 (2016)
- [i47] Yossi Arjevani, Ohad Shamir: On the Iteration Complexity of Oblivious First-Order Optimization Algorithms. CoRR abs/1605.03529 (2016)
- [i46] Yossi Arjevani, Ohad Shamir: Dimension-Free Iteration Complexity of Finite Sum Optimization Problems. CoRR abs/1606.09333 (2016)
- [i45] Ohad Shamir: Distribution-Specific Hardness of Learning Neural Networks. CoRR abs/1609.01037 (2016)
- [i44] Itay Safran, Ohad Shamir: Depth Separation in ReLU Networks for Approximating Smooth Non-Linear Functions. CoRR abs/1610.09887 (2016)
- [i43] Yossi Arjevani, Ohad Shamir: Oracle Complexity of Second-Order Methods for Finite-Sum Problems. CoRR abs/1611.04982 (2016)
- 2015
- [j10] Ohad Shamir: The sample complexity of learning linear predictors with the squared loss. J. Mach. Learn. Res. 16: 3475-3486 (2015)
- [c54] Ethan Fetaya, Ohad Shamir, Shimon Ullman: Graph Approximation and Clustering on a Budget. AISTATS 2015
- [c53] Nicolò Cesa-Bianchi, Yishay Mansour, Ohad Shamir: On the Complexity of Learning with Kernels. COLT 2015: 297-325
- [c52] Ohad Shamir: On the Complexity of Bandit Linear Optimization. COLT 2015: 1523-1551
- [c51] Ohad Shamir: A Stochastic PCA and SVD Algorithm with an Exponential Convergence Rate. ICML 2015: 144-152
- [c50] Doron Kukliansky, Ohad Shamir: Attribute Efficient Linear Regression with Distribution-Dependent Sampling. ICML 2015: 153-161
- [c49] Yossi Arjevani, Ohad Shamir: Communication Complexity of Distributed Convex Learning and Optimization. NIPS 2015: 1756-1764
- [i42] Yossi Arjevani, Shai Shalev-Shwartz, Ohad Shamir: On Lower and Upper Bounds for Smooth and Strongly Convex Optimization Problems. CoRR abs/1503.06833 (2015)
- [i41] Yossi Arjevani, Ohad Shamir: Communication Complexity of Distributed Convex Learning and Optimization. CoRR abs/1506.01900 (2015)
- [i40] Ohad Shamir: An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback. CoRR abs/1507.08752 (2015)
- [i39] Ohad Shamir: Fast Stochastic Algorithms for SVD and PCA: Convergence Properties and Convexity. CoRR abs/1507.08788 (2015)
- [i38]