


36th COLT 2023: Bangalore, India
- Gergely Neu, Lorenzo Rosasco: The Thirty Sixth Annual Conference on Learning Theory, COLT 2023, 12-15 July 2023, Bangalore, India. Proceedings of Machine Learning Research 195, PMLR 2023
- Preface. i
- Alireza Mousavi Hosseini, Tyler K. Farghly, Ye He, Krishna Balasubramanian, Murat A. Erdogdu: Towards a Complete Analysis of Langevin Monte Carlo: Beyond Poincaré Inequality. 1-35
- Matthew Shunshi Zhang, Sinho Chewi, Mufan (Bill) Li, Krishna Balasubramanian, Murat A. Erdogdu: Improved Discretization Analysis for Underdamped Langevin Monte Carlo. 36-71
- Ishaq Aden-Ali, Yeshwanth Cherapanamjeri, Abhishek Shetty, Nikita Zhivotovskiy: The One-Inclusion Graph Algorithm is not Always Optimal. 72-88
- Matthew Faw, Litu Rout, Constantine Caramanis, Sanjay Shakkottai: Beyond Uniform Smoothness: A Stopped Analysis of Adaptive SGD. 89-160
- Bohan Wang, Huishuai Zhang, Zhiming Ma, Wei Chen: Convergence of AdaGrad for Non-convex Objectives: Simple Proofs and Relaxed Assumptions. 161-190
- Yunwen Lei: Stability and Generalization of Stochastic Optimization with Nonconvex and Nonsmooth Problems. 191-227
- Adam Block, Yury Polyanskiy: The Sample Complexity of Approximate Rejection Sampling With Applications to Smoothed Online Learning. 228-273
- Angelos Assos, Idan Attias, Yuval Dagan, Constantinos Daskalakis, Maxwell K. Fishelson: Online Learning and Solving Infinite Games with an ERM Oracle. 274-324
- Changlong Wu, Ananth Grama, Wojciech Szpankowski: Online Learning in Dynamically Changing Environments. 325-358
- David Martínez-Rubio, Sebastian Pokutta: Accelerated Riemannian Optimization: Handling Constraints with a Prox to Bound Geometric Penalties. 359-393
- Sayak Ray Chowdhury, Patrick Saux, Odalric Maillard, Aditya Gopalan: Bregman Deviations of Generic Exponential Families. 394-449
- Kasper Green Larsen: Bagging is an Optimal PAC Learner. 450-468
- Julia Gaudio, Nirmit Joshi: Community Detection in the Hypergraph SBM: Optimal Recovery Given the Similarity Matrix. 469-510
- Valentino Delle Rose, Alexander Kozachinskiy, Cristóbal Rojas, Tomasz Steifer: Find a witness or shatter: the landscape of computable PAC learning. 511-524
- Han Bao: Proper Losses, Moduli of Convexity, and Surrogate Regret Bounds. 525-547
- Rares-Darius Buhai, David Steurer: Beyond Parallel Pancakes: Quasi-Polynomial Time Guarantees for Non-Spherical Gaussian Mixtures. 548-611
- Mohamad Kazem Shirani Faradonbeh, Mohamad Sadegh Shirani Faradonbeh: Online Reinforcement Learning in Stochastic Continuous-Time Systems. 612-656
- Fang Kong, Canzhe Zhao, Shuai Li: Best-of-three-worlds Analysis for Linear Bandits with Follow-the-regularized-leader Algorithm. 657-673
- Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar: Private Online Prediction from Experts: Separations and Faster Rates. 674-699
- Weiwei Liu: Improved Bounds for Multi-task Learning with Trace Norm Regularization. 700-714
- Doron Cohen, Aryeh Kontorovich: Local Glivenko-Cantelli. 715
- Giacomo Greco, Maxence Noble, Giovanni Conforti, Alain Durmus: Non-asymptotic convergence bounds for Sinkhorn iterates and their gradients: a coupling approach. 716-746
- Konstantina Bairaktari, Guy Blanc, Li-Yang Tan, Jonathan R. Ullman, Lydia Zakynthinou: Multitask Learning via Shared Features: Algorithms and Hardness. 747-772
- Yuval Filmus, Steve Hanneke, Idan Mehalel, Shay Moran: Optimal Prediction Using Expert Advice and Randomized Littlestone Dimension. 773-836
- Yuzhou Gu, Yury Polyanskiy: Uniqueness of BP fixed point for the Potts model and applications to community detection. 837-884
- Yuzhou Gu, Yury Polyanskiy: Weak Recovery Threshold for the Hypergraph Stochastic Block Model. 885-920
- Gabriel Arpino, Ramji Venkataramanan: Statistical-Computational Tradeoffs in Mixed Sparse Linear Regression. 921-986
- Alekh Agarwal, Yujia Jin, Tong Zhang: VOQL: Towards Optimal Regret in Model-free RL with Nonlinear Function Approximation. 987-1063
- Zongbo Bao, Penghui Yao: On Testing and Learning Quantum Junta Channels. 1064-1094
- Nicolò Cesa-Bianchi, Tommaso Renato Cesari, Roberto Colomboni, Federico Fusco, Stefano Leonardi: Repeated Bilateral Trade Against a Smoothed Adversary. 1095-1130
- Rémy Degenne: On the Existence of a Complexity in Fixed Budget Bandit Identification. 1131-1154
- Weihang Xu, Simon S. Du: Over-Parameterization Exponentially Slows Down Gradient Descent for Learning a Single Neuron. 1155-1198
- Luca Arnaboldi, Ludovic Stephan, Florent Krzakala, Bruno Loureiro: From high-dimensional & mean-field dynamics to dimensionless ODEs: A unifying approach to SGD in two-layers networks. 1199-1227
- Sholom Schechtman, Daniil Tiapkin, Michael Muehlebach, Éric Moulines: Orthogonal Directions Constrained Gradient Method: from non-linear equality constraints to Stiefel manifold. 1228-1258
- Dan Garber, Ben Kretzu: Projection-free Online Exp-concave Optimization. 1259-1284
- Dirk van der Hoeven, Lukas Zierahn, Tal Lancewicki, Aviv Rosenberg, Nicolò Cesa-Bianchi: A Unified Analysis of Nonstochastic Delayed Feedback for Combinatorial Semi-Bandits, Linear Bandits, and MDPs. 1285-1321
- Andrew J. Wagenmaker, Dylan J. Foster: Instance-Optimality in Interactive Decision Making: Toward a Non-Asymptotic Theory. 1322-1472
- Jiaojiao Fan, Bo Yuan, Yongxin Chen: Improved dimension dependence of a proximal algorithm for sampling. 1473-1521
- Oren Mangoubi, Nisheeth K. Vishnoi: Private Covariance Approximation and Eigenvalue-Gap Bounds for Complex Gaussian Perturbations. 1522-1587
- Sihan Liu, Gaurav Mahajan, Daniel Kane, Shachar Lovett, Gellért Weisz, Csaba Szepesvári: Exponential Hardness of Reinforcement Learning with Linear Function Approximation. 1588-1617
- Adam Block, Max Simchowitz, Alexander Rakhlin: Oracle-Efficient Smoothed Online Learning for Piecewise Continuous Decision Making. 1618-1665
- Anthimos Vardis Kandiros, Constantinos Daskalakis, Yuval Dagan, Davin Choo: Learning and Testing Latent-Tree Ising Models Efficiently. 1666-1729
- Arun Ganesh, Abhradeep Thakurta, Jalaj Upadhyay: Universality of Langevin Diffusion for Private Optimization, with Applications to Sampling from Rashomon Sets. 1730-1773
- Antonio Blanca, Zongchen Chen, Daniel Stefankovic, Eric Vigoda: Complexity of High-Dimensional Identity Testing with Coordinate Conditional Sampling. 1774-1790
- Osama A. Hanna, Lin Yang, Christina Fragouli: Contexts can be Cheap: Solving Stochastic Contextual Bandits with Linear Bandit Algorithms. 1791-1821
- Omar Fawzi, Nicolas Flammarion, Aurélien Garivier, Aadil Oufkir: Quantum Channel Certification with Incoherent Measurements. 1822-1884
- Shay Moran, Ohad Sharon, Iska Tsubari, Sivan Yosebashvili: List Online Classification. 1885-1913
- Advait Parulekar, Liam Collins, Karthikeyan Shanmugam, Aryan Mokhtari, Sanjay Shakkottai: InfoNCE Loss Provably Learns Cluster-Preserving Representations. 1914-1961
- Ruichen Jiang, Qiujiang Jin, Aryan Mokhtari: Online Learning Guided Curvature Approximation: A Quasi-Newton Method with Global Non-Asymptotic Superlinear Convergence. 1962-1992
- Nikita Puchkin, Nikita Zhivotovskiy: Exploring Local Norms in Exp-concave Statistical Learning. 1993-2013
- Gaurav Mahajan, Sham M. Kakade, Akshay Krishnamurthy, Cyril Zhang: Learning Hidden Markov Models Using Conditional Samples. 2014-2066
- Tor Lattimore, András György: A Second-Order Method for Stochastic Bandit Convex Optimisation. 2067-2094
- Tor Lattimore: A Lower Bound for Linear and Kernel Regression with Adaptive Covariates. 2095-2113
- Alekh Agarwal, Yuda Song, Wen Sun, Kaiwen Wang, Mengdi Wang, Xuezhou Zhang: Provable Benefits of Representational Transfer in Reinforcement Learning. 2114-2187
- Victor-Emmanuel Brunel: Geodesically convex M-estimation in metric spaces. 2188-2210
- Ilias Diakonikolas, Jelena Diakonikolas, Daniel M. Kane, Puqian Wang, Nikos Zarifis: Information-Computation Tradeoffs for Learning Margin Halfspaces with Random Classification Noise. 2211-2239
- Kyoungseok Jang, Kwang-Sung Jun, Ilja Kuzborskij, Francesco Orabona: Tighter PAC-Bayes Bounds Through Coin-Betting. 2240-2264
- Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, Masatoshi Uehara: Inference on Strongly Identified Functionals of Weakly Identified Functions. 2265
- Zijian Liu, Jiawei Zhang, Zhengyuan Zhou: Breaking the Lower Bound with (Little) Structure: Acceleration in Non-Convex Stochastic Optimization with Heavy-Tailed Noise. 2266-2290
- Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, Masatoshi Uehara: Minimax Instrumental Variable Regression and L2 Convergence Guarantees without Identification or Closedness. 2291-2318
- Ilias Diakonikolas, Daniel M. Kane, Thanasis Pittas, Nikos Zarifis: SQ Lower Bounds for Learning Mixtures of Separated and Bounded Covariance Gaussians. 2319-2349
- Wenhao Li, Ningyuan Chen: Allocating Divisible Resources on Arms with Unknown and Random Rewards. 2350-2351
- Jonathan A. Kelner, Jerry Li, Allen Liu, Aaron Sidford, Kevin Tian: Semi-Random Sparse Recovery in Nearly-Linear Time. 2352-2398
- Sivakanth Gopi, Yin Tat Lee, Daogao Liu, Ruoqi Shen, Kevin Tian: Algorithmic Aspects of the Log-Laplace Transform and a Non-Euclidean Proximal Sampler. 2399-2439
- Cheng Mao, Alexander S. Wein, Shenduo Zhang: Detection-Recovery Gap for Planted Dense Cycles. 2440-2481
- Raef Bassily, Cristóbal Guzmán, Michael Menart: Differentially Private Algorithms for the Stochastic Saddle Point Problem with Optimal Rates for the Strong Gap. 2482-2508
- Jason M. Altschuler, Kunal Talwar: Resolving the Mixing Time of the Langevin Algorithm to its Stationary Distribution for Log-Concave Sampling. 2509-2510
- Rohith Kuditipudi, John C. Duchi, Saminul Haque: A Pretty Fast Algorithm for Adaptive Private Mean Estimation. 2511-2551
- Emmanuel Abbe, Enric Boix Adserà, Theodor Misiakiewicz: SGD learning on neural networks: leap complexity and saddle-to-saddle dynamics. 2552-2623
- Jason D. Hartline, Liren Shan, Yingkai Li, Yifan Wu: Optimal Scoring Rules for Multi-dimensional Effort. 2624-2650
- Qiwen Cui, Kaiqing Zhang, Simon S. Du: Breaking the Curse of Multiagents in a Large State Space: RL in Markov Games with Independent Linear Function Approximation. 2651-2652
- Shinji Ito, Kei Takemura: Best-of-Three-Worlds Linear Bandit Algorithm with Variance-Adaptive Regret Bounds. 2653-2677
- Dean P. Foster, Dylan J. Foster, Noah Golowich, Alexander Rakhlin: On the Complexity of Multi-Agent Decision Making: From Learning in Games to Partial Monitoring. 2678-2792
- Yuanhao Wang, Qinghua Liu, Yu Bai, Chi Jin: Breaking the Curse of Multiagency: Provably Efficient Decentralized Multi-Agent RL with Function Approximation. 2793-2848
- Wai Ming Tai, Bryon Aragam: Tight Bounds on the Hardness of Learning Simple Nonparametric Mixtures. 2849
- Wei You, Chao Qin, Zihao Wang, Shuoguang Yang: Information-Directed Selection for Top-Two Algorithms. 2850-2851
- David Martínez-Rubio, Elias Samuel Wirth, Sebastian Pokutta: Accelerated and Sparse Algorithms for Approximate Personalized PageRank and Beyond. 2852-2876
- Kefan Dong, Tengyu Ma: Toward L_∞ Recovery of Nonlinear Functions: A Polynomial Sample Complexity Bound for Gaussian Random Fields. 2877-2918
- Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis: Self-Directed Linear Classification. 2919-2947
- Pengyun Yue, Cong Fang, Zhouchen Lin: On the Lower Bound of Minimizing Polyak-Łojasiewicz functions. 2948-2968
- Christopher Criscitiello, Nicolas Boumal: Curvature and complexity: Better lower bounds for geodesically convex optimization. 2969-3013
- Daniel Kane, Ilias Diakonikolas: A Nearly Tight Bound for Fitting an Ellipsoid to Gaussian Random Points. 3014-3028
- Stefan Tiegel: Hardness of Agnostically Learning Halfspaces from Worst-Case Lattice Problems. 3029-3064
- Sourav Chakraborty, Eldar Fischer, Arijit Ghosh, Gopinath Mishra, Sayantan Sen: Testing of Index-Invariant Properties in the Huge Object Model. 3065-3136
- Michal Derezinski: Algorithmic Gaussianization through Sketching: Converting Data into Sub-gaussian Random Designs. 3137-3172
- Spencer Frei, Gal Vardi, Peter L. Bartlett, Nathan Srebro: Benign Overfitting in Linear Classifiers and Leaky ReLU Networks from KKT Conditions for Margin Maximization. 3173-3228
- Ankit Pensia, Amir-Reza Asadi, Varun S. Jog, Po-Ling Loh: Simple Binary Hypothesis Testing under Local Differential Privacy and Communication Constraints. 3229-3230
- David Gamarnik, Eren C. Kizildag, Will Perkins, Changji Xu: Geometric Barriers for Stable and Online Algorithms for Discrepancy Minimization. 3231-3263
- Navid Ardeshir, Daniel J. Hsu, Clayton Hendrick Sanford: Intrinsic dimensionality and generalization properties of the R-norm inductive bias. 3264-3303
- Yuanyu Wan, Lijun Zhang, Mingli Song: Improved Dynamic Regret for Online Frank-Wolfe. 3304-3327
- Ziwei Guan, Yi Zhou, Yingbin Liang: Online Nonconvex Optimization with Limited Instantaneous Oracle Feedback. 3328-3355
- Max Simchowitz, Abhishek Gupta, Kaiqing Zhang: Tackling Combinatorial Distribution Shift: A Matrix Completion Perspective. 3356-3468
- Mirabel E. Reid, Santosh S. Vempala: The k-Cap Process on Geometric Random Graphs. 3469-3509
- Zhiyuan Fan, Jian Li: Efficient Algorithms for Sparse Moment Problems without Separation. 3510-3565
- Cynthia Dwork, Daniel Lee, Huijia Lin, Pranay Tankala: From Pseudorandomness to Multi-Group Fairness and Back. 3566-3614
- Doudou Zhou, Hao Chen: A new ranking scheme for modern data and its application to two-sample hypothesis testing. 3615-3668
- Maria-Luiza Vladarean, Nikita Doikov, Martin Jaggi, Nicolas Flammarion: Linearization Algorithms for Fully Composite Optimization. 3669-3695
- Aniket Das, Dheeraj M. Nagaraj, Praneeth Netrapalli, Dheeraj Baby: Near Optimal Heteroscedastic Regression with Symbiotic Learning. 3696-3757
- Giannis Fikioris, Éva Tardos: Approximately Stationary Bandits with Knapsacks. 3758-3782
- Yuchen Wu, Kangjie Zhou: Lower Bounds for the Convergence of Tensor Power Iteration on Random Overcomplete Models. 3783-3820
- Anish Agarwal, Munther A. Dahleh, Devavrat Shah, Dennis Shen: Causal Matrix Completion. 3821-3826
- Kevin Han Huang, Xing Liu, Andrew B. Duncan, Axel Gandy: A High-dimensional Convergence Theorem for U-statistics with Applications to Kernel-based Testing. 3827-3918
- Shuyu Liu, Florentina Bunea, Jonathan Niles-Weed: Asymptotic confidence sets for random linear programs. 3919-3940
- Yiyun He, Roman Vershynin, Yizhe Zhu: Algorithmically Effective Differentially Private Synthetic Data. 3941-3968
- Dylan J. Foster, Noah Golowich, Yanjun Han: Tight Guarantees for Interactive Decision Making with the Decision-Estimation Coefficient. 3969-4043
- Yiding Hua, Jingqiu Ding, Tommaso d'Orsi, David Steurer: Reaching Kesten-Stigum Threshold in the Stochastic Block Model under Node Corruptions. 4044-4071
- Aniket Das, Dheeraj M. Nagaraj, Anant Raj: Utilising the CLT Structure in Stochastic Gradient based Sampling: Improved Analysis and Faster Algorithms. 4072-4129
- Angeliki Giannou, Shashank Rajput, Dimitris Papailiopoulos: The Expressive Power of Tuning Only the Normalization Layers. 4130-4131
- David Bosch, Ashkan Panahi, Babak Hassibi: Precise Asymptotic Analysis of Deep Random Feature Models. 4132-4179
- Constantinos Daskalakis, Noah Golowich, Kaiqing Zhang: The Complexity of Markov Equilibrium in Stochastic Games. 4180-4234
- Aaron Potechin, Paxton M. Turner, Prayaag Venkat, Alexander S. Wein: Near-optimal fitting of ellipsoids to random points. 4235-4295
- Zeyu Jia, Yury Polyanskiy, Yihong Wu: Entropic characterization of optimal rates for learning Gaussian mixtures. 4296-4335
- Zihao Hu, Guanghui Wang, Jacob D. Abernethy: Minimizing Dynamic Regret on Geodesic Metric Spaces. 4336-4383
- Abhishek Dhawan, Cheng Mao, Ashwin Pananjady: Sharp analysis of EM for learning mixtures of pairwise differences. 4384-4428
- Pengyun Yue, Long Yang, Cong Fang, Zhouchen Lin: Zeroth-order Optimization with Weak Dimension Dependency. 4429-4472
- Zakaria Mhammedi, Khashayar Gatmiry: Quasi-Newton Steps for Efficient Online Exp-Concave Optimization. 4473-4503
- Yunbum Kook, Yin Tat Lee, Ruoqi Shen, Santosh S. Vempala: Condition-number-independent Convergence Rate of Riemannian Hamiltonian Monte Carlo with Numerical Integrators. 4504-4569
- Michael I. Jordan, Guy Kornowski, Tianyi Lin, Ohad Shamir, Manolis Zampetakis: Deterministic Nonsmooth Nonconvex Optimization. 4570-4597
- Zeyuan Allen-Zhu, Yuanzhi Li: Backward Feature Correction: How Deep Learning Performs Deep (Hierarchical) Learning. 4598
- Naman Agarwal, Satyen Kale, Karan Singh, Abhradeep Thakurta: Differentially Private and Lazy Online Convex Optimization. 4599-4632
- Aleksandrs Slivkins, Karthik Abinav Sankararaman, Dylan J. Foster: Contextual Bandits with Packing and Covering Constraints: A Modular Lagrangian Approach via Regression. 4633-4656
- Arnaud Descours, Tom Huix, Arnaud Guillin, Manon Michel, Éric Moulines, Boris Nectoux: Law of Large Numbers for Bayesian two-layer Neural Network trained with Variational Inference. 4657-4695
- Moïse Blanchard, Junhui Zhang, Patrick Jaillet: Quadratic Memory is Necessary for Optimal Query Complexity in Convex Optimization: Center-of-Mass is Pareto-Optimal. 4696-4736
- Gleb Novikov: Sparse PCA Beyond Covariance Thresholding. 4737-4776
- Shivam Gupta, Jasper C. H. Lee, Eric Price: Finite-Sample Symmetric Mean Estimation with Fisher Information Rate. 4777-4830
- Moses Charikar, Beidi Chen, Christopher Ré, Erik Waingarten: Fast Algorithms for a New Relaxation of Optimal Transport. 4831-4862
- Sarah Sachs, Tim van Erven, Liam Hodgkinson, Rajiv Khanna, Umut Simsekli: Generalization Guarantees via Algorithm-dependent Rademacher Complexity. 4863-4880
- Naren Sarayu Manoj, Nathan Srebro: Shortest Program Interpolation Learning. 4881-4901
- Yi Li, Honghao Lin, David P. Woodruff: ℓp-Regression in the Arbitrary Partition Model of Communication. 4902-4928
- Yutong Wang, Clayton Scott: On Classification-Calibration of Gamma-Phi Losses. 4929-4951
- Ibrahim Issa, Amedeo Roberto Esposito, Michael Gastpar: Asymptotically Optimal Generalization Error Bounds for Noisy, Iterative Algorithms. 4952-4976
- Heyang Zhao, Jiafan He, Dongruo Zhou, Tong Zhang, Quanquan Gu: Variance-Dependent Regret Bounds for Linear Bandits and Reinforcement Learning: Adaptivity and Computational Efficiency. 4977-5020
- Saachi Mutreja, Jonathan Shafer: PAC Verification of Statistical Algorithms. 5021-5043
- Aymen Al Marjani, Andrea Tirinzoni, Emilie Kaufmann: Active Coverage for PAC Reinforcement Learning. 5044-5109
- Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Ayush Sekhari, Chiyuan Zhang: Ticketed Learning-Unlearning Schemes. 5110-5139
- Mahdi Soltanolkotabi, Dominik Stöger, Changzhi Xie: Implicit Balancing and Regularization: Generalization and Convergence Guarantees for Overparameterized Asymmetric Matrix Sensing. 5140-5142
- Bobby Kleinberg, Renato Paes Leme, Jon Schneider, Yifeng Teng: U-Calibration: Forecasting for an Unknown Agent. 5143-5145
- Constantinos Daskalakis, Noah Golowich, Stratis Skoulakis, Emmanouil Zampetakis: STay-ON-the-Ridge: Guaranteed Convergence to Local Minimax Equilibrium in Nonconvex-Nonconcave Games. 5146-5198
- Soham Jana, Yury Polyanskiy, Anzo Z. Teh, Yihong Wu: Empirical Bayes via ERM and Rademacher complexities: the Poisson model. 5199-5235
- Loucas Pillaud-Vivien, Francis R. Bach: Kernelized Diffusion Maps. 5236-5259
- Ilias Diakonikolas, Daniel M. Kane, Yuetian Luo, Anru Zhang: Statistical and Computational Limits for Tensor-on-Tensor Association Detection. 5260-5310
- Ramchandran Muthukumar, Jeremias Sulam: Sparsity-aware generalization theory for deep neural networks. 5311-5342
- Pravesh Kothari, Santosh S. Vempala, Alexander S. Wein, Jeff Xu: Is Planted Coloring Easier than Planted Clique? 5343-5372
- Yujia Jin, Christopher Musco, Aaron Sidford, Apoorv Vikram Singh: Moments, Random Walks, and Limits for Spectrum Approximation. 5373-5394
- Patrik R. Gerber, Yanjun Han, Yury Polyanskiy: Minimax optimal testing by classification. 5395-5432
- Nataly Brukhim, Steve Hanneke, Shay Moran: Improper Multiclass Boosting. 5433-5452
- Ilias Diakonikolas, Sushrut Karmalkar, Jongho Park, Christos Tzamos: Distribution-Independent Regression for Generalized Linear Models with Oblivious Corruptions. 5453-5475
- Zihan Zhang, Qiaomin Xie: Sharper Model-free Reinforcement Learning for Average-reward Markov Decision Processes. 5476-5477
- Xuyang Zhao, Huiyuan Wang, Wei Lin: The Aggregation-Heterogeneity Trade-off in Federated Learning. 5478-5502
- Christoph Dann, Chen-Yu Wei, Julian Zimmert: A Blackbox Approach to Best of Both Worlds in Bandits and Beyond. 5503-5570
- Alexandros Hollender, Emmanouil Zampetakis: The Computational Complexity of Finding Stationary Points in Non-Convex Optimization. 5571-5572
- Elchanan Mossel, Jonathan Niles-Weed, Youngtak Sohn, Nike Sun, Ilias Zadik: Sharp thresholds in inference of planted subgraphs. 5573-5577
- Gavin Brown, Samuel B. Hopkins, Adam Smith: Fast, Sample-Efficient, Affine-Invariant Private Mean and Covariance Estimation for Subgaussian Distributions. 5578-5579
- Sitan Chen, Zehao Dou, Surbhi Goel, Adam R. Klivans, Raghu Meka: Learning Narrow One-Hidden-Layer ReLU Networks. 5580-5614
- Steve Hanneke, Shay Moran, Qian Zhang: Universal Rates for Multiclass Learning. 5615-5681
- Steve Hanneke, Shay Moran, Vinod Raman, Unique Subedi, Ambuj Tewari: Multiclass Online Learning and Uniform Convergence. 5682-5696
- Jaouad Mourtada, Tomas Vaskevicius, Nikita Zhivotovskiy: Local Risk Bounds for Statistical Aggregation. 5697-5698
- Yuan Cao, Difan Zou, Yuanzhi Li, Quanquan Gu: The Implicit Bias of Batch Normalization in Linear Models and Two-layer Linear Convolutional Neural Networks. 5699-5753
- Bo Yuan, Jiaojiao Fan, Jiaming Liang, Andre Wibisono, Yongxin Chen: On a Class of Gibbs Sampling over Networks. 5754-5780
- Steve Hanneke, Samory Kpotufe, Yasaman Mahdaviyeh: Limits of Model Selection under Transfer Learning. 5781-5812
- Steve Hanneke, Liu Yang: Bandit Learnability can be Undecidable. 5813-5849
- Guy Bresler, Tianze Jiang: Detection-Recovery and Detection-Refutation Gaps via Reductions from Planted Clique. 5850-5889
- Olivier Bousquet, Steve Hanneke, Shay Moran, Jonathan Shafer, Ilya O. Tolstikhin: Fine-Grained Distribution-Dependent Learning Curves. 5890-5924
- Stanislav Minsker: Efficient median of means estimator. 5925-5933
- Doron Cohen, Aryeh Kontorovich: Open problem: log(n) factor in "Local Glivenko-Cantelli". 5934-5936
- Manfred K. Warmuth, Ehsan Amid: Open Problem: Learning sparse linear concepts by priming the features. 5937-5942
- Pranjal Awasthi, Nika Haghtalab, Eric Zhao: Open Problem: The Sample Complexity of Multi-Distribution Learning for VC Classes. 5943-5949
- Christopher Criscitiello, David Martínez-Rubio, Nicolas Boumal: Open Problem: Polynomial linearly-convergent method for g-convex optimization? 5950-5956
- Jiseok Chae, Kyuwon Kim, Donghwan Kim: Open Problem: Is There a First-Order Method that Only Converges to Local Minimax Optima? 5957-5964
