


Peter Richtárik
2020 – today

- 2023
- [i158] Konstantin Mishchenko, Slavomír Hanzely, Peter Richtárik: Convergence of First-Order Algorithms for Meta-Learning with Moreau Envelopes. CoRR abs/2301.06806 (2023)
- 2022
- [j41] Aritra Dutta, El Houcine Bergou, Yunming Xiao, Marco Canini, Peter Richtárik: Direct nonlinear acceleration. EURO J. Comput. Optim. 10: 100047 (2022)
- [j40] Adil Salim, Laurent Condat, Konstantin Mishchenko, Peter Richtárik: Dualize, Split, Randomize: Toward Fast Nonsmooth Optimization Algorithms. J. Optim. Theory Appl. 195(1): 102-130 (2022)
- [j39] Albert S. Berahas, Majid Jahani, Peter Richtárik, Martin Takác: Quasi-Newton methods for machine learning: forget the past, just sample. Optim. Methods Softw. 37(5): 1668-1704 (2022)
- [j38] Samuel Horváth, Lihua Lei, Peter Richtárik, Michael I. Jordan: Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization. SIAM J. Math. Data Sci. 4(2): 634-648 (2022)
- [c73] Xun Qian, Rustem Islamov, Mher Safaryan, Peter Richtárik: Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning. AISTATS 2022: 680-720
- [c72] Adil Salim, Laurent Condat, Dmitry Kovalev, Peter Richtárik: An Optimal Algorithm for Strongly Convex Minimization under Affine Constraints. AISTATS 2022: 4482-4498
- [c71] Elnur Gasanov, Ahmed Khaled, Samuel Horváth, Peter Richtárik: FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning. AISTATS 2022: 11374-11421
- [c70] Majid Jahani, Sergey Rusakov, Zheng Shi, Peter Richtárik, Michael W. Mahoney, Martin Takác: Doubly Adaptive Scaled Algorithm for Machine Learning Using Second-Order Information. ICLR 2022
- [c69] Konstantin Mishchenko, Bokun Wang, Dmitry Kovalev, Peter Richtárik: IntSGD: Adaptive Floatless Compression of Stochastic Gradients. ICLR 2022
- [c68] Rafal Szlendak, Alexander Tyurin, Peter Richtárik: Permutation Compressors for Provably Faster Distributed Nonconvex Optimization. ICLR 2022
- [c67] Konstantin Mishchenko, Ahmed Khaled, Peter Richtárik: Proximal and Federated Random Reshuffling. ICML 2022: 15718-15749
- [c66] Konstantin Mishchenko, Grigory Malinovsky, Sebastian Stich, Peter Richtárik: ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally! ICML 2022: 15750-15769
- [c65] Peter Richtárik, Igor Sokolov, Elnur Gasanov, Ilyas Fatkhullin, Zhize Li, Eduard Gorbunov: 3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation. ICML 2022: 18596-18648
- [c64] Mher Safaryan, Rustem Islamov, Xun Qian, Peter Richtárik: FedNL: Making Newton-Type Methods Applicable to Federated Learning. ICML 2022: 18959-19010
- [c63] Adil Salim, Lukang Sun, Peter Richtárik: A Convergence Theory for SVGD in the Population Limit under Talagrand's Inequality T1. ICML 2022: 19139-19152
- [c62] Egor Shulgin, Peter Richtárik: Shifted compression framework: generalizations and improvements. UAI 2022: 1813-1823
- [i157] Grigory Malinovsky, Konstantin Mishchenko, Peter Richtárik: Server-Side Stepsizes and Sampling Without Replacement Provably Help in Federated Optimization. CoRR abs/2201.11066 (2022)
- [i156] Haoyu Zhao, Boyue Li, Zhize Li, Peter Richtárik, Yuejie Chi: BEER: Fast O(1/T) Rate for Decentralized Nonconvex Optimization with Communication Compression. CoRR abs/2201.13320 (2022)
- [i155] Peter Richtárik, Igor Sokolov, Ilyas Fatkhullin, Elnur Gasanov, Zhize Li, Eduard Gorbunov: 3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation. CoRR abs/2202.00998 (2022)
- [i154] Alexander Tyurin, Peter Richtárik: DASHA: Distributed Nonconvex Optimization with Communication Compression, Optimal Oracle Complexity, and No Client Synchronization. CoRR abs/2202.01268 (2022)
- [i153] Dmitry Kovalev, Aleksandr Beznosikov, Abdurakhmon Sadiev, Michael Persiianov, Peter Richtárik, Alexander V. Gasnikov: Optimal Algorithms for Decentralized Stochastic Variational Inequalities. CoRR abs/2202.02771 (2022)
- [i152] Konstantin Burlachenko, Samuel Horváth, Peter Richtárik: FL_PyTorch: optimization research simulator for federated learning. CoRR abs/2202.03099 (2022)
- [i151] Konstantin Mishchenko, Grigory Malinovsky, Sebastian Stich, Peter Richtárik: ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally! CoRR abs/2202.09357 (2022)
- [i150] Samuel Horváth, Maziar Sanjabi, Lin Xiao, Peter Richtárik, Michael Rabbat: FedShuffle: Recipes for Better Use of Local Work in Federated Learning. CoRR abs/2204.13169 (2022)
- [i149] Grigory Malinovsky, Peter Richtárik: Federated Random Reshuffling with Compression and Variance Reduction. CoRR abs/2205.03914 (2022)
- [i148] Laurent Condat, Kai Yi, Peter Richtárik: EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization. CoRR abs/2205.04180 (2022)
- [i147] Alexander Tyurin, Peter Richtárik: A Computation and Communication Efficient Method for Distributed Nonconvex Problems in the Partial Participation Setting. CoRR abs/2205.15580 (2022)
- [i146] Lukang Sun, Avetik G. Karagulyan, Peter Richtárik: Convergence of Stein Variational Gradient Descent under a Weaker Smoothness Condition. CoRR abs/2206.00508 (2022)
- [i145] Eduard Gorbunov, Samuel Horváth, Peter Richtárik, Gauthier Gidel: Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top. CoRR abs/2206.00529 (2022)
- [i144] Lukang Sun, Adil Salim, Peter Richtárik: Federated Learning with a Sampling Algorithm under Isoperimetry. CoRR abs/2206.00920 (2022)
- [i143] Alexander Tyurin, Lukang Sun, Konstantin Burlachenko, Peter Richtárik: Sharper Rates and Flexible Framework for Nonconvex SGD with Client and Data Sampling. CoRR abs/2206.02275 (2022)
- [i142] Motasem Alfarra, Juan C. Pérez, Egor Shulgin, Peter Richtárik, Bernard Ghanem: Certified Robustness in Federated Learning. CoRR abs/2206.02535 (2022)
- [i141] Rustem Islamov, Xun Qian, Slavomír Hanzely, Mher Safaryan, Peter Richtárik: Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation. CoRR abs/2206.03588 (2022)
- [i140] Abdurakhmon Sadiev, Grigory Malinovsky, Eduard Gorbunov, Igor Sokolov, Ahmed Khaled, Konstantin Burlachenko, Peter Richtárik: Federated Optimization Algorithms with Random Reshuffling and Gradient Compression. CoRR abs/2206.07021 (2022)
- [i139] Lukang Sun, Peter Richtárik: A Note on the Convergence of Mirrored Stein Variational Gradient Descent under (L0, L1)-Smoothness Condition. CoRR abs/2206.09709 (2022)
- [i138] Egor Shulgin, Peter Richtárik: Shifted Compression Framework: Generalizations and Improvements. CoRR abs/2206.10452 (2022)
- [i137] Abdurakhmon Sadiev, Dmitry Kovalev, Peter Richtárik: Communication Acceleration of Local Gradient Methods via an Accelerated Primal-Dual Algorithm with Inexact Prox. CoRR abs/2207.03957 (2022)
- [i136] Grigory Malinovsky, Kai Yi, Peter Richtárik: Variance Reduced ProxSkip: Algorithm, Theory and Application to Federated Learning. CoRR abs/2207.04338 (2022)
- [i135] Samuel Horváth, Konstantin Mishchenko, Peter Richtárik: Adaptive Learning Rates for Faster Stochastic Gradient Methods. CoRR abs/2208.05287 (2022)
- [i134] El Houcine Bergou, Konstantin Burlachenko, Aritra Dutta, Peter Richtárik: Personalized Federated Learning with Communication Compression. CoRR abs/2209.05148 (2022)
- [i133] Soumia Boucherouite, Grigory Malinovsky, Peter Richtárik, El Houcine Bergou: Minibatch Stochastic Three Points Method for Unconstrained Smooth Minimization. CoRR abs/2209.07883 (2022)
- [i132] Kaja Gruntkowska, Alexander Tyurin, Peter Richtárik: EF21-P and Friends: Improved Theoretical Communication Complexity for Distributed Optimization with Bidirectional Compression. CoRR abs/2209.15218 (2022)
- [i131] Lukang Sun, Peter Richtárik: Improved Stein Variational Gradient Descent with Importance Weights. CoRR abs/2210.00462 (2022)
- [i130] Laurent Condat, Ivan Agarský, Peter Richtárik: Provably Doubly Accelerated Federated Learning: The First Theoretically Successful Combination of Local Training and Compressed Communication. CoRR abs/2210.13277 (2022)
- [i129] Artavazd Maranjyan, Mher Safaryan, Peter Richtárik: GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity. CoRR abs/2210.16402 (2022)
- [i128] Maksim Makarenko, Elnur Gasanov, Rustem Islamov, Abdurakhmon Sadiev, Peter Richtárik: Adaptive Compression for Communication-Efficient Distributed Training. CoRR abs/2211.00188 (2022)
- [i127] Michal Grudzien, Grigory Malinovsky, Peter Richtárik: Can 5th Generation Local Training Methods Support Client Sampling? Yes! CoRR abs/2212.14370 (2022)
- 2021
- [j37] Filip Hanzely, Peter Richtárik, Lin Xiao: Accelerated Bregman proximal gradient methods for relatively smooth convex optimization. Comput. Optim. Appl. 79(2): 405-440 (2021)
- [j36] Filip Hanzely, Peter Richtárik: Fastest rates for stochastic mirror descent methods. Comput. Optim. Appl. 79(3): 717-766 (2021)
- [j35] Xun Qian, Zheng Qu, Peter Richtárik: L-SVRG and L-Katyusha with Arbitrary Sampling. J. Mach. Learn. Res. 22: 112:1-112:47 (2021)
- [j34] Robert M. Gower, Peter Richtárik, Francis R. Bach: Stochastic quasi-gradient methods: variance reduction via Jacobian sketching. Math. Program. 188(1): 135-192 (2021)
- [j33] Nicolas Loizou, Peter Richtárik: Revisiting Randomized Gossip Algorithms: General Framework, Convergence Rates and Novel Block and Accelerated Protocols. IEEE Trans. Inf. Theory 67(12): 8300-8324 (2021)
- [c61] Samuel Horváth, Aaron Klein, Peter Richtárik, Cédric Archambeau: Hyperparameter Transfer Learning with Adaptive Complexity. AISTATS 2021: 1378-1386
- [c60] Eduard Gorbunov, Filip Hanzely, Peter Richtárik: Local SGD: Unified Theory and New Efficient Methods. AISTATS 2021: 3556-3564
- [c59] Dmitry Kovalev, Anastasia Koloskova, Martin Jaggi, Peter Richtárik, Sebastian U. Stich: A Linearly Convergent Algorithm for Decentralized Optimization: Sending Less Bits for Free! AISTATS 2021: 4087-4095
- [c58] Konstantin Burlachenko, Samuel Horváth, Peter Richtárik: FL_PyTorch: optimization research simulator for federated learning. DistributedML@CoNEXT 2021: 1-7
- [c57] Samuel Horváth, Peter Richtárik: A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning. ICLR 2021
- [c56] Eduard Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik: MARINA: Faster Non-Convex Distributed Learning with Compression. ICML 2021: 3788-3798
- [c55] Rustem Islamov, Xun Qian, Peter Richtárik: Distributed Second Order Methods with Fast Rates and Compressed Communication. ICML 2021: 4617-4628
- [c54] Dmitry Kovalev, Egor Shulgin, Peter Richtárik, Alexander Rogozin, Alexander V. Gasnikov: ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks. ICML 2021: 5784-5793
- [c53] Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik: PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization. ICML 2021: 6286-6295
- [c52] Mher Safaryan, Peter Richtárik: Stochastic Sign Descent Methods: New Algorithms and Better Theory. ICML 2021: 9224-9234
- [c51] Peter Richtárik, Igor Sokolov, Ilyas Fatkhullin: EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback. NeurIPS 2021: 4384-4396
- [c50] Zhize Li, Peter Richtárik: CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression. NeurIPS 2021: 13770-13781
- [c49] Dmitry Kovalev, Elnur Gasanov, Alexander V. Gasnikov, Peter Richtárik: Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks. NeurIPS 2021: 22325-22335
- [c48] Mher Safaryan, Filip Hanzely, Peter Richtárik: Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization. NeurIPS 2021: 25688-25702
- [c47] Xun Qian, Peter Richtárik, Tong Zhang: Error Compensated Distributed SGD Can Be Accelerated. NeurIPS 2021: 30401-30413
- [c46] Amedeo Sapio, Marco Canini, Chen-Yu Ho, Jacob Nelson, Panos Kalnis, Changhoon Kim, Arvind Krishnamurthy, Masoud Moshref, Dan R. K. Ports, Peter Richtárik: Scaling Distributed Machine Learning with In-Network Aggregation. NSDI 2021: 785-808
- [i126] Konstantin Mishchenko, Ahmed Khaled, Peter Richtárik: Proximal and Federated Random Reshuffling. CoRR abs/2102.06704 (2021)
- [i125] Rustem Islamov, Xun Qian, Peter Richtárik: Distributed Second Order Methods with Fast Rates and Compressed Communication. CoRR abs/2102.07158 (2021)
- [i124] Mher Safaryan, Filip Hanzely, Peter Richtárik: Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization. CoRR abs/2102.07245 (2021)
- [i123] Eduard Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik: MARINA: Faster Non-Convex Distributed Learning with Compression. CoRR abs/2102.07845 (2021)
- [i122] Konstantin Mishchenko, Bokun Wang, Dmitry Kovalev, Peter Richtárik: IntSGD: Floatless Compression of Stochastic Gradients. CoRR abs/2102.08374 (2021)
- [i121] Dmitry Kovalev, Egor Shulgin, Peter Richtárik, Alexander Rogozin, Alexander V. Gasnikov: ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks. CoRR abs/2102.09234 (2021)
- [i120] Zheng Shi, Nicolas Loizou, Peter Richtárik, Martin Takác: AI-SARAH: Adaptive and Implicit Stochastic Recursive Gradient Methods. CoRR abs/2102.09700 (2021)
- [i119] Samuel Horváth, Aaron Klein, Peter Richtárik, Cédric Archambeau: Hyperparameter Transfer Learning with Adaptive Complexity. CoRR abs/2102.12810 (2021)
- [i118] Zhize Li, Peter Richtárik: ZeroSARAH: Efficient Nonconvex Finite-Sum Optimization with Zero Full Gradient Computation. CoRR abs/2103.01447 (2021)
- [i117] Grigory Malinovsky, Alibek Sailanbayev, Peter Richtárik: Random Reshuffling with Variance Reduction: New Analysis and Better Rates. CoRR abs/2104.09342 (2021)
- [i116] Mher Safaryan, Rustem Islamov, Xun Qian, Peter Richtárik: FedNL: Making Newton-Type Methods Applicable to Federated Learning. CoRR abs/2106.02969 (2021)
- [i115] Laurent Condat, Peter Richtárik: MURANA: A Generic Framework for Stochastic Variance-Reduced Optimization. CoRR abs/2106.03056 (2021)
- [i114] Adil Salim, Lukang Sun, Peter Richtárik: Complexity Analysis of Stein Variational Gradient Descent Under Talagrand's Inequality T1. CoRR abs/2106.03076 (2021)
- [i113] Bokun Wang, Mher Safaryan, Peter Richtárik: Smoothness-Aware Quantization Techniques. CoRR abs/2106.03524 (2021)
- [i112] Dmitry Kovalev, Elnur Gasanov, Peter Richtárik, Alexander V. Gasnikov: Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks. CoRR abs/2106.04469 (2021)
- [i111] Peter Richtárik, Igor Sokolov, Ilyas Fatkhullin: EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback. CoRR abs/2106.05203 (2021)
- [i110] Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H. Brendan McMahan, Blaise Agüera y Arcas, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, Suhas N. Diggavi, Hubert Eichner, Advait Gadhikar, Zachary Garrett, Antonious M. Girgis, Filip Hanzely, Andrew Hard, Chaoyang He, Samuel Horváth, Zhouyuan Huo, Alex Ingerman, Martin Jaggi, Tara Javidi, Peter Kairouz, Satyen Kale, Sai Praneeth Karimireddy, Jakub Konecný, Sanmi Koyejo, Tian Li, Luyang Liu, Mehryar Mohri, Hang Qi, Sashank J. Reddi, Peter Richtárik, Karan Singhal, Virginia Smith, Mahdi Soltanolkotabi, Weikang Song, Ananda Theertha Suresh, Sebastian U. Stich, Ameet Talwalkar, Hongyi Wang, Blake E. Woodworth, Shanshan Wu, Felix X. Yu, Honglin Yuan, Manzil Zaheer, Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu: A Field Guide to Federated Optimization. CoRR abs/2107.06917 (2021)
- [i109] Zhize Li, Peter Richtárik: CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression. CoRR abs/2107.09461 (2021)
- [i108] Haoyu Zhao, Zhize Li, Peter Richtárik: FedPAGE: A Fast Local Stochastic Gradient Method for Communication-Efficient Federated Learning. CoRR abs/2108.04755 (2021)
- [i107] Majid Jahani, Sergey Rusakov, Zheng Shi, Peter Richtárik, Michael W. Mahoney, Martin Takác: Doubly Adaptive Scaled Algorithm for Machine Learning Using Second-Order Information. CoRR abs/2109.05198 (2021)
- [i106] Ilyas Fatkhullin, Igor Sokolov, Eduard Gorbunov, Zhize Li, Peter Richtárik: EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback. CoRR abs/2110.03294 (2021)
- [i105] Rafal Szlendak, Alexander Tyurin, Peter Richtárik: Permutation Compressors for Provably Faster Distributed Nonconvex Optimization. CoRR abs/2110.03300 (2021)
- [i104] Aleksandr Beznosikov, Peter Richtárik, Michael Diskin, Max Ryabinin, Alexander V. Gasnikov: Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees. CoRR abs/2110.03313 (2021)
- [i103] Xun Qian, Rustem Islamov, Mher Safaryan, Peter Richtárik: Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning. CoRR abs/2111.01847 (2021)
- [i102] Elnur Gasanov, Ahmed Khaled, Samuel Horváth, Peter Richtárik: FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning. CoRR abs/2111.11556 (2021)
- [i101] Haoyu Zhao, Konstantin Burlachenko, Zhize Li, Peter Richtárik: Faster Rates for Compressed Federated Learning with Client-Variance Reduction. CoRR abs/2112.13097 (2021)
- [i100] Dmitry Kovalev, Alexander V. Gasnikov, Peter Richtárik: Accelerated Primal-Dual Gradient Method for Smooth and Convex-Concave Saddle-Point Problems with Bilinear Coupling. CoRR abs/2112.15199 (2021)
- 2020
- [j32] Nicolas Loizou, Peter Richtárik: Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods. Comput. Optim. Appl. 77(3): 653-710 (2020)
- [j31] Robert M. Gower, Mark Schmidt, Francis R. Bach, Peter Richtárik: Variance-Reduced Methods for Machine Learning. Proc. IEEE 108(11): 1968-1983 (2020)
- [j30] El Houcine Bergou, Eduard Gorbunov, Peter Richtárik: Stochastic Three Points Method for Unconstrained Smooth Minimization. SIAM J. Optim. 30(4): 2726-2749 (2020)
- [j29] Peter Richtárik, Martin Takác: Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory. SIAM J. Matrix Anal. Appl. 41(2): 487-524 (2020)
- [j28] Nicolas Loizou, Peter Richtárik: Convergence Analysis of Inexact Randomized Iterative Methods. SIAM J. Sci. Comput. 42(6): A3979-A4016 (2020)
- [j27] Aritra Dutta, Filip Hanzely, Jingwei Liang, Peter Richtárik: Best Pair Formulation & Accelerated Scheme for Non-Convex Principal Component Pursuit. IEEE Trans. Signal Process. 68: 6128-6141 (2020)
- [c45] Adel Bibi, El Houcine Bergou, Ozan Sener, Bernard Ghanem, Peter Richtárik: A Stochastic Derivative-Free Optimization Method with Importance Sampling: Theory and Learning to Control. AAAI 2020: 3275-3282
- [c44] Eduard Gorbunov, Filip Hanzely, Peter Richtárik: A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent. AISTATS 2020: 680-690
- [c43] Ahmed Khaled, Konstantin Mishchenko, Peter Richtárik: Tighter Theory for Local SGD on Identical and Heterogeneous Data. AISTATS 2020: 4519-4529
- [c42] Konstantin Mishchenko, Dmitry Kovalev, Egor Shulgin, Peter Richtárik, Yura Malitsky: Revisiting Stochastic Extragradient. AISTATS 2020: 4573-4582
- [c41] Dmitry Kovalev, Samuel Horváth, Peter Richtárik: Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop. ALT 2020: 451-467
- [c40] Eduard Gorbunov, Adel Bibi, Ozan Sener, El Houcine Bergou, Peter Richtárik: A Stochastic Derivative Free Optimization Method with Momentum. ICLR 2020
- [c39] Filip Hanzely, Nikita Doikov, Yurii E. Nesterov, Peter Richtárik: Stochastic Subspace Cubic Newton Method. ICML 2020: 4027-4038
- [c38] Filip Hanzely, Dmitry Kovalev, Peter Richtárik: Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems. ICML 2020: 4039-4048
- [c37] Zhize Li, Dmitry Kovalev, Xun Qian, Peter Richtárik: Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization. ICML 2020: 5895-5904
- [c36] Grigory Malinovskiy, Dmitry Kovalev, Elnur Gasanov, Laurent Condat, Peter Richtárik: From Local SGD to Local Fixed-Point Methods for Federated Learning. ICML 2020: 6692-6701
- [c35] Eduard Gorbunov, Dmitry Kovalev, Dmitry Makarenko, Peter Richtárik: Linearly Converging Error Compensated SGD. NeurIPS 2020
- [c34] Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik: Lower Bounds and Optimal Algorithms for Personalized Federated Learning. NeurIPS 2020
- [c33] Dmitry Kovalev, Adil Salim, Peter Richtárik: Optimal and Practical Algorithms for Smooth and Strongly Convex Decentralized Optimization. NeurIPS 2020
- [c32] Konstantin Mishchenko, Ahmed Khaled, Peter Richtárik: Random Reshuffling: Simple Analysis with Vast Improvements. NeurIPS 2020
- [c31] Adil Salim, Peter Richtárik: Primal Dual Interpretation of the Proximal Stochastic Gradient Langevin Algorithm. NeurIPS 2020
- [c30] Konstantin Mishchenko, Filip Hanzely, Peter Richtárik: 99% of Worker-Master Communication in Distributed Optimization Is Not Needed. UAI 2020: 979-988
- [i99]