
Peter Richtárik
2020 – today
- 2021
- [i106] Konstantin Mishchenko, Ahmed Khaled, Peter Richtárik: Proximal and Federated Random Reshuffling. CoRR abs/2102.06704 (2021)
- [i105] Rustem Islamov, Xun Qian, Peter Richtárik: Distributed Second Order Methods with Fast Rates and Compressed Communication. CoRR abs/2102.07158 (2021)
- [i104] Mher Safaryan, Filip Hanzely, Peter Richtárik: Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization. CoRR abs/2102.07245 (2021)
- [i103] Eduard A. Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik: MARINA: Faster Non-Convex Distributed Learning with Compression. CoRR abs/2102.07845 (2021)
- [i102] Konstantin Mishchenko, Bokun Wang, Dmitry Kovalev, Peter Richtárik: IntSGD: Floatless Compression of Stochastic Gradients. CoRR abs/2102.08374 (2021)
- [i101] Dmitry Kovalev, Egor Shulgin, Peter Richtárik, Alexander Rogozin, Alexander Gasnikov: ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks. CoRR abs/2102.09234 (2021)
- [i100] Zheng Shi, Nicolas Loizou, Peter Richtárik, Martin Takác: AI-SARAH: Adaptive and Implicit Stochastic Recursive Gradient Methods. CoRR abs/2102.09700 (2021)
- 2020
- [j30] Nicolas Loizou, Peter Richtárik: Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods. Comput. Optim. Appl. 77(3): 653-710 (2020)
- [j29] Robert M. Gower, Mark Schmidt, Francis R. Bach, Peter Richtárik: Variance-Reduced Methods for Machine Learning. Proc. IEEE 108(11): 1968-1983 (2020)
- [j28] El Houcine Bergou, Eduard A. Gorbunov, Peter Richtárik: Stochastic Three Points Method for Unconstrained Smooth Minimization. SIAM J. Optim. 30(4): 2726-2749 (2020)
- [j27] Peter Richtárik, Martin Takác: Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory. SIAM J. Matrix Anal. Appl. 41(2): 487-524 (2020)
- [j26] Nicolas Loizou, Peter Richtárik: Convergence Analysis of Inexact Randomized Iterative Methods. SIAM J. Sci. Comput. 42(6): A3979-A4016 (2020)
- [j25] Aritra Dutta, Filip Hanzely, Jingwei Liang, Peter Richtárik: Best Pair Formulation & Accelerated Scheme for Non-Convex Principal Component Pursuit. IEEE Trans. Signal Process. 68: 6128-6141 (2020)
- [c45] Adel Bibi, El Houcine Bergou, Ozan Sener, Bernard Ghanem, Peter Richtárik: A Stochastic Derivative-Free Optimization Method with Importance Sampling: Theory and Learning to Control. AAAI 2020: 3275-3282
- [c44] Eduard A. Gorbunov, Filip Hanzely, Peter Richtárik: A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent. AISTATS 2020: 680-690
- [c43] Ahmed Khaled, Konstantin Mishchenko, Peter Richtárik: Tighter Theory for Local SGD on Identical and Heterogeneous Data. AISTATS 2020: 4519-4529
- [c42] Konstantin Mishchenko, Dmitry Kovalev, Egor Shulgin, Peter Richtárik, Yura Malitsky: Revisiting Stochastic Extragradient. AISTATS 2020: 4573-4582
- [c41] Dmitry Kovalev, Samuel Horváth, Peter Richtárik: Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop. ALT 2020: 451-467
- [c40] Eduard A. Gorbunov, Adel Bibi, Ozan Sener, El Houcine Bergou, Peter Richtárik: A Stochastic Derivative Free Optimization Method with Momentum. ICLR 2020
- [c39] Filip Hanzely, Nikita Doikov, Yurii E. Nesterov, Peter Richtárik: Stochastic Subspace Cubic Newton Method. ICML 2020: 4027-4038
- [c38] Filip Hanzely, Dmitry Kovalev, Peter Richtárik: Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems. ICML 2020: 4039-4048
- [c37] Zhize Li, Dmitry Kovalev, Xun Qian, Peter Richtárik: Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization. ICML 2020: 5895-5904
- [c36] Grigory Malinovskiy, Dmitry Kovalev, Elnur Gasanov, Laurent Condat, Peter Richtárik: From Local SGD to Local Fixed-Point Methods for Federated Learning. ICML 2020: 6692-6701
- [c35] Eduard A. Gorbunov, Dmitry Kovalev, Dmitry Makarenko, Peter Richtárik: Linearly Converging Error Compensated SGD. NeurIPS 2020
- [c34] Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik: Lower Bounds and Optimal Algorithms for Personalized Federated Learning. NeurIPS 2020
- [c33] Dmitry Kovalev, Adil Salim, Peter Richtárik: Optimal and Practical Algorithms for Smooth and Strongly Convex Decentralized Optimization. NeurIPS 2020
- [c32] Konstantin Mishchenko, Ahmed Khaled, Peter Richtárik: Random Reshuffling: Simple Analysis with Vast Improvements. NeurIPS 2020
- [c31] Adil Salim, Peter Richtárik: Primal Dual Interpretation of the Proximal Stochastic Gradient Langevin Algorithm. NeurIPS 2020
- [c30] Konstantin Mishchenko, Filip Hanzely, Peter Richtárik: 99% of Worker-Master Communication in Distributed Optimization Is Not Needed. UAI 2020: 979-988
- [i99] Ahmed Khaled, Peter Richtárik: Better Theory for SGD in the Nonconvex World. CoRR abs/2002.03329 (2020)
- [i98] Filip Hanzely, Dmitry Kovalev, Peter Richtárik: Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems. CoRR abs/2002.04670 (2020)
- [i97] Samuel Horváth, Lihua Lei, Peter Richtárik, Michael I. Jordan: Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization. CoRR abs/2002.05359 (2020)
- [i96] Filip Hanzely, Peter Richtárik: Federated Learning of a Mixture of Global and Local Models. CoRR abs/2002.05516 (2020)
- [i95] Mher Safaryan, Egor Shulgin, Peter Richtárik: Uncertainty Principle for Communication Compression in Distributed and Federated Learning and the Search for an Optimal Compressor. CoRR abs/2002.08958 (2020)
- [i94] Filip Hanzely, Nikita Doikov, Peter Richtárik, Yurii E. Nesterov: Stochastic Subspace Cubic Newton Method. CoRR abs/2002.09526 (2020)
- [i93] Dmitry Kovalev, Robert M. Gower, Peter Richtárik, Alexander Rogozin: Fast Linear Convergence of Randomized BFGS. CoRR abs/2002.11337 (2020)
- [i92] Zhize Li, Dmitry Kovalev, Xun Qian, Peter Richtárik: Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization. CoRR abs/2002.11364 (2020)
- [i91] Aleksandr Beznosikov, Samuel Horváth, Peter Richtárik, Mher Safaryan: On Biased Compression for Distributed Learning. CoRR abs/2002.12410 (2020)
- [i90] Grigory Malinovsky, Dmitry Kovalev, Elnur Gasanov, Laurent Condat, Peter Richtárik: From Local SGD to Local Fixed Point Methods for Federated Learning. CoRR abs/2004.01442 (2020)
- [i89] Atal Narayan Sahu, Aritra Dutta, Aashutosh Tiwari, Peter Richtárik: On the Convergence Analysis of Asynchronous SGD for Solving Consistent Linear Systems. CoRR abs/2004.02163 (2020)
- [i88] Adil Salim, Laurent Condat, Konstantin Mishchenko, Peter Richtárik: Dualize, Split, Randomize: Fast Nonsmooth Optimization Algorithms. CoRR abs/2004.02635 (2020)
- [i87] Motasem Alfarra, Slavomír Hanzely, Alyazeed Albasyoni, Bernard Ghanem, Peter Richtárik: Adaptive Learning of the Optimal Mini-Batch Size of SGD. CoRR abs/2005.01097 (2020)
- [i86] Konstantin Mishchenko, Ahmed Khaled, Peter Richtárik: Random Reshuffling: Simple Analysis with Vast Improvements. CoRR abs/2006.05988 (2020)
- [i85] Zhize Li, Peter Richtárik: A Unified Analysis of Stochastic Gradient Methods for Nonconvex Federated Optimization. CoRR abs/2006.07013 (2020)
- [i84] Adil Salim, Peter Richtárik: Primal Dual Interpretation of the Proximal Stochastic Gradient Langevin Algorithm. CoRR abs/2006.09270 (2020)
- [i83] Samuel Horváth, Peter Richtárik: A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning. CoRR abs/2006.11077 (2020)
- [i82] Ahmed Khaled, Othmane Sebbouh, Nicolas Loizou, Robert M. Gower, Peter Richtárik: Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization. CoRR abs/2006.11573 (2020)
- [i81] Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik: PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization. CoRR abs/2008.10898 (2020)
- [i80] Robert M. Gower, Mark Schmidt, Francis R. Bach, Peter Richtárik: Variance-Reduced Methods for Machine Learning. CoRR abs/2010.00892 (2020)
- [i79] Laurent Condat, Grigory Malinovsky, Peter Richtárik: Distributed Proximal Splitting Algorithms with Rates and Acceleration. CoRR abs/2010.00952 (2020)
- [i78] Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik: Lower Bounds and Optimal Algorithms for Personalized Federated Learning. CoRR abs/2010.02372 (2020)
- [i77] Alyazeed Albasyoni, Mher Safaryan, Laurent Condat, Peter Richtárik: Optimal Gradient Compression for Distributed and Federated Learning. CoRR abs/2010.03246 (2020)
- [i76] Eduard A. Gorbunov, Dmitry Kovalev, Dmitry Makarenko, Peter Richtárik: Linearly Converging Error Compensated SGD. CoRR abs/2010.12292 (2020)
- [i75] Wenlin Chen, Samuel Horvath, Peter Richtárik: Optimal Client Sampling for Federated Learning. CoRR abs/2010.13723 (2020)
- [i74] Dmitry Kovalev, Anastasia Koloskova, Martin Jaggi, Peter Richtárik, Sebastian U. Stich: A Linearly Convergent Algorithm for Decentralized Optimization: Sending Less Bits for Free! CoRR abs/2011.01697 (2020)
- [i73] Eduard A. Gorbunov, Filip Hanzely, Peter Richtárik: Local SGD: Unified Theory and New Efficient Methods. CoRR abs/2011.02828 (2020)
2010 – 2019
- 2019
- [j24] Lam M. Nguyen, Phuong Ha Nguyen, Peter Richtárik, Katya Scheinberg, Martin Takác, Marten van Dijk: New Convergence Aspects of Stochastic Gradient Algorithms. J. Mach. Learn. Res. 20: 176:1-176:49 (2019)
- [j23] Ion Necoara, Peter Richtárik, Andrei Patrascu: Randomized Projection Methods for Convex Feasibility: Conditioning and Convergence Rates. SIAM J. Optim. 29(4): 2814-2852 (2019)
- [c29] Aritra Dutta, Filip Hanzely, Peter Richtárik: A Nonconvex Projection Method for Robust PCA. AAAI 2019: 1468-1476
- [c28] Filip Hanzely, Peter Richtárik: Accelerated Coordinate Descent with Arbitrary Sampling and Best Rates for Minibatches. AISTATS 2019: 304-312
- [c27] Nicolas Loizou, Michael G. Rabbat, Peter Richtárik: Provably Accelerated Randomized Gossip Algorithms. ICASSP 2019: 7505-7509
- [c26] Samuel Horváth, Peter Richtárik: Nonconvex Variance Reduced Optimization with Arbitrary Sampling. ICML 2019: 2781-2789
- [c25] Xun Qian, Zheng Qu, Peter Richtárik: SAGA with Arbitrary Sampling. ICML 2019: 5190-5199
- [c24] Xun Qian, Peter Richtárik, Robert M. Gower, Alibek Sailanbayev, Nicolas Loizou, Egor Shulgin: SGD with Arbitrary Sampling: General Analysis and Improved Rates. ICML 2019: 5200-5209
- [c23] Robert M. Gower, Dmitry Kovalev, Felix Lieder, Peter Richtárik: RSN: Randomized Subspace Newton. NeurIPS 2019: 614-623
- [c22] Adil Salim, Dmitry Kovalev, Peter Richtárik: Stochastic Proximal Langevin Algorithm: Potential Splitting and Nonasymptotic Rates. NeurIPS 2019: 6649-6661
- [c21] Jinhui Xiong, Peter Richtárik, Wolfgang Heidrich: Stochastic Convolutional Sparse Coding. VMV 2019: 47-54
- [c20] Aritra Dutta, Peter Richtárik: Online and Batch Supervised Background Estimation Via L1 Regression. WACV 2019: 541-550
- [i72] Xun Qian, Zheng Qu, Peter Richtárik: SAGA with Arbitrary Sampling. CoRR abs/1901.08669 (2019)
- [i71] Dmitry Kovalev, Samuel Horvath, Peter Richtárik: Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop. CoRR abs/1901.08689 (2019)
- [i70] Konstantin Mishchenko, Eduard A. Gorbunov, Martin Takác, Peter Richtárik: Distributed Learning with Compressed Gradient Differences. CoRR abs/1901.09269 (2019)
- [i69] Filip Hanzely, Jakub Konecný, Nicolas Loizou, Peter Richtárik, Dmitry Grishchenko: A Privacy Preserving Randomized Gossip Algorithm via Controlled Noise Insertion. CoRR abs/1901.09367 (2019)
- [i68] Robert Mansel Gower, Nicolas Loizou, Xun Qian, Alibek Sailanbayev, Egor Shulgin, Peter Richtárik: SGD: General Analysis and Improved Rates. CoRR abs/1901.09401 (2019)
- [i67] Konstantin Mishchenko, Filip Hanzely, Peter Richtárik: 99% of Parallel Optimization is Inevitably a Waste of Time. CoRR abs/1901.09437 (2019)
- [i66] Amedeo Sapio, Marco Canini, Chen-Yu Ho, Jacob Nelson, Panos Kalnis, Changhoon Kim, Arvind Krishnamurthy, Masoud Moshref, Dan R. K. Ports, Peter Richtárik: Scaling Distributed Machine Learning with In-Network Aggregation. CoRR abs/1903.06701 (2019)
- [i65] Nicolas Loizou, Peter Richtárik: Convergence Analysis of Inexact Randomized Iterative Methods. CoRR abs/1903.07971 (2019)
- [i64] Nicolas Loizou, Peter Richtárik: Revisiting Randomized Gossip Algorithms: General Framework, Convergence Rates and Novel Block and Accelerated Protocols. CoRR abs/1905.08645 (2019)
- [i63] Aritra Dutta, Filip Hanzely, Jingwei Liang, Peter Richtárik: Best Pair Formulation & Accelerated Scheme for Non-convex Principal Component Pursuit. CoRR abs/1905.10598 (2019)
- [i62] Samuel Horvath, Chen-Yu Ho, Ludovit Horvath, Atal Narayan Sahu, Marco Canini, Peter Richtárik: Natural Compression for Distributed Deep Learning. CoRR abs/1905.10988 (2019)
- [i61] Eduard A. Gorbunov, Filip Hanzely, Peter Richtárik: A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent. CoRR abs/1905.11261 (2019)
- [i60] Filip Hanzely, Peter Richtárik: One Method to Rule Them All: Variance Reduction for Data, Parameters and Many New Methods. CoRR abs/1905.11266 (2019)
- [i59] Konstantin Mishchenko, Dmitry Kovalev, Egor Shulgin, Peter Richtárik, Yura Malitsky: Revisiting Stochastic Extragradient. CoRR abs/1905.11373 (2019)
- [i58] Aritra Dutta, El Houcine Bergou, Yunming Xiao, Marco Canini, Peter Richtárik: Direct Nonlinear Acceleration. CoRR abs/1905.11692 (2019)
- [i57] Adil Salim, Dmitry Kovalev, Peter Richtárik: Stochastic Proximal Langevin Algorithm: Potential Splitting and Nonasymptotic Rates. CoRR abs/1905.11768 (2019)
- [i56] Jinhui Xiong, Peter Richtárik, Wolfgang Heidrich: Stochastic Convolutional Sparse Coding. CoRR abs/1909.00145 (2019)
- [i55] Ahmed Khaled, Konstantin Mishchenko, Peter Richtárik: First Analysis of Local GD on Heterogeneous Data. CoRR abs/1909.04715 (2019)
- [i54] Ahmed Khaled, Peter Richtárik: Gradient Descent with Compressed Iterates. CoRR abs/1909.04716 (2019)
- [i53] Ahmed Khaled, Konstantin Mishchenko, Peter Richtárik: Better Communication Complexity for Local SGD. CoRR abs/1909.04746 (2019)
- [i52] Dmitry Kovalev, Konstantin Mishchenko, Peter Richtárik: Stochastic Newton and Cubic Newton Methods with Simple Local Linear-Quadratic Rates. CoRR abs/1912.01597 (2019)
- [i51] Sélim Chraibi, Ahmed Khaled, Dmitry Kovalev, Peter Richtárik, Adil Salim, Martin Takác: Distributed Fixed Point Methods with Compressed Iterates. CoRR abs/1912.09925 (2019)
- 2018
- [j22] Dominik Csiba, Peter Richtárik: Importance Sampling for Minibatches. J. Mach. Learn. Res. 19: 27:1-27:21 (2018)
- [j21] Rachael Tappenden, Martin Takác, Peter Richtárik: On the complexity of parallel coordinate descent. Optim. Methods Softw. 33(2): 372-395 (2018)
- [j20] Antonin Chambolle, Matthias J. Ehrhardt, Peter Richtárik, Carola-Bibiane Schönlieb: Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications. SIAM J. Optim. 28(4): 2783-2808 (2018)
- [c19] Nicolas Loizou, Peter Richtárik: Accelerated Gossip via Stochastic Heavy Ball Method. Allerton 2018: 927-934
- [c18] Dominik Csiba, Peter Richtárik: Coordinate Descent Faceoff: Primal or Dual? ALT 2018: 246-267
- [c17] Nikita Doikov, Peter Richtárik: Randomized Block Cubic Newton Method. ICML 2018: 1289-1297
- [c16] Lam M. Nguyen, Phuong Ha Nguyen, Marten van Dijk, Peter Richtárik, Katya Scheinberg, Martin Takác: SGD and Hogwild! Convergence Without the Bounded Gradients Assumption. ICML 2018: 3747-3755
- [c15] Robert M. Gower, Filip Hanzely, Peter Richtárik, Sebastian U. Stich: Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization. NeurIPS 2018: 1626-1636
- [c14] Filip Hanzely, Konstantin Mishchenko, Peter Richtárik: SEGA: Variance Reduction via Gradient Sketching. NeurIPS 2018: 2086-2097
- [c13] Dmitry Kovalev, Peter Richtárik, Eduard A. Gorbunov, Elnur Gasanov: Stochastic Spectral and Conjugate Descent Methods. NeurIPS 2018: 3362-3371
- [c12] Jakub Marecek, Peter Richtárik, Martin Takác: Matrix Completion Under Interval Uncertainty: Highlights. ECML/PKDD (3) 2018: 621-625
- [i50] Lam M. Nguyen, Phuong Ha Nguyen, Marten van Dijk, Peter Richtárik, Katya Scheinberg, Martin Takác: SGD and Hogwild! Convergence Without the Bounded Gradients Assumption. CoRR abs/1802.03801 (2018)
- [i49] Robert M. Gower, Filip Hanzely, Peter Richtárik, Sebastian U. Stich: Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization. CoRR abs/1802.04079 (2018)
- [i48] Filip Hanzely, Peter Richtárik: Fastest Rates for Stochastic Mirror Descent Methods. CoRR abs/1803.07374 (2018)
- [i47] Aritra Dutta, Xin Li, Peter Richtárik: Weighted Low-Rank Approximation of Matrices and Background Modeling. CoRR abs/1804.06252 (2018)
- [i46] Aritra Dutta, Filip Hanzely, Peter Richtárik: A Nonconvex Projection Method for Robust PCA. CoRR abs/1805.07962 (2018)
- [i45] Filip Hanzely, Konstantin Mishchenko, Peter Richtárik: SEGA: Variance Reduction via Gradient Sketching. CoRR abs/1809.03054 (2018)
- [i44] Nicolas Loizou, Peter Richtárik: Accelerated Gossip via Stochastic Heavy Ball Method. CoRR abs/1809.08657 (2018)
- [i43] Nicolas Loizou, Michael G. Rabbat, Peter Richtárik: Provably Accelerated Randomized Gossip Algorithms. CoRR abs/1810.13084 (2018)
- [i42] Lam M. Nguyen, Phuong Ha Nguyen, Peter Richtárik, Katya Scheinberg, Martin Takác, Marten van Dijk: New Convergence Aspects of Stochastic Gradient Algorithms. CoRR abs/1811.12403 (2018)
- 2017
- [j19] Jakub Marecek, Peter Richtárik, Martin Takác: Matrix completion under interval uncertainty. Eur. J. Oper. Res. 256(1): 35-43 (2017)
- [j18] Chenxin Ma, Jakub Konecný, Martin Jaggi, Virginia Smith, Michael I. Jordan, Peter Richtárik, Martin Takác: Distributed optimization with arbitrary local solvers. Optim. Methods Softw. 32(4): 813-848 (2017)
- [j17] Jakub Konecný, Zheng Qu, Peter Richtárik: Semi-stochastic coordinate descent. Optim. Methods Softw. 32(5): 993-1005 (2017)
- [j16] Robert M. Gower, Peter Richtárik: Randomized Quasi-Newton Updates Are Linearly Convergent Matrix Inversion Algorithms. SIAM J. Matrix Anal. Appl. 38(4): 1380-1409 (2017)
- [c11] Xin Li, Aritra Dutta, Peter Richtárik: A Batch-Incremental Video Background Estimation Model Using Weighted Low-Rank Approximation of Matrices. ICCV Workshops 2017: 1835-1843
- [i41] Peter Richtárik, Martin Takác: Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory. CoRR abs/1706.01108 (2017)
- [i40] Antonin Chambolle, Matthias J. Ehrhardt, Peter Richtárik, Carola-Bibiane Schönlieb: Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications. CoRR abs/1706.04957 (2017)
- [i39] Aritra Dutta, Xin Li, Peter Richtárik: A Batch-Incremental Video Background Estimation Model using Weighted Low-Rank Approximation of Matrices. CoRR abs/1707.00281 (2017)
- [i38] Nicolas Loizou, Peter Richtárik: Linearly convergent stochastic heavy ball method for minimizing generalization error. CoRR abs/1710.10737 (2017)
- [i37] Aritra Dutta, Peter Richtárik: Online and Batch Supervised Background Estimation via L1 Regression. CoRR abs/1712.02249 (2017)
- [i36] Nicolas Loizou, Peter Richtárik: Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods. CoRR abs/1712.09677 (2017)
- 2016
- [j15] Peter Richtárik, Martin Takác: Distributed Coordinate Descent Method for Learning with Big Data. J. Mach. Learn. Res. 17: 75:1-75:25 (2016)
- [j14] Rachael Tappenden, Peter Richtárik, Jacek Gondzio: Inexact Coordinate Descent: Complexity and Preconditioning. J. Optim. Theory Appl. 170(1): 144-176 (2016)
- [j13] Jakub Konecný, Jie Liu, Peter Richtárik, Martin Takác: Mini-Batch Semi-Stochastic Gradient Descent in the Proximal Setting. IEEE J. Sel. Top. Signal Process. 10(2): 242-255 (2016)
- [j12] Peter Richtárik, Martin Takác: Parallel coordinate descent methods for big data optimization. Math. Program. 156(1-2): 433-484 (2016)
- [j11] Peter Richtárik, Martin Takác: On optimal probabilities in stochastic coordinate descent methods. Optim. Lett. 10(6): 1233-1243 (2016)
- [j10] Zheng Qu, Peter Richtárik: Coordinate descent with arbitrary sampling I: algorithms and complexity. Optim. Methods Softw. 31(5): 829-857 (2016)
- [j9] Zheng Qu, Peter Richtárik: Coordinate descent with arbitrary sampling II: expected separable overapproximation. Optim. Methods Softw. 31(5): 858-884 (2016)
- [j8]