Peter Richtarik
#83,520 Most Influential Person Now
Slovak mathematician
Peter Richtarik's AcademicInfluence.com Rankings
Peter Richtarik's Mathematics Degrees
- Mathematics: #5668 World Rank, #7968 Historical Rank
- Applied Mathematics: #236 World Rank, #261 Historical Rank
- Measure Theory: #1448 World Rank, #1813 Historical Rank

Why Is Peter Richtarik Influential?
According to Wikipedia, Peter Richtarik is a Slovak mathematician and computer scientist working in big data optimization and machine learning, known for his work on randomized coordinate descent algorithms, stochastic gradient descent, and federated learning. He is currently a Professor of Computer Science at the King Abdullah University of Science and Technology.
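The paragraph above names randomized coordinate descent and stochastic gradient descent as the algorithm families he is best known for. As a rough illustration only, and not code taken from any of the papers listed below, the following minimal Python sketch shows a generic randomized coordinate descent loop for a least-squares objective; the function name, the uniform coordinate sampling, and the per-coordinate Lipschitz step sizes are illustrative choices, not a specific published method.

```python
import numpy as np

def randomized_coordinate_descent(A, b, n_iters=5000, seed=0):
    """Minimize f(x) = 0.5 * ||Ax - b||^2 by updating one randomly chosen
    coordinate per iteration. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    L = np.sum(A ** 2, axis=0)       # per-coordinate Lipschitz constants L_i = ||A[:, i]||^2
    residual = A @ x - b             # maintained incrementally, O(m) work per step
    for _ in range(n_iters):
        i = rng.integers(n)          # uniform coordinate sampling
        grad_i = A[:, i] @ residual  # partial derivative of f with respect to x_i
        step = grad_i / L[i]         # exact minimization along coordinate i (quadratic f)
        x[i] -= step
        residual -= step * A[:, i]   # keep residual = A @ x - b up to date
    return x

# Tiny usage example on a consistent over-determined system
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 10))
b = A @ np.ones(10)
x_hat = randomized_coordinate_descent(A, b)
print(np.linalg.norm(A @ x_hat - b))  # should be near zero
```

Many of the titles below can be read as generalizations of exactly this kind of loop: non-uniform and arbitrary sampling of coordinates, parallel and distributed updates, acceleration, variance reduction, and compressed communication in the federated setting.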
Peter Richtarik's Published Works
[Citation charts: (1) number of citations in a given year to any of this author's works; (2) total number of citations to the works the author published in a given year, highlighting the author's most important work.]
Published Works
- Federated Learning: Strategies for Improving Communication Efficiency (2016) (2898)
- Federated Optimization: Distributed Machine Learning for On-Device Intelligence (2016) (1235)
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function (2011) (753)
- Generalized Power Method for Sparse Principal Component Analysis (2008) (597)
- Parallel coordinate descent methods for big data optimization (2012) (479)
- Accelerated, Parallel, and Proximal Coordinate Descent (2013) (363)
- Tighter Theory for Local SGD on Identical and Heterogeneous Data (2019) (264)
- Mini-Batch Semi-Stochastic Gradient Descent in the Proximal Setting (2014) (251)
- Distributed Coordinate Descent Method for Learning with Big Data (2013) (225)
- Semi-Stochastic Gradient Descent Methods (2013) (220)
- SGD: General Analysis and Improved Rates (2019) (219)
- Randomized Iterative Methods for Linear Systems (2015) (219)
- Federated Learning of a Mixture of Global and Local Models (2020) (211)
- Scaling Distributed Machine Learning with In-Network Aggregation (2019) (197)
- Mini-Batch Primal and Dual Methods for SVMs (2013) (195)
- A Field Guide to Federated Optimization (2021) (195)
- Adding vs. Averaging in Distributed Primal-Dual Optimization (2015) (160)
- SGD and Hogwild! Convergence Without the Bounded Gradients Assumption (2018) (157)
- Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications (2017) (157)
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods (2017) (154)
- Distributed optimization with arbitrary local solvers (2015) (152)
- Even Faster Accelerated Coordinate Descent Using Non-Uniform Sampling (2015) (151)
- Distributed Learning with Compressed Gradient Differences (2019) (143)
- Stochastic Block BFGS: Squeezing More Curvature out of Data (2016) (138)
- AIDE: Fast and Communication Efficient Distributed Optimization (2016) (132)
- On optimal probabilities in stochastic coordinate descent methods (2013) (124)
- Stochastic Distributed Learning with Gradient Quantization and Variance Reduction (2019) (120)
- Coordinate descent with arbitrary sampling I: algorithms and complexity (2014) (118)
- First Analysis of Local GD on Heterogeneous Data (2019) (117)
- Lower Bounds and Optimal Algorithms for Personalized Federated Learning (2020) (114)
- Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop (2019) (109)
- Inexact Coordinate Descent: Complexity and Preconditioning (2013) (107)
- Importance Sampling for Minibatches (2016) (103)
- On Biased Compression for Distributed Learning (2020) (95)
- Stochastic Dual Coordinate Ascent with Adaptive Probabilities (2015) (93)
- SDNA: Stochastic Dual Newton Ascent for Empirical Risk Minimization (2015) (93)
- A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent (2019) (92)
- Natural Compression for Distributed Deep Learning (2019) (88)
- Quartz: Randomized Dual Coordinate Ascent with Arbitrary Sampling (2015) (85)
- Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization (2020) (85)
- Randomized Distributed Mean Estimation: Accuracy vs. Communication (2016) (81)
- Coordinate descent with arbitrary sampling II: expected separable overapproximation (2014) (81)
- Optimal Client Sampling for Federated Learning (2020) (77)
- Stochastic quasi-gradient methods: variance reduction via Jacobian sketching (2018) (76)
- From Local SGD to Local Fixed Point Methods for Federated Learning (2020) (74)
- Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory (2017) (72)
- Better Theory for SGD in the Nonconvex World (2020) (72)
- Efficient Serial and Parallel Coordinate Descent Methods for Huge-Scale Truss Topology Design (2011) (69)
- Randomized Quasi-Newton Updates Are Linearly Convergent Matrix Inversion Algorithms (2016) (68)
- Distributed Block Coordinate Descent for Minimizing Partially Separable Functions (2014) (65)
- Semi-stochastic coordinate descent (2014) (63)
- Local SGD: Unified Theory and New Efficient Methods (2020) (62)
- Stochastic Dual Ascent for Solving Linear Systems (2015) (61)
- Randomized Dual Coordinate Ascent with Arbitrary Sampling (2014) (61)
- Random Reshuffling: Simple Analysis with Vast Improvements (2020) (61)
- PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization (2020) (60)
- Fast distributed coordinate descent for non-strongly convex losses (2014) (60)
- Accelerated Bregman proximal gradient methods for relatively smooth convex optimization (2018) (58)
- New Convergence Aspects of Stochastic Gradient Algorithms (2018) (53)
- Linearly Converging Error Compensated SGD (2020) (51)
- SEGA: Variance Reduction via Gradient Sketching (2018) (50)
- Distributed Mini-Batch SDCA (2015) (49)
- MARINA: Faster Non-Convex Distributed Learning with Compression (2021) (48)
- Variance-Reduced Methods for Machine Learning (2020) (47)
- EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback (2021) (47)
- Optimal and Practical Algorithms for Smooth and Strongly Convex Decentralized Optimization (2020) (46)
- RSN: Randomized Subspace Newton (2019) (46)
- Smooth minimization of nonsmooth functions with parallel coordinate descent methods (2013) (45)
- Revisiting Stochastic Extragradient (2019) (43)
- A Linearly Convergent Algorithm for Decentralized Optimization: Sending Less Bits for Free! (2020) (42)
- Optimal Gradient Compression for Distributed and Federated Learning (2020) (40)
- Separable approximations and decomposition methods for the augmented Lagrangian (2013) (39)
- Linearly convergent stochastic heavy ball method for minimizing generalization error (2017) (36)
- Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization (2018) (36)
- Uncertainty Principle for Communication Compression in Distributed and Federated Learning and the Search for an Optimal Compressor (2020) (35)
- Stochastic Three Points Method for Unconstrained Smooth Minimization (2019) (34)
- Accelerated Coordinate Descent with Arbitrary Sampling and Best Rates for Minibatches (2018) (33)
- A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning (2020) (33)
- A Randomized Exchange Algorithm for Computing Optimal Approximate Designs of Experiments (2018) (33)
- Parallel coordinate descent methods for big data optimization (2015) (32)
- Nonconvex Variance Reduced Optimization with Arbitrary Sampling (2018) (31)
- A new perspective on randomized gossip algorithms (2016) (30)
- Optimization in High Dimensions via Accelerated, Parallel, and Proximal Coordinate Descent (2016) (29)
- Stochastic Newton and Cubic Newton Methods with Simple Local Linear-Quadratic Rates (2019) (29)
- A Unified Analysis of Stochastic Gradient Methods for Nonconvex Federated Optimization (2020) (29)
- Simple Complexity Analysis of Simplified Direct Search (2014) (28)
- On the complexity of parallel coordinate descent (2015) (28)
- Global Convergence of Arbitrary-Block Gradient Methods for Generalized Polyak-Łojasiewicz Functions (2017) (27)
- Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization (2020) (27)
- Alternating maximization: unifying framework for 8 sparse PCA formulations and efficient parallel codes (2012) (27)
- Fastest rates for stochastic mirror descent methods (2018) (27)
- Randomized projection methods for convex feasibility problems: conditioning and convergence rates (2018) (26)
- Distributed Second Order Methods with Fast Rates and Compressed Communication (2021) (26)
- Randomized Block Cubic Newton Method (2018) (26)
- Randomized Projection Methods for Convex Feasibility: Conditioning and Convergence Rates (2019) (26)
- Better Communication Complexity for Local SGD (2019) (25)
- Linearly Convergent Randomized Iterative Methods for Computing the Pseudoinverse (2016) (25)
- Accelerated Gossip via Stochastic Heavy Ball Method (2018) (25)
- FedNL: Making Newton-Type Methods Applicable to Federated Learning (2021) (24)
- L-SVRG and L-Katyusha with Arbitrary Sampling (2019) (24)
- Primal Dual Interpretation of the Proximal Stochastic Gradient Langevin Algorithm (2020) (23)
- Stochastic Subspace Cubic Newton Method (2020) (22)
- SAGA with Arbitrary Sampling (2019) (22)
- ZeroSARAH: Efficient Nonconvex Finite-Sum Optimization with Zero Full Gradient Computation (2021) (22)
- Error Compensated Distributed SGD Can Be Accelerated (2020) (22)
- Primal Method for ERM with Flexible Mini-batching Schemes and Non-convex Losses (2015) (21)
- One Method to Rule Them All: Variance Reduction for Data, Parameters and Many New Methods (2019) (21)
- Revisiting Randomized Gossip Algorithms: General Framework, Convergence Rates and Novel Block and Accelerated Protocols (2019) (20)
- Stochastic Sign Descent Methods: New Algorithms and Better Theory (2019) (20)
- SGD with Arbitrary Sampling: General Analysis and Improved Rates (2019) (19)
- Privacy preserving randomized gossip algorithms (2017) (19)
- EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback (2021) (19)
- ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks (2021) (19)
- FedPAGE: A Fast Local Stochastic Gradient Method for Communication-Efficient Federated Learning (2021) (19)
- S2CD: Semi-stochastic coordinate descent (2014) (18)
- A Nonconvex Projection Method for Robust PCA (2018) (18)
- Provably Accelerated Randomized Gossip Algorithms (2018) (18)
- Convergence Analysis of Inexact Randomized Iterative Methods (2019) (18)
- Improved Algorithms for Convex Minimization in Relative Scale (2011) (18)
- Proximal and Federated Random Reshuffling (2021) (18)
- Accelerated Primal-Dual Gradient Method for Smooth and Convex-Concave Saddle-Point Problems with Bilinear Coupling (2021) (16)
- Fast Linear Convergence of Randomized BFGS (2020) (16)
- Stochastic Proximal Langevin Algorithm: Potential Splitting and Nonasymptotic Rates (2019) (16)
- Dualize, Split, Randomize: Fast Nonsmooth Optimization Algorithms (2020) (16)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks (2021) (15)
- Gradient Descent with Compressed Iterates (2019) (15)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization (2020) (15)
- Matrix completion under interval uncertainty (2017) (15)
- A Stochastic Decoupling Method for Minimizing the Sum of Smooth and Non-Smooth Functions (2019) (14)
- Distributed Proximal Splitting Algorithms with Rates and Acceleration (2020) (14)
- Approximate Level Method for Nonsmooth Convex Minimization (2012) (13)
- Efficiency of randomized coordinate descent methods on minimization problems with a composite objective function (2011) (13)
- An Optimal Algorithm for Strongly Convex Minimization under Affine Constraints (2021) (13)
- Coordinate Descent Face-Off: Primal or Dual? (2016) (13)
- Permutation Compressors for Provably Faster Distributed Nonconvex Optimization (2021) (13)
- Parallel Stochastic Newton Method (2017) (12)
- Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems (2020) (12)
- A Stochastic Derivative Free Optimization Method with Momentum (2019) (12)
- 99% of Parallel Optimization is Inevitably a Waste of Time (2019) (11)
- Faster PET reconstruction with a stochastic primal-dual hybrid gradient method (2017) (11)
- Distributed Fixed Point Methods with Compressed Iterates (2019) (11)
- CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression (2021) (11)
- MISO is Making a Comeback With Better Proofs and Rates (2019) (11)
- A Batch-Incremental Video Background Estimation Model Using Weighted Low-Rank Approximation of Matrices (2017) (10)
- Quasi-Newton methods for machine learning: forget the past, just sample (2019) (10)
- MURANA: A Generic Framework for Stochastic Variance-Reduced Optimization (2021) (10)
- Simultaneously solving seven optimization problems in relative scale (2009) (10)
- 99% of Worker-Master Communication in Distributed Optimization Is Not Needed (2020) (10)
- Optimal Algorithms for Decentralized Stochastic Variational Inequalities (2022) (10)
- A Stochastic Derivative-Free Optimization Method with Importance Sampling: Theory and Learning to Control (2019) (10)
- Weighted Low-Rank Approximation of Matrices and Background Modeling (2018) (9)
- A Privacy Preserving Randomized Gossip Algorithm via Controlled Noise Insertion (2019) (8)
- Stochastic Spectral and Conjugate Descent Methods (2018) (8)
- IntSGD: Floatless Compression of Stochastic Gradients (2021) (7)
- Adaptive Learning of the Optimal Mini-Batch Size of SGD (2020) (7)
- FL_PyTorch: optimization research simulator for federated learning (2021) (7)
- 99% of Distributed Optimization is a Waste of Time: The Issue and How to Fix it (2019) (7)
- Random Reshuffling with Variance Reduction: New Analysis and Better Rates (2021) (7)
- On Stochastic Sign Descent Methods (2019) (7)
- Best Pair Formulation & Accelerated Scheme for Non-Convex Principal Component Pursuit (2019) (7)
- Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization (2021) (7)
- Dualize, Split, Randomize: Toward Fast Nonsmooth Optimization Algorithms (2020) (7)
- Approximate level method (2008) (6)
- Improving SAGA via a Probabilistic Interpolation with Gradient Descent (2018) (6)
- RandProx: Primal-Dual Optimization Algorithms with Randomized Proximal Updates (2022) (6)
- Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees (2021) (6)
- IntSGD: Adaptive Floatless Compression of Stochastic Gradients (2021) (5)
- A Convergence Theory for SVGD in the Population Limit under Talagrand's Inequality T1 (2021) (5)
- Simple Complexity Analysis of Direct Search (2014) (5)
- A Stochastic Penalty Model for Convex and Nonconvex Optimization with Big Constraints (2018) (5)
- Hyperparameter Transfer Learning with Adaptive Complexity (2021) (5)
- Online and Batch Supervised Background Estimation Via L1 Regression (2017) (5)
- Convergence of Stein Variational Gradient Descent under a Weaker Smoothness Condition (2022) (4)
- Doubly Adaptive Scaled Algorithm for Machine Learning Using Second-Order Information (2021) (4)
- Inequality-Constrained Matrix Completion: Adding the Obvious Helps! (2014) (4)
- Complexity Analysis of Stein Variational Gradient Descent Under Talagrand's Inequality T1 (2021) (4)
- Communication Acceleration of Local Gradient Methods via an Accelerated Primal-Dual Algorithm with Inexact Prox (2022) (4)
- High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance (2023) (3)
- Error Compensated Loopless SVRG, Quartz, and SDCA for Distributed Optimization (2021) (3)
- AI-SARAH: Adaptive and Implicit Stochastic Recursive Gradient Methods (2021) (3)
- Error Compensated Loopless SVRG for Distributed Optimization (2020) (3)
- Stochastic distributed learning with gradient quantization and double-variance reduction (2022) (3)
- Federated Random Reshuffling with Compression and Variance Reduction (2022) (2)
- Some algorithms for large-scale linear and convex minimization in relative scale (2007) (2)
- Finding sparse approximations to extreme eigenvectors: generalized power method for sparse PCA and extensions (2011) (2)
- Coordinate Descent Faceoff: Primal or Dual? (2018) (1)
- Error Compensated Proximal SGD and RDA (2020) (1)
- A Note on the Convergence of Mirrored Stein Variational Gradient Descent under (L0, L1)-Smoothness Condition (2022) (1)
- Smoothness-Aware Quantization Techniques (2021) (1)
- Adaptive Learning Rates for Faster Stochastic Gradient Methods (2022) (1)
- On the Convergence Analysis of Asynchronous SGD for Solving Consistent Linear Systems (2020) (1)
- Improved Stein Variational Gradient Descent with Importance Weights (2022) (1)
- Programme on “Modern Maximal Monotone Operator Theory: From Nonsmooth Optimization to Differential Inclusions”, January 28 – March 8, 2019 (2019) (0)
- Inexact Coordinate Descent: Complexity and Preconditioning (2016) (0)
- Approximate level method (2008) (0)
- Accelerated, Parallel and Proximal Coordinate Descent (2014) (0)
- Semi-Stochastic Gradient Descent Methods (2013) (0)
- Matrix Completion Under Interval Uncertainty: Highlights (2018) (0)
- Coordinate descent with arbitrary sampling I: algorithms and complexity (2015) (0)
- Acceleration for Compressed Gradient Descent in Distributed Optimization (2020) (0)
- Alternating Maximization (2012) (0)
- Extending the Reach of Big Data Optimization: Randomized Algorithms for Minimizing Relatively Smooth Functions (2017) (0)
- On Server-Side Stepsizes in Federated Optimization: Theory Explaining the Heuristics (2021) (0)
- Randomized Distributed Mean Estimation: Accuracy vs. Communication (2018) (0)
- Coordinate Descent with Arbitrary Sampling I: Algorithms and Complexity (2015) (0)
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function (2012) (0)
- Approximate Level Method for Nonsmooth Convex Minimization (2011) (0)
- Convergence of First-Order Algorithms for Meta-Learning with Moreau Envelopes (2023) (0)
- IntML: Natural Compression for Distributed Deep Learning (2019) (0)
- Coordinate Descent with Arbitrary Sampling I: Algorithms and Complexity (2018) (0)
- The complexity of primal-dual fixed point methods for ridge regression (2018) (0)
- Stochastic Reformulations of Linear Systems (2017) (0)
- Inequality-Constrained Matrix Completion (2014) (0)
- A Damped Newton Method Achieves Global $\mathcal O \left(\frac{1}{k^2}\right)$ and Local Quadratic Convergence Rate (2022) (0)
- Coordinate Descent with Arbitrary Sampling I: Algorithms and Complexity (2018) (0)
- Some algorithms for large-scale linear and convex minimization in relative scale (2007) (0)
- Stochastic Convolutional Sparse Coding (2019) (0)
- A Damped Newton Method Achieves Global $O\left(\frac{1}{k^2}\right)$ and Local Quadratic Convergence Rate (2022) (0)
- Matrix Completion under Interval Uncertainty (2017) (0)
- Optimal diagnostic tests for sporadic Creutzfeldt-Jakob disease based on support vector machine classification of RT-QuIC data (2012) (0)
- Direct Nonlinear Acceleration (2019) (0)
- Simple Complexity Analysis of Direct Search (2014) (0)
- On Optimal Solutions to Planetesimal Growth Models (2015) (0)
- TOP-SPIN: TOPic discovery via Sparse Principal component INterference (2013) (0)
- TOPic discovery via Sparse Principal component INterference (2013) (0)
- Adaptive Compression for Communication-Efficient Distributed Training (2022) (0)
- Inexact Coordinate Descent: Complexity and Preconditioning (2013) (0)
Other Resources About Peter Richtarik
What Schools Are Affiliated With Peter Richtarik?
Peter Richtarik is affiliated with the following schools:
- King Abdullah University of Science and Technology
- University of Edinburgh