Rachel Ward
Most Influential Person Now: #41,523
American mathematician
Rachel Ward's AcademicInfluence.com Rankings
- Mathematics: World Rank #3580, Historical Rank #5261
- Measure Theory: World Rank #3109, Historical Rank #3686
Rachel Ward's Degrees
- PhD in Mathematics, Princeton University
- Master's in Mathematics, Stanford University
Why Is Rachel Ward Influential?
According to Wikipedia, Rachel Ward is an American applied mathematician at the University of Texas at Austin. She is known for her work on machine learning, optimization, and signal processing. At the University of Texas, she is the W. A. "Tex" Moncrief Distinguished Professor in Computational Engineering and Sciences—Data Science, and a professor of mathematics.
Rachel Ward's Published Works
- Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm (2013) (478)
- New and Improved Johnson-Lindenstrauss Embeddings via the Restricted Isometry Property (2010) (309)
- Stable Image Reconstruction Using Total Variation Minimization (2012) (235)
- Sparse Legendre expansions via l1-minimization (2012) (225)
- Low-rank Matrix Recovery via Iteratively Reweighted Least Squares Minimization (2010) (215)
- Stable and Robust Sampling Strategies for Compressive Imaging (2012) (180)
- AdaGrad stepsizes: sharp convergence over nonconvex landscapes (2019) (176)
- Interpolation via weighted $l_1$ minimization (2013) (149)
- Exact Recovery of Chaotic Systems from Highly Corrupted Data (2016) (149)
- Compressed Sensing With Cross Validation (2008) (146)
- One-Bit Compressive Sensing With Norm Estimation (2014) (145)
- Coherent Matrix Completion (2013) (115)
- Extracting Sparse High-Dimensional Dynamics from Limited Data (2017) (112)
- Relax, No Need to Round: Integrality of Clustering Formulations (2014) (109)
- Completing any low-rank matrix, provably (2013) (93)
- AdaGrad stepsizes: Sharp convergence over nonconvex landscapes, from any initialization (2018) (88)
- Near-Optimal Compressed Sensing Guarantees for Total Variation Minimization (2012) (86)
- Clustering subgaussian mixtures by semidefinite programming (2016) (86)
- Sparse recovery for spherical harmonic expansions (2011) (80)
- The Local Convexity of Solving Systems of Quadratic Equations (2015) (73)
- WNGrad: Learn the Learning Rate in Gradient Descent (2018) (60)
- Two-Subspace Projection Method for Coherent Overdetermined Systems (2012) (56)
- Global Convergence of Adaptive Gradient Methods for An Over-parameterized Neural Network (2019) (55)
- Iterative Thresholding Meets Free-Discontinuity Problems (2009) (49)
- Compressive sensing with redundant dictionaries and structured measurements (2015) (46)
- A near-stationary subspace for ridge approximation (2016) (46)
- Root-Exponential Accuracy for Coarse Quantization of Finite Frame Expansions (2012) (37)
- Stochastic gradient descent and the randomized Kaczmarz algorithm (2013) (36)
- Extracting structured dynamical systems using sparse optimization with very few samples (2018) (35)
- Recovery guarantees for exemplar-based clustering (2013) (34)
- Subfunctionalization: How often does it occur? How long does it take? (2004) (33)
- Linear Convergence of Adaptive Stochastic Gradient Descent (2019) (33)
- Batched Stochastic Gradient Descent with Weighted Sampling (2016) (32)
- Weighted Eigenfunction Estimates with Applications to Compressed Sensing (2011) (30)
- Computing the confidence levels for a root-mean-square test of goodness-of-fit (2010) (28)
- Matrix Concentration for Products (2020) (28)
- Faster Johnson-Lindenstrauss Transforms via Kronecker Products (2019) (27)
- MC^2: A Two-Phase Algorithm for Leveraged Matrix Completion (2016) (24)
- Improved bounds for sparse recovery from subsampled random convolutions (2016) (22)
- Learning Dynamical Systems and Bifurcation via Group Sparsity (2017) (22)
- The Local Convexity of Solving Quadratic Equations (2015) (19)
- Lower bounds for the error decay incurred by coarse quantization schemes (2010) (19)
- On the Complexity of Mumford–Shah-Type Regularization, Viewed as a Relaxed Sparsity Constraint (2010) (18)
- The Sample Complexity of Weighted Sparse Approximation (2015) (16)
- Compressive imaging: stable and robust recovery from variable density frequency samples (2012) (16)
- AdaOja: Adaptive Learning Rates for Streaming PCA (2019) (15)
- A Symbol-Based Algorithm for Decoding Bar Codes (2012) (14)
- A Unified Framework for Linear Dimensionality Reduction in L1 (2014) (14)
- Fast Cross-Polytope Locality-Sensitive Hashing (2016) (14)
- Generalization bounds for sparse random feature expansions (2021) (14)
- An arithmetic–geometric mean inequality for products of three matrices (2014) (14)
- A polynomial-time relaxation of the Gromov-Hausdorff distance (2016) (13)
- Testing goodness-of-fit for logistic regression (2013) (12)
- Implicit Regularization of Normalization Methods (2019) (11)
- Implicit Regularization and Convergence for Weight Normalization (2019) (11)
- Testing Hardy-Weinberg equilibrium with a simple root-mean-square statistic. (2012) (11)
- Clustering subgaussian mixtures with k-means (2016) (11)
- Sparse Legendre expansions via $\ell_1$ minimization (2010) (11)
- Concentration inequalities for random matrix products (2019) (10)
- Weighted Optimization: better generalization by smoother interpolation (2020) (10)
- Chi-square and classical exact tests often wildly misreport significance; the remedy lies in computers (2011) (10)
- Streaming k-PCA: Efficient guarantees for Oja's algorithm, beyond rank-one updates (2021) (10)
- The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance (2022) (10)
- Sample Efficiency of Data Augmentation Consistency Regularization (2022) (7)
- Recovery guarantees for polynomial coefficients from weakly dependent data with outliers (2020) (7)
- Cross Validation in Compressed Sensing via the Johnson Lindenstrauss Lemma (2008) (6)
- Total variation minimization for stable multidimensional signal recovery (2012) (6)
- SHRIMP: Sparser Random Feature Models via Iterative Magnitude Pruning (2021) (6)
- A comparison of the discrete Kolmogorov-Smirnov statistic and the Euclidean distance (2012) (6)
- Johnson-Lindenstrauss Embeddings with Kronecker Structure (2021) (6)
- Significance Testing Without Truth (2012) (5)
- Bootstrapping the Error of Oja's Algorithm (2021) (5)
- Compressed sensing with a jackknife and a bootstrap (2018) (5)
- Learning to Forecast Dynamical Systems from Streaming Data (2021) (5)
- How catastrophic can catastrophic forgetting be in linear regression? (2022) (5)
- Near-optimal compressed sensing guarantees for anisotropic and isotropic total variation minimization (2013) (5)
- Local coherence sampling in compressed sensing (2013) (4)
- Function Approximation via Sparse Random Features (2021) (4)
- Bias of Homotopic Gradient Descent for the Hinge Loss (2019) (3)
- Some deficiencies of χ2 and classical exact tests of significance (2014) (3)
- Learning the second-moment matrix of a smooth function from point samples (2016) (3)
- An Exponentially Increasing Step-size for Parameter Estimation in Statistical Models (2022) (3)
- Quiet sigma delta quantization, and global convergence for a class of asymmetric piecewise-affine maps (2010) (3)
- Concentration of Random Feature Matrices in High-Dimensions (2022) (3)
- Greedy Variance Estimation for the LASSO (2018) (3)
- Importance sampling in signal processing applications (2015) (3)
- AdaLoss: A computationally-efficient and provably convergent adaptive gradient method (2021) (3)
- The Hanson–Wright inequality for random tensors (2021) (3)
- An introduction to how chi-square and classical exact tests often wildly misreport significance and how the remedy lies in computers (2012) (2)
- Side effects of learning from low-dimensional data embedded in a Euclidean space (2022) (2)
- Recovery guarantees for polynomial approximation from dependent data with outliers (2018) (2)
- Learning the Differential Correlation Matrix of a Smooth Function From Point Samples (2016) (1)
- Arbitrary-length analogs to de Bruijn sequences (2021) (1)
- Overparameterization and Generalization Error: Weighted Trigonometric Interpolation (2020) (1)
- Concentration Inequalities for Sums of Markov Dependent Random Matrices (2023) (1)
- Linear dimension reduction in the L₁ norm: When and how is it possible? (2014) (1)
- On the fast convergence of minibatch heavy ball momentum (2022) (1)
- Efficient and stable recovery of Legendre-sparse polynomials (2010) (1)
- Scalable symmetric Tucker tensor decomposition (2022) (0)
- Stability for second-order chaotic sigma delta quantization (2011) (0)
- AdaWAC: Adaptively Weighted Augmentation Consistency Regularization for Volumetric Medical Image Segmentation (2022) (0)
- A Symbol-based Bar Code Decoding Algorithm (2012) (0)
- Greedy Variance Estimation – The Orthonormal Case (2018) (0)
- ICES REPORT 12-34 August 2012 Significance testing without truth (2012) (0)
- Median Balancing: A Linearly Convergent Algorithm for Time Gain Power Correction (2015) (0)
- Reliable Function Approximation and Estimation (2016) (0)
- Mini-Workshop: Mathematical Physics meets Sparse Recovery (2014) (0)
- Computer-enabled Metrics of Statistical Significance for Discrete Data (2014) (0)
- Extracting High-Dimensional Dynamics from Limited Data (2017) (0)
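The most-cited paper above studies stochastic gradient descent with weighted sampling and the randomized Kaczmarz algorithm. As a minimal illustration of the topic (not code from the paper), the sketch below implements the classic randomized Kaczmarz iteration for a consistent linear system, sampling each row with probability proportional to its squared norm; the matrix `A` and solution `x_true` are made-up example data.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    """Solve a consistent system Ax = b by randomized row projections.

    At each step, a row a_i is drawn with probability proportional to
    ||a_i||^2, and the iterate is projected onto the hyperplane
    a_i . x = b_i. This is the weighted-sampling scheme analyzed in the
    SGD / randomized Kaczmarz line of work.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    row_norms = np.einsum("ij,ij->i", A, A)   # squared row norms
    probs = row_norms / row_norms.sum()       # sampling distribution
    for _ in range(iters):
        i = rng.choice(m, p=probs)            # pick a row at random
        a_i = A[i]
        # Project the current iterate onto {x : a_i . x = b_i}.
        x += (b[i] - a_i @ x) / row_norms[i] * a_i
    return x

# Small made-up consistent system: the iterates converge to x_true.
A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, -1.0]])
x_true = np.array([1.0, -2.0])
b = A @ x_true
x_hat = randomized_kaczmarz(A, b)
```

For a consistent overdetermined system like this one, the expected error contracts geometrically at each step, so a few thousand iterations recover the solution to high precision.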
Other Resources About Rachel Ward
What Schools Are Affiliated With Rachel Ward?
Rachel Ward is affiliated with the following schools: