John D. Schulman
#155,882
Most Influential Person Now
John D. Schulman's AcademicInfluence.com Rankings
John D. Schulman's Computer Science Degrees
Computer Science
#8566
World Rank
#9004
Historical Rank
Machine Learning
#3542
World Rank
#3586
Historical Rank
Artificial Intelligence
#3851
World Rank
#3907
Historical Rank
Database
#5565
World Rank
#5774
Historical Rank

John D. Schulman's Degrees
- PhD, Computer Science, University of California, Berkeley
- Bachelor's, Computer Science, Stanford University
Why Is John D. Schulman Influential?
John D. Schulman's Published Works
Citations per year to any of this author's works
Total citations to the works this author published in a given year, highlighting when the author's most important work(s) appeared
Published Works
- Proximal Policy Optimization Algorithms (2017) (8568)
- Trust Region Policy Optimization (2015) (4825)
- InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets (2016) (3593)
- OpenAI Gym (2016) (3297)
- Theano: A Python framework for fast computation of mathematical expressions (2016) (2219)
- High-Dimensional Continuous Control Using Generalized Advantage Estimation (2015) (2133)
- Concrete Problems in AI Safety (2016) (1511)
- Benchmarking Deep Reinforcement Learning for Continuous Control (2016) (1421)
- On First-Order Meta-Learning Algorithms (2018) (1409)
- Training language models to follow instructions with human feedback (2022) (797)
- RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning (2016) (761)
- Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations (2017) (684)
- VIME: Variational Information Maximizing Exploration (2016) (619)
- Spike sorting for large, dense electrode arrays (2015) (576)
- Variational Lossy Autoencoder (2016) (576)
- Motion planning with sequential convex optimization and convex collision checking (2014) (566)
- #Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning (2016) (562)
- Reptile: a Scalable Metalearning Algorithm (2018) (457)
- Quantifying Generalization in Reinforcement Learning (2018) (448)
- Finding Locally Optimal, Collision-Free Trajectories with Sequential Convex Optimization (2013) (428)
- Gradient Estimation Using Stochastic Computation Graphs (2015) (336)
- Leveraging Procedural Generation to Benchmark Reinforcement Learning (2019) (314)
- Meta Learning Shared Hierarchies (2017) (282)
- Equivalence Between Policy Gradients and Soft Q-Learning (2017) (263)
- Teacher–Student Curriculum Learning (2017) (257)
- Training Verifiers to Solve Math Word Problems (2021) (222)
- Model-Based Reinforcement Learning via Meta-Policy Optimization (2018) (165)
- WebGPT: Browser-assisted question-answering with human feedback (2021) (161)
- Tracking deformable objects with point clouds (2013) (158)
- Scaling Laws for Autoregressive Generative Modeling (2020) (146)
- Defensive Quantization: When Efficiency Meets Robustness (2018) (145)
- Gotta Learn Fast: A New Benchmark for Generalization in RL (2018) (142)
- Learning from Demonstrations Through the Use of Non-rigid Registration (2013) (134)
- Unsolved Problems in ML Safety (2021) (91)
- UCB Exploration via Q-Ensembles (2018) (87)
- Scaling up Gaussian Belief Space Planning Through Covariance-Free Trajectory Optimization and Automatic Differentiation (2014) (86)
- A case study of trajectory transfer through non-rigid registration for a simplified suturing scenario (2013) (83)
- Amplitude compression and profound hearing loss (1988) (78)
- Phasic Policy Gradient (2020) (75)
- Curiosity-driven Exploration in Deep Reinforcement Learning via Bayesian Neural Networks (2016) (66)
- Variational Information Maximizing Exploration (2016) (46)
- Distribution Augmentation for Generative Modeling (2020) (41)
- Sigma hulls for Gaussian belief space planning for imprecise articulated robots amid obstacles (2013) (39)
- Optimizing Expectations: From Deep Reinforcement Learning to Stochastic Computation Graphs (2016) (36)
- Gaussian belief space planning with discontinuities in sensing domains (2014) (30)
- Policy Gradient Search: Online Planning and Expert Iteration without Search Trees (2019) (28)
- Efficient Training of Language Models to Fill in the Middle (2022) (25)
- Generalization in Robotic Manipulation Through The Use of Non-Rigid Registration (2013) (21)
- Grasping and Fixturing as Submodular Coverage Problems (2011) (21)
- Semi-Supervised Learning by Label Gradient Alignment (2019) (19)
- Planning locally optimal, curvature-constrained trajectories in 3D using sequential convex optimization (2014) (17)
- Scaling Laws for Reward Model Overoptimization (2022) (14)
- Reinforced Variational Inference (2015) (13)
- Measuring Sample Efficiency and Generalization in Reinforcement Learning Benchmarks: NeurIPS 2020 Procgen Benchmark (2021) (10)
- UCB and InfoGain Exploration via $\boldsymbol{Q}$-Ensembles (2017) (4)
- Batch size-invariance for policy optimization (2021) (4)
- Gaussian Belief Space Planning for Imprecise Articulated Robots (2013) (3)
- Scaling laws for single-agent reinforcement learning (2023) (3)
- PixelEDL: Unsupervised Skill Discovery and Learning from Pixels (2021) (1)
- Kähler-Einstein and Kähler scalar flat supermanifolds (2016) (1)
- Energetics and Error Rates of Self-Correcting Quantum Memories (2008) (0)
- Learning 2D Linear Dynamics in Image Space Using Neural Networks (2014) (0)
- Variational Inference as Reinforcement Learning (2015) (0)
- Understanding hippocampal phase precession and phase relationships using phase response curves (2008) (0)
- Conditional Augmentation for Generative Modeling (2020) (0)
- Task Pipeline Specification and Scheduling (2014) (0)
- Static charging: old and new mysteries (2010) (0)