Andrew Barto
#7,049
Most Influential Person Now
Professor of computer science
Andrew Barto's AcademicInfluence.com Rankings
Andrew Barto's Computer Science Degrees
- Computer Science: #413 World Rank, #429 Historical Rank, #234 USA Rank
- Database: #238 World Rank, #247 Historical Rank, #114 USA Rank
Andrew Barto's Degrees
- PhD in Computer Science, University of Michigan
Why Is Andrew Barto Influential?
According to Wikipedia, Andrew G. Barto is an American computer scientist, currently Professor Emeritus of computer science at the University of Massachusetts Amherst. He is best known for his foundational contributions to the field of modern computational reinforcement learning.
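To give a flavor of that field, the sketch below (not drawn from this page) shows the tabular TD(0) temporal-difference update applied to a toy random-walk prediction task; the state count, step size, and episode budget are illustrative choices, not values from any of Barto's papers.

```python
import random

# A minimal, illustrative sketch: tabular TD(0) value prediction on a
# 5-state random walk. It demonstrates the temporal-difference update
#   V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))
# at the core of computational reinforcement learning.

N_STATES = 5             # non-terminal states 0..4; episodes end off either side
ALPHA, GAMMA = 0.1, 1.0  # step size and discount factor (illustrative choices)

V = [0.0] * N_STATES     # value estimates for the non-terminal states

for _ in range(5000):
    s = N_STATES // 2                        # start each episode in the middle
    done = False
    while not done:
        s_next = s + random.choice([-1, 1])  # random walk: step left or right
        if s_next < 0:                       # left terminal state, reward 0
            target, done = 0.0, True
        elif s_next >= N_STATES:             # right terminal state, reward 1
            target, done = 1.0, True
        else:                                # non-terminal step: reward 0, bootstrap
            target = GAMMA * V[s_next]
        V[s] += ALPHA * (target - V[s])      # TD(0) update toward the target
        if not done:
            s = s_next

print([round(v, 2) for v in V])  # approaches [0.17, 0.33, 0.5, 0.67, 0.83]
```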
Andrew Barto's Published Works
Each entry lists the publication year and an approximate citation count in parentheses.
- Reinforcement Learning: An Introduction (2005) (37772)
- Introduction to Reinforcement Learning (1998) (5160)
- Neuronlike adaptive elements that can solve difficult learning control problems (1983) (3372)
- Reinforcement learning (1998) (2480)
- Toward a modern theory of adaptive networks: expectation and prediction. (1981) (1495)
- Learning to Act Using Real-Time Dynamic Programming (1995) (1341)
- Recent Advances in Hierarchical Reinforcement Learning (2003) (1038)
- Handbook of Learning and Approximate Dynamic Programming (2006) (786)
- Intrinsically Motivated Reinforcement Learning (2004) (762)
- Improving Elevator Performance Using Reinforcement Learning (1995) (650)
- Time-Derivative Models of Pavlovian Reinforcement (1990) (625)
- Linear Least-Squares algorithms for temporal difference learning (2004) (620)
- Task Decomposition Through Competition in a Modular Connectionist Architecture: The What and Where Vision Tasks (1990) (615)
- Recent Advances in Hierarchical Reinforcement Learning (2003) (589)
- Adaptive Control Processes (2010) (586)
- Automatic Discovery of Subgoals in Reinforcement Learning using Diverse Density (2001) (526)
- Adaptive Critics and the Basal Ganglia (1995) (496)
- Adaptive Control of Duty Cycling in Energy-Harvesting Wireless Sensor Networks (2007) (438)
- Intrinsically Motivated Learning of Hierarchical Collections of Skills (2004) (435)
- Dimensions of Reinforcement Learning (1998) (369)
- Intrinsically Motivated Reinforcement Learning: An Evolutionary Perspective (2010) (369)
- Adaptive linear quadratic control using policy iteration (1994) (364)
- Pattern-recognizing stochastic learning automata (1985) (335)
- Robot learning from demonstration by constructing skill trees (2012) (302)
- Optimal learning: computational procedures for bayes-adaptive markov decision processes (2002) (301)
- Elevator Group Control Using Multiple Reinforcement Learning Agents (1998) (294)
- Models of the cerebellum and motor learning (1996) (284)
- Skill Discovery in Continuous Reinforcement Learning Domains using Skill Chaining (2009) (281)
- Reinforcement Learning is Direct Adaptive Optimal Control (1992) (274)
- Building Portable Options: Skill Transfer in Reinforcement Learning (2007) (269)
- Identifying useful subgoals in reinforcement learning by local graph partitioning (2005) (266)
- Reinforcement learning is direct adaptive optimal control (1991) (263)
- Learning and Sequential Decision Making (1989) (246)
- Novelty or Surprise? (2013) (245)
- Using relative novelty to identify useful temporal abstractions in reinforcement learning (2004) (227)
- Autonomous shaping: knowledge transfer in reinforcement learning (2006) (219)
- Learning by statistical cooperation of self-interested neuron-like computing elements. (1985) (214)
- Distributed motor commands in the limb premotor network (1993) (209)
- Reinforcement learning with analogue memristor arrays (2019) (205)
- Intrinsic Motivation and Reinforcement Learning (2013) (200)
- Learning Parameterized Skills (2012) (200)
- Learning grounded finite-state representations from unstructured demonstrations (2015) (195)
- Where Do Rewards Come From (2009) (182)
- A Neural Signature of Hierarchical Reinforcement Learning (2011) (180)
- Learning and generalization of complex tasks from unstructured demonstrations (2012) (180)
- Lyapunov Design for Safe Reinforcement Learning (2003) (170)
- Skill Characterization Based on Betweenness (2008) (166)
- Linear Least-Squares Algorithms for Temporal Difference Learning (2005) (152)
- Training and Tracking in Robotics (1985) (151)
- Distributed Representation of Limb Motor Programs in Arrays of Adjustable Pattern Generators (1993) (149)
- Associative search network: A reinforcement learning associative memory (1981) (148)
- A Cerebellar Model of Timing and Prediction in the Control of Reaching (1999) (146)
- Repairing Disengagement With Non-Invasive Interventions (2007) (145)
- Prediction of complex two-dimensional trajectories by a cerebellar model of smooth pursuit eye movement. (1997) (143)
- Handbook of Learning and Approximate Dynamic Programming (IEEE Press Series on Computational Intelligence) (2004) (125)
- An intrinsic reward mechanism for efficient exploration (2006) (124)
- Incremental Semantically Grounded Learning from Demonstration (2013) (121)
- Connectionist learning for control: an overview (1990) (121)
- Optimal Behavioral Hierarchy (2014) (118)
- PolicyBlocks: An Algorithm for Creating Useful Macro-Actions in Reinforcement Learning (2002) (117)
- Preventing undesirable behavior of intelligent machines (2019) (114)
- Intrinsically Motivated Hierarchical Skill Learning in Structured Environments (2010) (111)
- Transfer in Reinforcement Learning via Shared Features (2012) (108)
- SMDP Homomorphisms: An Algebraic Approach to Abstraction in Semi-Markov Decision Processes (2003) (107)
- Causal Graph Based Decomposition of Factored MDPs (2006) (107)
- Reinforcement learning control (1994) (105)
- A computational model of muscle recruitment for wrist movements. (2002) (104)
- Simulation of anticipatory responses in classical conditioning by a neuron-like adaptive element (1982) (103)
- An algebraic approach to abstraction in reinforcement learning (2004) (102)
- Constructing Skill Trees for Reinforcement Learning Agents from Demonstration Trajectories (2010) (102)
- Approximate optimal control as a model for motor learning. (2005) (99)
- Robot Weightlifting By Direct Policy Search (2001) (99)
- Efficient skill learning using abstraction selection (2009) (98)
- Learning reactive admittance control (1992) (95)
- Simulation of the classically conditioned nictitating membrane response by a neuron-like adaptive element: Response topography, neuronal firing, and interstimulus intervals (1986) (93)
- Improved Temporal Difference Methods with Linear Function Approximation (2004) (89)
- Reinforcement Learning and Its Relationship to Supervised Learning (2004) (88)
- Supervised Actor‐Critic Reinforcement Learning (2012) (87)
- Model Minimization in Hierarchical Reinforcement Learning (2002) (86)
- Autonomous discovery of temporal abstractions from interaction with an environment (2002) (83)
- Intrinsically Motivated Reinforcement Learning: A Promising Framework for Developmental Robot Learning (2005) (74)
- Sequential Decision Problems and Neural Networks (1989) (73)
- An Adaptive Robot Motivational System (2006) (71)
- Automated State Abstraction for Options using the U-Tree Algorithm (2000) (70)
- Monte Carlo Matrix Inversion and Reinforcement Learning (1993) (69)
- Connectionist learning for control (1990) (67)
- An Adaptive Sensorimotor Network Inspired by the Anatomy and Physiology (1989) (67)
- Symmetries and Model Minimization in Markov Decision Processes (2001) (66)
- Robust Reinforcement Learning in Motion Planning (1993) (63)
- Autonomous Skill Acquisition on a Mobile Manipulator (2011) (62)
- On the Computational Economics of Reinforcement Learning (1991) (58)
- Learning and Approximate Dynamic Programming: Scaling Up to the Real World (2003) (58)
- Relativized Options: Choosing the Right Transformation (2003) (57)
- A Unified View (1998) (57)
- Genetic Programming for Reward Function Search (2010) (56)
- Intrinsic motivations and open-ended development in animals, humans, and robots: an overview (2014) (56)
- Adaptive Step-Size for Online Temporal Difference Learning (2012) (53)
- Learning parameterized motor skills on a humanoid robot (2014) (49)
- Convergence of Indirect Adaptive Asynchronous Value Iteration Algorithms (1993) (49)
- Model-Based Adaptive Critic Designs (2004) (49)
- A causal approach to hierarchical decomposition of factored MDPs (2005) (48)
- Shaping as a method for accelerating reinforcement learning (1992) (48)
- Building a Basic Block Instruction Scheduler with Reinforcement Learning and Rollouts (2002) (47)
- Variable risk control via stochastic optimization (2013) (46)
- Distributed sensorimotor learning (1992) (46)
- Landmark learning: An illustration of associative search (1981) (45)
- Goal Seeking Components for Adaptive Intelligence: An Initial Assessment. (1981) (44)
- Online Bayesian changepoint detection for articulated motion models (2015) (44)
- Learning admittance mappings for force-guided assembly (1994) (43)
- Intrinsic Motivation For Reinforcement Learning Systems (2005) (42)
- Area Under Curve (2020) (42)
- Task Decomposition Through Competition in a Modular Connectionist Architecture (1990) (42)
- Cerebellar learning for control of a two-link arm in muscle space (1997) (41)
- Competence progress intrinsic motivation (2010) (41)
- Reinforcement learning, efficient coding, and the statistics of natural tasks (2015) (39)
- Reinforcement Learning and Dynamic Programming (1995) (39)
- Reinforcement learning in motor control (1998) (39)
- Reinforcement Learning in Artificial Intelligence (1997) (38)
- Enriching behavioral ecology with reinforcement learning methods (2018) (36)
- Improving Intelligent Tutoring Systems: Using Expectation Maximization to Learn Student Skill Levels (2006) (36)
- Guidance in the Use of Adaptive Critics for Control (2004) (35)
- Cerebellar control of endpoint position-a simulation model (1990) (34)
- An Actor/Critic Algorithm that is Equivalent to Q-Learning (1994) (34)
- Evaluating the Feasibility of Learning Student Models from Data (2005) (33)
- ADP: Goals, Opportunities and Principles (2004) (32)
- Explaining Temporal Differences to Create Useful Concepts for Evaluating States (1990) (32)
- Reinforcement Learning for Mixed Open-loop and Closed-loop Control (1996) (30)
- Hierarchical Decision Making (2004) (30)
- Learning Instance-Independent Value Functions to Enhance Local Search (1998) (30)
- Active Learning of Dynamic Bayesian Networks in Markov Decision Processes (2007) (30)
- Large-scale dynamic optimization using teams of reinforcement learning agents (1996) (29)
- Local Bandit Approximation for Optimal Learning Problems (1996) (28)
- Lyapunov-Constrained Action Sets for Reinforcement Learning (2001) (28)
- Conjugate Markov Decision Processes (2011) (28)
- Cortical involvement in the recruitment of wrist muscles. (2004) (27)
- Learning to reach via corrective movements (1999) (27)
- Combining Reinforcement Learning with a Local Control Algorithm (2000) (26)
- Clustering via Dirichlet Process Mixture Models for Portable Skill Discovery (2011) (26)
- Decision Tree Methods for Finding Reusable MDP Homomorphisms (2006) (26)
- Active Learning of Parameterized Skills (2014) (25)
- Motor primitive discovery (2012) (25)
- Supervised Learning Combined with an Actor-Critic Architecture (2002) (24)
- Cooperativity in Networks of Pattern Recognizing Stochastic Learning Automata (1986) (23)
- A Cortico-Cerebellar Model that Learns to Generate Distributed Motor Commands to Control a Kinematic Arm (1991) (22)
- Some Recent Applications of Reinforcement Learning (2017) (22)
- Learning dynamic arm motions for postural recovery (2011) (22)
- Behavioral building blocks for autonomous agents: description, identification, and learning (2008) (22)
- Statistical Machine Learning for Large-Scale Optimization (2000) (21)
- From Chemotaxis to cooperativity: abstract exercises in neuronal learning strategies (1989) (21)
- Behavioral Hierarchy: Exploration and Representation (2013) (21)
- Variational Bayesian Optimization for Runtime Risk-Sensitive Control (2012) (21)
- Learning at the level of synergies for a robot weightlifter (2006) (20)
- Temporal difference learning (2007) (20)
- Learning to exploit dynamics for robot motor coordination (2003) (20)
- Reinforcement learning in the real world (2004) (19)
- Effect on movement selection of an evolving sensory representation: A multiple controller model of skill acquisition (2009) (19)
- A model of cerebellar learning for control of arm movements using muscle synergies (1997) (19)
- Accelerating Reinforcement Learning through the Discovery of Useful Subgoals (2001) (18)
- Hierarchical Representations of Behavior for Efficient Creative Search (2008) (17)
- Task Decomposition Through Competition in a Modular Connectionist Architecture: The What and Where Vision Tasks (1993) (17)
- Simulation Experiments with Goal-Seeking Adaptive Elements. (1984) (17)
- Text-Based Information Retrieval Using Exponentiated Gradient Descent (1996) (17)
- Robust Reinforcement Learning for Heating, Ventilation, and Air Conditioning Control of Buildings (2004) (16)
- A Predictive Switching Model of Cerebellar Movement Control (1995) (16)
- Sensorimotor abstraction selection for efficient, autonomous robot skill acquisition (2008) (16)
- Planning and Learning (1998) (16)
- Evolution of reward functions for reinforcement learning (2011) (16)
- Apprenticeship Learning (2010) (16)
- On Ensuring that Intelligent Machines Are Well-Behaved (2017) (15)
- Chapter 2 – Reinforcement Learning (1997) (15)
- CST: Constructing Skill Trees by Demonstration (2011) (14)
- The IM-CLeVeR Project: Intrinsically Motivated Cumulative Learning Versatile Robots (2009) (14)
- Linear systems analysis of the relationship between firing of deep cerebellar neurons and the classically conditioned nictitating membrane response in rabbits (1991) (13)
- Learning to Maximize Rewards: A Review of "Reinforcement Learning: An Introduction" (2000) (13)
- CHAMP: Changepoint Detection Using Approximate Model Parameters (2014) (13)
- Game-theoretic cooperativity in networks of self-interested units (1987) (12)
- An approach to learning control surfaces by connectionist systems (1990) (12)
- DISCRETE AND CONTINUOUS MODELS (1978) (12)
- Reinforcement learning with supervision by a stable controller (2004) (11)
- Supervised Actor-Critic Reinforcement Learning (2004) (11)
- Editorial: Intrinsically Motivated Open-Ended Learning in Autonomous Robots (2020) (11)
- Deictic Option Schemas (2007) (10)
- Book Reviews (1999) (10)
- Autonomous Hierarchical Skill Acquisition in Factored MDPs (2008) (10)
- Synthesis of nonlinear control surfaces by a layered associative search network (2004) (10)
- Supervised Actor-Critic Reinforcement Learning (2007) (10)
- The Reinforcement Learning Problem (1998) (9)
- Toward Dynamic Stochastic Optimal Power Flow (2004) (9)
- Incremental Structure Learning in Factored MDPs with Continuous States and Actions (2009) (9)
- Betweenness Centrality as a Basis for Forming Skills (2007) (9)
- Heuristic Search in Infinite State Spaces Guided by Lyapunov Analysis (2001) (9)
- Lyapunov Design for Safe Reinforcement Learning Control (2002) (9)
- Paying attention to what matters: observation abstraction in partially observable environments (2010) (8)
- Optimal Control Methods for Simulating the Perception of Causality in Young Infants (2020) (8)
- Looking Back on the Actor–Critic Architecture (2021) (8)
- A causal approach to hierarchical decomposition in reinforcement learning (2006) (8)
- Learning and incremental dynamic programming (1991) (8)
- Learning Skills in Reinforcement Learning Using Relative Novelty (2005) (8)
- Attribute Selection (2010) (8)
- Cellular automata as models of natural systems (1975) (8)
- The Emergence of Multiple Movement Units in the Presence of Noise and Feedback Delay (2001) (8)
- A Dual Process Account of Coarticulation in Motor Skill Acquisition (2013) (7)
- A Computational Hypothesis for Allostasis: Delineation of Substance Dependence, Conventional Therapies, and Alternative Treatments (2013) (7)
- Hierarchical Approaches to Concurrency, Multiagency, and Partial Observability (2004) (7)
- Lyapunov methods for safe intelligent agent design (2002) (6)
- Automated Aircraft Recovery via Reinforcement Learning: Initial Experiments (1997) (6)
- Machine Learning for Subproblem Selection (2000) (6)
- A Note on Pattern Reproduction in Tessellation Structures (1978) (6)
- Neural Networks and Adaptive Control (1993) (6)
- Adaptive System (2010) (6)
- On Separating Agent Designer Goals from Agent Goals: Breaking the Preferences–Parameters Confound (2010) (5)
- Learning as hill-climbing in weight space (1998) (5)
- This Excerpt from Reinforcement Learning: Introduction, 1.2 Examples, 1.3 Elements of Reinforcement Learning (5)
- Basic-block Instruction Scheduling Using Reinforcement Learning and Rollouts (2002) (4)
- The emergence of movement units through learning with noisy efferent signals and delayed sensory feedback (2002) (4)
- Skill Chaining: Skill Discovery in Continuous Domains (2009) (4)
- Controlling a Nonlinear Spring-Mass System with a Cerebellar Model (1994) (4)
- Variable Risk Dynamic Mobile Manipulation (2012) (4)
- Backpropagation Through Time and Derivative Adaptive Critics: A Common Framework for Comparison (2004) (4)
- Adaptive Critic Based Neural Network for Control-Constrained Agile Missile (2004) (4)
- Cooperative Interaction of Self-Interested Neuron-Like Processing Units (1989) (3)
- TD-δπ: a model-free algorithm for efficient exploration (2012) (3)
- More models of the cerebellum (1996) (3)
- Learning from a Single Demonstration: Motion Planning with Skill Segmentation (2010) (3)
- Local Graph Partitioning as a Basis for Generating Temporally-Extended Actions in Reinforcement Learning (2005) (3)
- Multiobjective Control Problems by Reinforcement Learning (2004) (2)
- Motor programs and sensorimotor integration (1990) (2)
- Functional mechanisms of motor skill acquisition (2007) (2)
- Anytime Algorithm (2010) (2)
- Acquiring Transferrable Mobile Manipulation Skills (2011) (2)
- Some Learning Tasks from a Control Perspective (2018) (2)
- Control models of natural language parsing (2005) (2)
- Reinforcement Learning: Connections, Surprises, Challenges (2019) (2)
- Reinforcement Learning in Large, High‐Dimensional State Spaces (2004) (2)
- Learning Articulation Changepoint Models from Demonstration (2014) (2)
- Robot Learning : Some Recent Examples (2013) (2)
- Reinforcement Learning: Connections, Surprises, and Challenge (2019) (2)
- Errata Preface Recent Advances in Hierarchical Reinforcement Learning (2003) (1)
- Intrinsically Motivated Machines (2011) (1)
- BPTT and DAC — A Common Framework for Comparison (2012) (1)
- Learning and control in a chaotic system (1999) (1)
- Control, Optimization, Security, and Self‐healing of Benchmark Power Systems (2012) (1)
- Reinforcement and Local Search: A Case Study (1997) (1)
- Generalization and Function Approximation (1998) (1)
- A Neural Network Simulation Method Using the Fast Fourier Transform (1976) (1)
- Book Review Reinforcement Learning: an Introduction (1)
- Homomorphisms: An Algebraic Approach to Abstraction in Semi-Markov Decision Processes (2003) (1)
- Robust Reinforcement Learning Using IntegralQuadratic Constraints (2004) (1)
- Learned Subproblem Selection Techniques for Combinatorial Optimization (1999) (1)
- Attribute-Value Learning (2010) (1)
- Agent-Based Simulation Models (2010) (1)
- Solutions to Selected Problems in Reinforcement Learning: An Introduction (2008) (1)
- Motor Learning and Synaptic Plasticity in the Cerebellum: Models of the cerebellum and motor learning (1997) (1)
- Commentary on Utility and Bounds (2014) (0)
- TD-DeltaPi: A Model-Free Algorithm for Efficient Exploration (2012) (0)
- Analytical Learning (2010) (0)
- Simulation of networks using multidimensional Fast Fourier Transforms (1974) (0)
- Adaptive Networks For Sequential Decision Problems (1992) (0)
- Summary of Notation (1998) (0)
- The Project IM-CLeVeR – Intrinsically Motivated Cumulative Learning Versatile Robots: A Toolbox for Research on Intrinsic Motivations and Cumulative Learning (2013) (0)
- Average-Cost Neuro-Dynamic Programming (2010) (0)
- Sequential Decision Problems and Neural Networks (1989) (0)
- Journal of Cognitive Neuroscience 11:1 (1999) (0)
- Solutions to Exercises in Reinforcement Learning (2017) (0)
- Near‐Optimal Control Via Reinforcement Learning and Hybridization (2012) (0)
- Adaptive Real-Time Dynamic Programming (2017) (0)
- Average-Cost Optimization (2010) (0)
- Control, Optimization, Security, and Self-healing of Benchmark Power Systems (2004) (0)
- Imitation Learning in The Game of Go with Joseki Options (2010) (0)
- Structurally Invariant Linear Models of Structurally Varying Linear Systems (1978) (0)
- Main Results Accomplished by the EU-Funded Project IM-CLeVeR – Intrinsically Motivated Cumulative Learning Versatile Robots (2013) (0)
- Adaptive Neural Network Architecture. (1987) (0)
- Proceedings of the IJCNN Winter Meeting, IEEE, 1990, Washington, DC, USA (1990) (0)
- Average-Payoff Reinforcement Learning (2010) (0)
- Reinforcement Learning with Stability Guarantees (2000) (0)
- Approximate Dynamic Programming (2011) (0)
- 21. Helicopter Flight Control Using Direct Neural Dynamic Programming (2012) (0)
- Remediating disengagement with non-invasive interventions (0)
- Designing Adaptive Sensing Policies for Meteorological Phenomena via Spectral Analysis of Radar Images (2012) (0)
- Studies of mind and brain: Stephen Grossberg, Boston: D. Reidel Publishing Company, 1982, xvii+662 pp, $55.00 (cloth), $24.00 (paper) (1984) (0)
- Near-Optimal Control Through Reinforcement Learning and Hybridization (2004) (0)
- Elementary Solution Methods (1998) (0)
- Encoding of Movement Dynamics by Purkinje Cell Simple Spike Activity During Fast Arm Movements Under Resistive and Assistive Force Fields (2008) (0)
- 10. Approximate Dynamic Programming for High-Dimensional Resource Allocation Problems (2012) (0)
- Gradient Ascent Critic Optimization (2010) (0)
- Reinforcement Learning and Local Search: A Case Study (1997) (0)
- 6. The Linear Programming Approach to Approximate Dynamic Programming (2012) (0)
- Reinforcement Learning with Stability Guarantees (2009) (0)
- Explaining Temporal Differences to Create Useful Concepts for Evaluating States (1990) (0)
- Monte Carlo Methods (1998) (0)
- Multi-Agent Reinforcement Learning and Adaptive Neural Networks. (1996) (0)
- Chapter 12 Time-Derivative Models of Pavlovian Reinforcement (1990) (0)
- Where Do Rewards Come From? (2010) (0)
- Toward the Autonomous Acquisition of Robot Skill Hierarchies (2009) (0)
- Learning as Hillclimbing in Weight Space (2020) (0)
- Reinforcement learning with analogue memristor arrays (2019) (0)
- Multilayer Networks of Self-Interested Adaptive Units. (1987) (0)
- Fagg, Barto & Houk: Learning to Reach via Corrective Movements; Tenth Yale Workshop on Adaptive and Learning Systems (1998) (0)
- Transfer in Reinforcement Learning via Shared Features (2012) (0)
- Absolute Error Loss (2010) (0)
- Associative Bandit Problem (2010) (0)
- Statistical Machine Learning for Large-Scale Optimization (0)
- Stochastic Scheduling and Planning Using Reinforcement Learning (2000) (0)
Other Resources About Andrew Barto
What Schools Are Affiliated With Andrew Barto?
Andrew Barto is affiliated with the following schools: