Leslie P. Kaelbling
American roboticist
Leslie P. Kaelbling's AcademicInfluence.com Rankings
Computer Science
Leslie P. Kaelbling's Degrees
- PhD, Computer Science, Stanford University
Why Is Leslie P. Kaelbling Influential?
According to Wikipedia, Leslie Pack Kaelbling is an American roboticist and the Panasonic Professor of Computer Science and Engineering at the Massachusetts Institute of Technology. She is widely recognized for adapting partially observable Markov decision processes from operations research for application in artificial intelligence and robotics. Kaelbling received the IJCAI Computers and Thought Award in 1997 for applying reinforcement learning to embedded control systems and developing programming tools for robot navigation. In 2000, she was elected as a Fellow of the Association for the Advancement of Artificial Intelligence.
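The POMDP work mentioned above centers on maintaining a probability distribution (a "belief") over hidden states. As a minimal illustration, not drawn from any specific paper, here is the standard Bayesian belief update b'(s') ∝ O(o | s', a) · Σ_s T(s' | s, a) · b(s) on a hypothetical two-room robot-localization toy problem; all names and numbers are invented for the sketch.

```python
def belief_update(b, T, O, a, o):
    """One POMDP belief update.

    b: dict mapping state -> probability (current belief)
    T: T[a][s][s2] = P(s2 | s, a), transition model
    O: O[a][s2][o] = P(o | s2, a), observation model
    """
    new_b = {}
    for s2 in b:
        # Prediction step: push the belief through the transition model.
        pred = sum(T[a][s][s2] * b[s] for s in b)
        # Correction step: weight by the likelihood of the observation.
        new_b[s2] = O[a][s2][o] * pred
    norm = sum(new_b.values())
    return {s: p / norm for s, p in new_b.items()}

# Hypothetical two-state world: the robot is in room "L" or "R",
# takes a "stay" action, and gets a noisy room sensor reading.
T = {"stay": {"L": {"L": 0.9, "R": 0.1}, "R": {"L": 0.1, "R": 0.9}}}
O = {"stay": {"L": {"see_L": 0.8, "see_R": 0.2},
              "R": {"see_L": 0.3, "see_R": 0.7}}}

b = {"L": 0.5, "R": 0.5}          # start maximally uncertain
b = belief_update(b, T, O, "stay", "see_L")
# b["L"] ≈ 0.727: seeing "see_L" shifts belief toward room L.
```

Planning then happens over these beliefs rather than over raw states, which is what makes exact POMDP solution expensive and motivates the approximate and hierarchical methods in the list below.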
Leslie P. Kaelbling's Published Works
Published Works (with publication year and citation count)
- Reinforcement Learning: A Survey (1996) (7965)
- Planning and Acting in Partially Observable Stochastic Domains (1998) (4258)
- Learning in embedded systems (1993) (805)
- Learning Policies for Partially Observable Environments: Scaling Up (1997) (777)
- Acting Optimally in Partially Observable Stochastic Domains (1994) (747)
- Acting under uncertainty: discrete Bayesian models for mobile-robot navigation (1996) (584)
- On the Complexity of Solving Markov Decision Problems (1995) (569)
- Hierarchical task and motion planning in the now (2011) (478)
- Exact and approximate algorithms for partially observable markov decision processes (1998) (442)
- Effective reinforcement learning for mobile robots (2002) (426)
- The Synthesis of Digital Machines With Provable Epistemic Properties (1986) (382)
- An Architecture for Intelligent Reactive Systems (1987) (367)
- Belief space planning assuming maximum likelihood observations (2010) (354)
- Integrated task and motion planning in belief space (2013) (352)
- Generalization in Deep Learning (2017) (346)
- Learning to Achieve Goals (1993) (321)
- Hierarchical Solution of Markov Decision Processes using Macro-actions (1998) (320)
- Learning to Cooperate via Policy Search (2000) (313)
- Action and planning in embedded agents (1990) (295)
- Input Generalization in Delayed Reinforcement Learning: An Algorithm and Performance Comparisons (1991) (292)
- Planning under Time Constraints in Stochastic Domains (1993) (274)
- Practical Reinforcement Learning in Continuous Spaces (2000) (262)
- Planning With Deadlines in Stochastic Domains (1993) (254)
- Hierarchical Learning in Stochastic Domains: Preliminary Results (1993) (235)
- Learning Finite-State Controllers for Partially Observable Environments (1999) (233)
- Solving Very Large Weakly Coupled Markov Decision Processes (1998) (232)
- Learning Symbolic Models of Stochastic Domains (2007) (226)
- Learning Topological Maps with Weak Local Odometric Information (1997) (219)
- Lifted Probabilistic Inference with Counting Formulas (2008) (218)
- LQR-RRT*: Optimal sampling-based motion planning with automatically derived extension heuristics (2012) (216)
- From Skills to Symbols: Learning Symbolic Representations for Abstract High-Level Planning (2018) (207)
- A Situated View of Representation and Control (1995) (205)
- Solving POMDPs by Searching the Space of Finite Policies (1999) (204)
- Integrated Task and Motion Planning (2020) (167)
- Grasping POMDPs (2007) (167)
- Goals as Parallel Program Specifications (1988) (158)
- A constraint-based method for solving sequential manipulation planning problems (2014) (152)
- All learning is Local: Multi-agent Learning in Global Reward Games (2003) (148)
- Collision Avoidance for Unmanned Aircraft using Markov Decision Processes (2010) (142)
- FFRob: An Efficient Heuristic for Task and Motion Planning (2015) (133)
- Hierarchical Planning in the Now (2010) (130)
- Ecological Robotics (1998) (127)
- Learning Policies with External Memory (1999) (122)
- A Dynamical Model of Visually-Guided Steering, Obstacle Avoidance, and Route Selection (2003) (117)
- FFRob: Leveraging symbolic planning for efficient task and motion planning (2016) (111)
- Mobilized ad-hoc networks: a reinforcement learning approach (2004) (107)
- Residual Policy Learning (2018) (105)
- Modular meta-learning (2018) (98)
- Planning for decentralized control of multiple robots under uncertainty (2014) (97)
- Bayesian Optimization with Exponential Convergence (2015) (96)
- PDDLStream: Integrating Symbolic Planners and Blackbox Samplers via Optimistic Adaptive Planning (2018) (96)
- Constructing Symbolic Representations for High-Level Planning (2014) (95)
- Approximate Planning in POMDPs with Macro-Actions (2003) (90)
- Task-Driven Tactile Exploration (2010) (88)
- Learning Probabilistic Relational Planning Rules (2004) (86)
- Learning Planning Rules in Noisy Stochastic Worlds (2005) (84)
- Representing hierarchical POMDPs as DBNs for multi-scale robot localization (2004) (80)
- Transfer Learning with an Ensemble of Background Tasks (2005) (80)
- Efficient dynamic-programming updates in partially observable Markov decision processes (1995) (79)
- Augmenting Physical Simulators with Stochastic Neural Networks: Case Study of Planar Pushing and Bouncing (2018) (79)
- Sampling-based methods for factored task and motion planning (2017) (78)
- Policy search for multi-robot coordination under uncertainty (2015) (74)
- Robust grasping under object pose uncertainty (2011) (73)
- A hierarchical approach to manipulation with diverse actions (2013) (69)
- Planning with macro-actions in decentralized POMDPs (2014) (69)
- Learning to guide task and motion planning using score-space representation (2017) (68)
- Algorithms for the multi-armed bandit problem (2000) (66)
- Online Replanning in Belief Space for Partially Observable Task and Motion Problems (2019) (62)
- Active Model Learning and Diverse Action Sampling for Task and Motion Planning (2018) (61)
- Inferring finite automata with stochastic output functions and an application to map learning (1992) (60)
- POMCoP: Belief Space Planning for Sidekicks in Cooperative Games (2012) (60)
- Manipulation with Multiple Action Types (2012) (58)
- Efficient Planning in Non-Gaussian Belief Spaces and Its Application to Robot Grasping (2011) (57)
- Differentiable Algorithm Networks for Composable Robot Learning (2019) (57)
- A situated-automata approach to the design of embedded agents (1991) (54)
- Influence-Based Abstraction for Multiagent Systems (2012) (53)
- Envelope-based Planning in Relational MDPs (2003) (52)
- CAPIR: Collaborative Action Planning with Intention Recognition (2011) (52)
- Playing is believing: The role of beliefs in multi-agent learning (2001) (50)
- Graph Element Networks: adaptive, structured computation and memory (2019) (50)
- Symbol Acquisition for Probabilistic High-Level Planning (2015) (50)
- Continuous-State POMDPs with Hybrid Dynamics (2008) (49)
- Provably safe robot navigation with obstacle uncertainty (2017) (49)
- Manipulation-based active search for occluded objects (2013) (49)
- Automated Design of Adaptive Controllers for Modular Robots using Reinforcement Learning (2008) (49)
- Modeling and Planning with Macro-Actions in Decentralized POMDPs (2019) (48)
- Learning compositional models of robot skills for task and motion planning (2020) (48)
- Multi-Value-Functions: Efficient Automatic Action Hierarchies for Multiple Goal MDPs (1999) (47)
- Rex: A Symbolic Language for the Design and Parallel Implementation of Embedded Systems (1987) (46)
- Effect of Depth and Width on Local Minima in Deep Learning (2018) (46)
- Efficient Bayesian Task-Level Transfer Learning (2007) (45)
- Tracking 3-D Rotations with the Quaternion Bingham Filter (2013) (45)
- Associative Reinforcement Learning: Functions in k-DNF (1994) (45)
- Backward-forward search for manipulation planning (2015) (45)
- Recent Advances in Reinforcement Learning (1996) (45)
- Virtual Training for Multi-View Object Class Recognition (2007) (45)
- Unifying perception, estimation and action for mobile manipulation via belief space planning (2012) (44)
- Tracking the spin on a ping pong ball with the quaternion Bingham filter (2014) (43)
- Hierarchical planning for multi-contact non-prehensile manipulation (2015) (42)
- Learning models for robot navigation (1999) (41)
- Elimination of All Bad Local Minima in Deep Learning (2019) (41)
- Data association for semantic world modeling from partial views (2015) (40)
- Adaptive Importance Sampling for Estimation in Structured Domains (2000) (39)
- Partially Observable Markov Decision Processes for Artificial Intelligence (1995) (38)
- Neural Relational Inference with Fast Modular Meta-learning (2019) (38)
- Learning composable models of parameterized skills (2017) (37)
- Bayesian Policy Search with Policy Priors (2011) (37)
- DetH*: Approximate Hierarchical Solution of Large Markov Decision Processes (2011) (37)
- The Thing that we Tried Didn't Work very Well: Deictic Representation in Reinforcement Learning (2002) (37)
- Model-Based Optimization of Airborne Collision Avoidance Logic (2010) (37)
- Meta-learning curiosity algorithms (2020) (36)
- Planning with Learned Object Importance in Large Problem Instances using Graph Neural Networks (2020) (35)
- Learning Dynamics: System Identification for Perceptually Challenged Agents (1995) (34)
- Activity Recognition from Physiological Data using Conditional Random Fields (2006) (34)
- Combining Physical Simulators and Object-Based Networks for Control (2019) (34)
- Learning Geometrically-Constrained Hidden Markov Models for Robot Navigation: Bridging the Topological-Geometrical Gap (2011) (34)
- Nearly deterministic abstractions of Markov decision processes (2002) (34)
- An Introduction to Reinforcement Learning (1995) (32)
- On reinforcement learning for robots (1996) (32)
- Multi-Agent Filtering with Infinitely Nested Beliefs (2008) (31)
- Accelerating EM: An Empirical Study (1999) (31)
- Regret bounds for meta Bayesian optimization with an unknown Gaussian process prior (2018) (31)
- Interactive Bayesian identification of kinematic mechanisms (2014) (30)
- Non-Gaussian belief space planning: Correctness and complexity (2012) (30)
- Ecological Robotics: Controlling Behavior with Optical Flow (1995) (30)
- Toward Hierarchical Decomposition for Planning in Uncertain Environments (2001) (29)
- The foundation of efficient robot learning (2020) (29)
- Learning Symbolic Operators for Task and Motion Planning (2021) (28)
- Robust Belief-Based Execution of Manipulation Programs (2008) (27)
- State-based Classification of Finger Gestures from Electromyographic Signals (2000) (27)
- Learning to Rank for Synthesizing Planning Heuristics (2016) (27)
- Guiding Search in Continuous State-Action Spaces by Learning an Action Sampler From Off-Target Search Experience (2018) (27)
- Foresight and reconsideration in hierarchical planning and execution (2013) (27)
- Monte Carlo Tree Search in Continuous Spaces Using Voronoi Optimistic Optimization with Regret Bounds (2020) (27)
- Deliberation Scheduling for Time-Critical Sequential Decision Making (1993) (26)
- Logical Particle Filtering (2007) (24)
- Associative reinforcement learning: A generate and test algorithm (2004) (24)
- Inferring Finite Automata with Stochastic Output Functions and an Application to Map Learning (1992) (24)
- The NSF Workshop on Reinforcement Learning: Summary and Observations (1996) (24)
- Sampling Methods for Action Selection in Influence Diagrams (2000) (23)
- Efficient Distributed Reinforcement Learning through Agreement (2008) (22)
- Simultaneous Localization and Grasping as a Belief Space Control Problem (2011) (22)
- Selecting Representative Examples for Program Synthesis (2017) (22)
- Learning Quickly to Plan Quickly Using Modular Meta-Learning (2018) (22)
- Pre-image Backchaining in Belief Space for Mobile Manipulation (2011) (22)
- Hedged learning: regret-minimization with learning experts (2005) (21)
- Foundations of learning in autonomous agents (1991) (21)
- Reinforcement learning for robot control (2002) (21)
- Hierarchical Solution of Large Markov Decision Processes (2010) (21)
- Few-Shot Bayesian Imitation Learning with Logical Program Policies (2019) (21)
- Learning Static Object Segmentation from Motion Segmentation (2005) (20)
- Off-Policy Policy Search (2007) (20)
- Uncertainty in Graph-Based Map Learning (1993) (20)
- Rex Programmer's Manual (1988) (19)
- Toward Approximate Planning in Very Large Stochastic Domains (1994) (19)
- Learning distributed control for modular robots (2004) (19)
- Not seeing is also believing: Combining object and metric spatial information (2014) (19)
- Omnipush: accurate, diverse, real-world dataset of pushing dynamics with RGB-D video (2019) (19)
- Learning sparse relational transition models (2018) (18)
- Learning Topological Maps from Weak Odometric Information (1997) (18)
- Object placement as inverse motion planning (2013) (17)
- Heuristic search of multiagent influence space (2012) (17)
- Integrated Robot Task and Motion Planning in the Now (2012) (17)
- Planning in partially-observable switching-mode continuous domains (2010) (17)
- Learning Neuro-Symbolic Relational Transition Models for Bilevel Planning (2021) (17)
- A Sufficient Statistic for Influence in Structured Multiagent Environments (2019) (16)
- STRIPStream: Integrating Symbolic Planners and Blackbox Samplers (2018) (16)
- Learning Grammatical Models for Object Recognition (2008) (16)
- CAMPs: Learning Context-Specific Abstractions for Efficient Planning in Factored MDPs (2020) (15)
- Learning with Deictic Representation (2002) (15)
- Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time (2020) (15)
- Focused model-learning and planning for non-Gaussian continuous state-action systems (2016) (15)
- Practical Reinforcement Learning (2000) (15)
- Learning Probabilistic Relational Dynamics for Multiple Tasks (2007) (14)
- A Formal Framework for Learning in Embedded Systems (1989) (14)
- Adversarial Actor-Critic Method for Task and Motion Planning Problems Using Planning Experience (2019) (13)
- Collision-free state estimation (2012) (13)
- Integrating Human-Provided Information into Belief State Representation Using Dynamic Factorization (2018) (12)
- GLIB: Efficient Exploration for Relational Model-Based Reinforcement Learning via Goal-Literal Babbling (2020) (12)
- Learning Hidden Markov Models with Geometric Information (1997) (11)
- Long-Horizon Manipulation of Unknown Objects via Task and Motion Planning with Estimated Affordances (2021) (11)
- Action-Space Partitioning for Planning (2007) (11)
- Learning in Worlds with Objects (2017) (11)
- Implicit belief-space pre-images for hierarchical planning and execution (2016) (11)
- Inventing Relational State and Action Abstractions for Effective and Efficient Bilevel Planning (2022) (11)
- Searching for physical objects in partially known environments (2016) (11)
- Reinforcement Learning for Classical Planning: Viewing Heuristics as Dense Reward Generators (2021) (11)
- The National Science Foundation Workshop on Reinforcement Learning (1996) (11)
- Discovering State and Action Abstractions for Generalized Task and Motion Planning (2021) (10)
- Segmentation According to Natural Examples: Learning Static Segmentation from Motion Segmentation (2009) (10)
- A hypothesis-based algorithm for planning and control in non-Gaussian belief spaces (2011) (10)
- Associative reinforcement learning: Functions in k-DNF (2004) (10)
- Heading in the Right Direction (1998) (10)
- Active Learning of Abstract Plan Feasibility (2021) (10)
- Reliably Arranging Objects in Uncertain Domains (2018) (10)
- Predicting Partial Paths from Planning Problem Parameters (2007) (9)
- A large-scale benchmark for few-shot program induction and synthesis (2021) (9)
- Model-based Monitoring, Diagnosis and Control (2003) (9)
- Every Local Minimum Value Is the Global Minimum Value of Induced Model in Nonconvex Machine Learning (2019) (9)
- Learning Neuro-Symbolic Skills for Bilevel Planning (2022) (9)
- Spatial and Temporal Abstractions in POMDPs Applied to Robot Navigation (2005) (9)
- Integrated Agent Architectures: Benchmark Tasks and Evaluation Metrics (1990) (9)
- Integrating Planning and Reactive Control (1989) (8)
- STRIPS Planning in Infinite Domains (2017) (8)
- Learning Functions in k-DNF from Reinforcement (1990) (8)
- Optimization in the now: Dynamic peephole optimization for hierarchical planning (2013) (8)
- Specifying Complex Behavior for Computer Agents (1991) (8)
- Domain and Plan Representation for Task and Motion Planning in Uncertain Domains (2011) (8)
- Decidability of Semi-Holonomic Prehensile Task and Motion Planning (2016) (8)
- Class-specific grasping of 3D objects from a single 2D image (2010) (8)
- Approaches to macro decompositions of large Markov decision process planning problems (2002) (7)
- Learning Hierarchical Structure in Policies (2007) (7)
- Backward-Forward Search for Manipulation Planning Completeness Argument (2015) (7)
- Probabilistic Planning for Decentralized Multi-Robot Systems (2015) (7)
- Relational envelope-based planning (2008) (7)
- Symbol Acquisition for Task-Level Planning (2013) (7)
- On Scalability Issues in Reinforcement Learning for Self-Reconfiguring Modular Robots (2006) (6)
- Learning to Acquire Information (2017) (6)
- Compiling Operator Descriptions into Reactive Strategies using Goal Regression (1991) (6)
- Object-Based World Modeling in Semi-Static Environments with Dependent Dirichlet Process Mixtures (2015) (6)
- Learning Geometrically-Constrained Hidden Markov Models for Robot Navigation: Bridging the Geometrical-Topological Gap (2002) (6)
- Constructing Semantic World Models from Partial Views (2013) (6)
- Two Algorithms for Transfer Learning (6)
- Notes on methods based on maximum-likelihood estimation for learning the parameters of the mixture of Gaussians model (1999) (6)
- Learning with Deictic Representations (2001) (6)
- OmniPush: accurate, diverse, real-world dataset of pushing dynamics with RGBD images (2018) (6)
- Heuristic Search for Task and Motion Planning (2014) (5)
- IJCAI-05, Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence, Edinburgh, Scotland, UK, July 30 - August 5, 2005 (2005) (5)
- Shape-Based Transfer of Generic Skills (2021) (5)
- Learning to generate novel views of objects for class recognition (2009) (5)
- Integrated robot task and motion planning in belief space (2012) (5)
- Towards Understanding Generalization via Analytical Learning Theory (2018) (5)
- Representation, learning, and planning algorithms for geometric task and motion planning (2021) (5)
- Allocation of Air Resources Against an Intelligent Adversary (2007) (5)
- Few-Shot Bayesian Imitation Learning with Logic over Programs (2019) (5)
- SE(3)-Equivariant Relational Rearrangement with Neural Descriptor Fields (2022) (5)
- Multi-Agent Learning in Mobilized Ad-Hoc Networks (2004) (5)
- Adaptive Envelope MDPs for Relational Equivalence-based Planning (2008) (5)
- A Framework for Reinforcement Learning on Real Robots (1998) (4)
- Algorithms for Partially Observable Markov Decision Processes (1994) (4)
- Reasoning about Large Populations with Lifted Probabilistic Inference (2007) (4)
- Learning When to Quit: Meta-Reasoning for Motion Planning (2021) (4)
- Finding aircraft collision-avoidance strategies using policy search methods (2009) (4)
- Planning with Macro-Actions in Decentralized POMDPs (2014) (4)
- Associative methods in reinforcement learning: an empirical study (1994) (3)
- Optimizing a Start-Stop Controller Using Policy Search (2014) (3)
- Grasping POMDPs: Theory and Experiments (2007) (3)
- Time-Critical Planning and Scheduling Research at Brown University (1994) (3)
- Fragmented Spatial Maps from Surprisal: State Abstraction and Efficient Planning (2022) (3)
- Decentralized Decision-Making Under Uncertainty for Multi-Robot Teams (2014) (3)
- Look before you sweep: Visibility-aware motion planning (2018) (3)
- Optimizing Cascade Classifiers (3)
- Combining dynamic abstractions in large MDPs (2004) (3)
- Guiding the search in continuous state-action spaces by learning an action sampling distribution from off-target samples (2017) (3)
- Predicate Invention for Bilevel Planning (2022) (3)
- Learning What Information to Give in Partially Observed Domains (2018) (3)
- Holonomic planar motion from non-holonomic driving mechanisms: the front-point method (2002) (3)
- An Efficient Algorithm for Dynamic Programming in Partially Observable Markov Decision Processes (1995) (3)
- Generalizing Over Uncertain Dynamics for Online Trajectory Generation (2015) (3)
- Discrete Bayesian Uncertainty Models for Mobile-Robot Navigation (1996) (3)
- Planning and Control under Uncertainty for the PR2 (2011) (3)
- Planning under Time Constraints in Stochastic Domains (1993) (2)
- Learning Planning Rules in Stochastic Worlds (2005) (2)
- Computing action equivalences for planning under time-constraints (2006) (2)
- FFRob: An efficient heuristic for task and motion planning (2014) (2)
- Assistant Agents for Sequential Planning Problems (2012) (2)
- Collected notes from the Benchmarks and Metrics Workshop (1991) (2)
- Adaptable replanning with compressed linear action models for learning from demonstrations (2018) (2)
- Automatic Class-Specific 3D Reconstruction from a Single Image (2009) (2)
- Learning to select examples for program synthesis (2017) (2)
- A Bibliography of Work Related to Reinforcement Learning (1994) (2)
- Learning Three-Dimensional Shape Models for Sketch Recognition (2005) (2)
- Learning object segmentation from video data (2003) (2)
- Temporal and Object Quantification Networks (2021) (2)
- Local Neural Descriptor Fields: Locally Conditioned Object Representations for Manipulation (2023) (2)
- Action and Planning in Embedded Agents (1990) (2)
- GLIB: Exploration via Goal-Literal Babbling for Lifted Operator Learning (2020) (2)
- Associative Reinforcement Learning: A Generate and Test Algorithm (1994) (2)
- Every Local Minimum is a Global Minimum of an Induced Model (2019) (1)
- Adversarially-learned Inference via an Ensemble of Discrete Undirected Graphical Models (2020) (1)
- Learning Hidden Markov Models with Geometric Information (1997) (1)
- Fragmented Spatial Maps: State Abstraction and Efficient Planning from Surprisal (2021) (1)
- Reports of the AAAI 2010 Conference Workshops (2010) (1)
- Modular meta-learning in abstract graph networks for combinatorial generalization (2018) (1)
- Specifying and achieving goals in open uncertain robot-manipulation domains (2021) (1)
- Planning Robust Strategies for Constructing Multi-object Arrangements (2017) (1)
- Intelligent Robots in the Real World (1989) (1)
- The Synthesis of Intelligent Real-Time Systems (1990) (1)
- 6.01 Introduction to Electrical Engineering and Computer Science I, Fall 2009 (2009) (1)
- Instructions for Formatting JMLR Articles (2000) (1)
- ATC-356 Unmanned Aircraft Collision Avoidance Using Partially Observable Markov Decision Processes (2009) (1)
- Finding Frequent Entities in Continuous Data (2018) (1)
- Visual Prediction of Priors for Articulated Object Interaction (2020) (1)
- Learning Skill Hierarchies from Predicate Descriptions and Self-Supervision (2019) (1)
- Learning as an Increase in Knowledge (1987) (1)
- Artificial intelligence and robotics (1988) (1)
- Time-Critical Planning and Scheduling Research at Brown (2008) (0)
- Learning Image Segmentations from Experience (2001) (0)
- Learning Action Maps with State (2008) (0)
- Technical perspective: New bar set for intelligent vehicles (2010) (0)
- A Necessary Condition and Elimination of Local Minima for Deep Neural Networks Appendix (2020) (0)
- Integrating Planning and Reactive Control (N90-29077) (0)
- Reinforcement Learning for Planning and Control (2008) (0)
- Learning Operators with Ignore Effects for Bilevel Planning in Continuous Domains (2022) (0)
- 6.034 Artificial Intelligence, Spring 2003 (2003) (0)
- Planning With Deadlines in Stochastic Domains (1993) (0)
- Acquiring and Exploiting Rich Causal Models for Robust Decision Making (2012) (0)
- Appendix for Monte Carlo Tree Search in high-dimensional continuous spaces using Voronoi optimistic optimization with regret bounds (2020) (0)
- Program committee (2022) (0)
- Scaling Techniques for Large Markov Decision Process Planning Problems (2001) (0)
- Learning Probabilistic Rules from Experience (2000) (0)
- Learning Object-Based State Estimators for Household Robots (2020) (0)
- Search for Multi-Robot Coordination under Uncertainty (2016) (0)
- Intelligent Robots in an Uncertain World (2017) (0)
- Planning under Time Constraints in Stochastic Domains (submitted to Artificial Intelligence special issue on Planning and Scheduling) (1993) (0)
- 1992 AAAI Robot Exhibition and Competition: Background and Planning (0)
- PDSketch: Integrated Planning Domain Programming and Learning (2023) (0)
- Computing action equivalences for planning (2006) (0)
- The Importance of Being Adaptable (2018) (0)
- Representing hierarchical POMDPs as DBNs for multi-scale map learning (2003) (0)
- Getting Reinforcement Learning to Work on Real Robots (2005) (0)
- Learning object boundary detection from motion data (2003) (0)
- Overcoming the Pitfalls of Prediction Error in Operator Learning for Bilevel Planning (2022) (0)
- Short-term Research Priorities: Optimizing AI's Economic Impact (0)
- Adaptive Intelligent Mobile Robots (0)
- Planning under Uncertainty in Complex Structured Environments (PhD dissertation, Stanford University) (2003) (0)
- Task-Directed Exploration in Continuous POMDPs for Robotic Manipulation of Articulated Objects (2022) (0)
- Learning Online Data Association (2020) (0)
- Learning structured transition models for multi-object manipulation (2018) (0)
- Visibility-Aware Navigation Among Movable Obstacles (2022) (0)
- Representation Discovery in Non-Parametric Reinforcement Learning by Dawit Zewdie (2014) (0)
- Finding Good Policies for Large Domains (2002) (0)
- Heuristic Search of Multiagent Influence Space (2011) (0)
- Learning Boolean Functions in k-DNF (2008) (0)
- "hierarchical Learning in Stochastic Domains" Hierarchical Learning in Stochastic Domains (2009) (0)
- Learning and intelligent Agents (1994) (0)
- Learning to Plan with Optimistic Action Models (2022) (0)
- Interval Estimation Method (2008) (0)
- Time-Critical Planning and Scheduling in Stochastic Domains (Extended Abstract) (2007) (0)
- Multi-Resolution Planning in Large Uncertain Domains (2005) (0)
- Learning Rational Subgoals from Demonstrations and Instructions (2023) (0)
- Sparse and Local Networks for Hypergraph Reasoning (2023) (0)
- Automated Quantification Of Blood Microvessels In Hematoxylin And Eosin Whole Slide Images (2021) (0)
- PG3: Policy-Guided Planning for Generalized Policy Generation (2022) (0)
- Introduction (1996) (0)
- A Belief-Space Approach to Integrated Intelligence - Research Area 10.3: Intelligent Networks (2017) (0)
- Revised submission to Artificial Intelligence special issue on Planning and Scheduling (1995) (0)
- Effective, interpretable algorithms for curiosity automatically discovered by evolutionary search (2020) (0)
- Effective Bayesian Transfer Learning (2010) (0)
- Learning Rich, Tractable Models of the Real World (1999) (0)
- Intelligent Interaction with the Real World (2010) (0)
- Action and Planning in Embedded Agents (2013) (0)
- Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science Proposal for Thesis Research in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy (2008) (0)
- HYPERGRAPH REASONING NETWORKS (2021) (0)
- Experiments in Complex Domains (2008) (0)
- Introduction to Computer Science and Programming (PDF) (2004) (0)
- Weakly Supervised Global-Local Feature Learning for Cervical Cytology Image Analysis (2021) (0)
- Report of the 1996 Workshop on Reinforcement Learning Sponsored by the National Science Foundation Preface and Acknowledgements (2007) (0)
- Approximate Optimal Control with Markov Decision Processes for Autonomous Artificial Intelligence (0)
- A Generate-and-Test Algorithm (2008) (0)
- Adversarial actor-critic algorithm for task and motion planning problems using planning experience (2018) (0)
- PDSketch: Integrated Domain Programming, Learning, and Planning (2022) (0)
- Statistics in GTRL (2008) (0)
- Combining Physical Simulators and Object-Based Networks for Prediction and Control (2018) (0)
- On the Expressiveness and Generalization of Hypergraph Neural Networks (2023) (0)
- Acting to gain information (1993) (0)
- Statistical Relational Artificial Intelligence, Papers from the 2010 AAAI Workshop, Atlanta, Georgia, USA, July 12, 2010 (2010) (0)
- Intelligence in the Now: Robust Intelligence in Complex Domains (2015) (0)
- Automatic Synthesis of Rules for Planning in Belief Space (2013) (0)
- Spatial and Temporal Abstractions in POMDPS : Learning and Planning (2004) (0)
- Probabilistic Planning for Multi-Robot Systems (0)
- Planning with Probabilistic Rules in a Relational World (2002) (0)
- Weighted geometric grammars for object detection in context (2010) (0)
- Planning to Give Information in Partially Observed Domains with a Learned Weighted Entropy Model (2018) (0)
- Learning Hierarchies in Stochastic Domains (1994) (0)
- Experience with web-based computer science education (2002) (0)
- Simplifying Boolean Expressions in GTRL (2008) (0)
- Interval Programming : A Multiple Criteria Decision Making Model for Autonomous Vehicle Control (2001) (0)
- Discovering Representative Examples for Program Synthesis (2018) (0)
- Doing for Our Robots What Nature Did for Us (2019) (0)
- Fully Persistent Spatial Data Structures for Efficient Queries in Path-Dependent Motion Planning Applications (2022) (0)
- Visually Guided Topological Mapping for Mobile Robots (2001) (0)
- Combining dynamic abstractions in very large MDPs (2004) (0)
- The Witness Algorithm: Solving Partially Observable Markov Decision Processes (1994) (0)
- Distributed Learning for Controlling Modular Robots (2004) (0)
- Learning in Average Reward Stochastic Games: A Reinforcement Learning (Nash-R) Algorithm for Average Reward Irreducible Stochastic Games (2013) (0)
- IJCAI Organization (2009) (0)