Nicolas Manfred Otto Heess
#161,210 Most Influential Person Now
Nicolas Manfred Otto Heess's AcademicInfluence.com Rankings
Nicolas Manfred Otto Heess: Computer Science Degrees
Computer Science
#9287
World Rank
#9754
Historical Rank
Machine Learning
#4027
World Rank
#4075
Historical Rank
Artificial Intelligence
#4364
World Rank
#4424
Historical Rank
Database
#6259
World Rank
#6489
Historical Rank

Why Is Nicolas Manfred Otto Heess Influential?
Nicolas Manfred Otto Heess's Published Works
[Chart: citations per year to any of this author's works, and total citations to the works the author published in a given year, highlighting the author's most important publications]
Published Works
- Continuous control with deep reinforcement learning (2015) (8827)
- Recurrent Models of Visual Attention (2014) (2937)
- Deterministic Policy Gradient Algorithms (2014) (2873)
- Relational inductive biases, deep learning, and graph networks (2018) (2037)
- Emergence of Locomotion Behaviours in Rich Environments (2017) (737)
- FeUdal Networks for Hierarchical Reinforcement Learning (2017) (686)
- Sample Efficient Actor-Critic with Experience Replay (2016) (645)
- Learning Continuous Control Policies by Stochastic Value Gradients (2015) (478)
- Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Rewards (2017) (475)
- Sim-to-Real Robot Learning from Pixels with Progressive Nets (2016) (460)
- Graph networks as learnable physics engines for inference and control (2018) (458)
- Attend, Infer, Repeat: Fast Scene Understanding with Generative Models (2016) (456)
- Imagination-Augmented Agents for Deep Reinforcement Learning (2017) (450)
- Distral: Robust multitask reinforcement learning (2017) (438)
- Unsupervised Learning of 3D Structure from Images (2016) (366)
- Learning by Playing - Solving Sparse Reward Tasks from Scratch (2018) (339)
- Gradient Estimation Using Stochastic Computation Graphs (2015) (336)
- Maximum a Posteriori Policy Optimisation (2018) (334)
- Distributed Distributional Deterministic Policy Gradients (2018) (333)
- Learning Generative Texture Models with extended Fields-of-Experts (2009) (274)
- Learning an Embedding Space for Transferable Robot Skills (2018) (273)
- Reinforcement and Imitation Learning for Diverse Visuomotor Skills (2018) (253)
- Memory-based control with recurrent neural networks (2015) (236)
- Data-efficient Deep Reinforcement Learning for Dexterous Manipulation (2017) (216)
- The Shape Boltzmann Machine: A Strong Model of Object Shape (2012) (202)
- Stabilizing Transformers for Reinforcement Learning (2019) (195)
- A Generalist Agent (2022) (194)
- Learning and Transfer of Modulated Locomotor Controllers (2016) (191)
- Learning human behaviors from motion capture by adversarial imitation (2017) (174)
- Robust Imitation of Diverse Behaviors (2017) (174)
- Filtering Variational Objectives (2017) (170)
- dm_control: Software and Tasks for Continuous Control (2020) (159)
- Critic Regularized Regression (2020) (150)
- Emergent Coordination Through Competition (2019) (113)
- Neural probabilistic motor primitives for humanoid control (2018) (102)
- Learning a Generative Model of Images by Factoring Appearance and Shape (2011) (100)
- Woulda, Coulda, Shoulda: Counterfactually-Guided Policy Search (2018) (90)
- Meta reinforcement learning as task inference (2019) (90)
- Learning model-based planning from scratch (2017) (87)
- Information asymmetry in KL-regularized RL (2019) (86)
- Hierarchical visuomotor control of humanoids (2018) (85)
- Searching for objects driven by context (2012) (82)
- V-MPO: On-Policy Maximum a Posteriori Policy Optimization for Discrete and Continuous Control (2019) (82)
- Visual Boundary Prediction: A Deep Neural Prediction Network and Quality Dissection (2014) (75)
- Metacontrol for Adaptive Imagination-Based Optimization (2017) (69)
- Actor-Critic Reinforcement Learning with Energy-Based Policies (2012) (68)
- Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures (2018) (64)
- Catch & Carry: Reusable Neural Controllers for Vision-Guided Whole-Body Tasks (2019) (62)
- Meta-learning of Sequential Strategies (2019) (59)
- RL Unplugged: Benchmarks for Offline Reinforcement Learning (2020) (56)
- A Generalized Training Approach for Multiagent Learning (2019) (55)
- RL Unplugged: A Suite of Benchmarks for Offline Reinforcement Learning (2020) (55)
- Mix&Match - Agent Curricula for Reinforcement Learning (2018) (55)
- Relative Entropy Regularized Policy Iteration (2018) (53)
- Continuous-Discrete Reinforcement Learning for Hybrid Control in Robotics (2020) (52)
- Neural Production Systems (2021) (50)
- From Motor Control to Team Play in Simulated Humanoid Football (2021) (50)
- Learning to Pass Expectation Propagation Messages (2013) (43)
- Value constrained model-free continuous control (2019) (43)
- The Termination Critic (2019) (40)
- Hindsight Credit Assignment (2019) (40)
- Action and Perception as Divergence Minimization (2020) (40)
- Exploiting Hierarchy for Learning and Transfer in KL-regularized RL (2019) (38)
- A Distributional View on Multi-Objective Policy Optimization (2020) (38)
- Self-supervised Learning of Image Embedding for Continuous Control (2019) (37)
- Credit Assignment Techniques in Stochastic Computation Graphs (2019) (37)
- Imagined Value Gradients: Model-Based Policy Optimization with Transferable Latent Dynamics Models (2019) (33)
- Kernel-Based Just-In-Time Learning for Passing Expectation Propagation Messages (2015) (31)
- Data-efficient Hindsight Off-policy Option Learning (2020) (30)
- Game Plan: What AI can do for Football, and What Football can do for AI (2020) (30)
- CoMic: Complementary Task Learning & Mimicry for Reusable Skills (2020) (25)
- Bayes-Adaptive Simulation-based Search with Value Function Approximation (2014) (25)
- Compositional Transfer in Hierarchical Reinforcement Learning (2019) (25)
- Behavior Priors for Efficient Reinforcement Learning (2020) (24)
- Learning Dexterous Manipulation from Suboptimal Experts (2020) (23)
- Composing Entropic Policies using Divergence Correction (2018) (22)
- Learning to swim in potential flow (2020) (21)
- Regularized Hierarchical Policies for Compositional Transfer in Robotics (2019) (21)
- Counterfactual Credit Assignment in Model-Free Reinforcement Learning (2020) (21)
- Weakly Supervised Learning of Foreground-Background Segmentation Using Masked RBMs (2011) (21)
- Towards General and Autonomous Learning of Core Skills: A Case Study in Locomotion (2020) (20)
- Offline Meta-Reinforcement Learning for Industrial Insertion (2021) (18)
- Neural belief states for partially observed domains (2018) (17)
- Catch & Carry (2020) (17)
- Particle Value Functions (2017) (15)
- Twisted Variational Sequential Monte Carlo (2018) (15)
- Retrieval-Augmented Reinforcement Learning (2022) (15)
- RL Unplugged: A Collection of Benchmarks for Offline Reinforcement Learning (2020) (14)
- Imitate and Repurpose: Learning Reusable Robot Movement Skills From Human and Animal Behaviors (2022) (13)
- Reinforced Variational Inference (2015) (13)
- Divide-and-Conquer Monte Carlo Tree Search For Goal-Directed Planning (2020) (13)
- Reinforcement Learning Agents acquire Flocking and Symbiotic Behaviour in Simulated Ecosystems (2019) (12)
- Is Curiosity All You Need? On the Utility of Emergent Behaviours from Curious Exploration (2021) (12)
- Local Search for Policy Iteration in Continuous Control (2020) (12)
- Towards Real Robot Learning in the Wild: A Case Study in Bipedal Locomotion (2021) (12)
- Learning Dynamics Models for Model Predictive Agents (2021) (11)
- On Multi-objective Policy Optimization as a Tool for Reinforcement Learning (2021) (11)
- Learning Transferable Motor Skills with Hierarchical Latent Mixture Policies (2021) (11)
- Temporal Difference Uncertainties as a Signal for Exploration (2020) (11)
- Value-driven Hindsight Modelling (2020) (10)
- Direction Opponency, Not Quadrature, Is Key to the 1/4 Cycle Preference for Apparent Motion in the Motion Energy Model (2010) (10)
- Physically Embedded Planning Problems: New Challenges for Reinforcement Learning (2020) (9)
- Robust Constrained Reinforcement Learning for Continuous Control with Model Misspecification (2020) (8)
- Learning Hierarchical Information Flow with Recurrent Neural Modules (2017) (8)
- The Body is Not a Given: Joint Agent Policy Learning and Morphology Evolution (2019) (8)
- Approximate Inference in Discrete Distributions with Monte Carlo Tree Search and Value Functions (2019) (8)
- NeRF2Real: Sim2real Transfer of Vision-guided Bipedal Motion Skills using Neural Radiance Fields (2022) (8)
- Reusable neural skill embeddings for vision-guided whole body movement and object manipulation (2019) (8)
- NeuPL: Neural Population Learning (2022) (7)
- Imagination-Based Decision Making with Physical Models in Deep Neural Networks (2016) (7)
- Evaluating model-based planning and planner amortization for continuous control (2021) (7)
- Learning Coordinated Terrain-Adaptive Locomotion by Imitating a Centroidal Dynamics Planner (2021) (6)
- Collect & Infer - a fresh look at data-efficient Reinforcement Learning (2021) (6)
- Direct Policy Gradients: Direct Optimization of Policies in Discrete Action Spaces (2019) (6)
- COptiDICE: Offline Constrained Reinforcement Learning via Stationary Distribution Correction Estimation (2022) (6)
- Transferring Task Goals via Hierarchical Reinforcement Learning (2018) (5)
- Simple Sensor Intentions for Exploration (2020) (5)
- Learning generative models of mid-level structure in natural images (2012) (5)
- MO2: Model-Based Offline Options (2022) (4)
- Multiagent off-screen behavior prediction in football (2022) (4)
- Entropic Desired Dynamics for Intrinsic Control (2021) (3)
- Importance Weighted Policy Learning and Adaption (2020) (3)
- Quinoa: a Q-function You Infer Normalized Over Actions (2019) (3)
- Beyond Tabula-Rasa: a Modular Reinforcement Learning Approach for Physically Embedded 3D Sokoban (2020) (3)
- Simplex Neural Population Learning: Any-Mixture Bayes-Optimality in Symmetric Zero-sum Games (2022) (3)
- Entropic Policy Composition with Generalized Policy Improvement and Divergence Correction (2018) (2)
- A Constrained Multi-Objective Reinforcement Learning Framework (2021) (2)
- The Learning Workshop Snowbird (2010) (2)
- Multimodal Nonlinear Filtering Using Gauss-Hermite Quadrature (2011) (2)
- Just-In-Time Kernel Regression for Expectation Propagation (2015) (2)
- Success at any cost: value constrained model-free continuous control (2018) (2)
- Representation Learning in Deep RL via Discrete Information Bottleneck (2022) (1)
- Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, AISTATS 2014, Reykjavik, Iceland, April 22-25, 2014 (2014) (1)
- Forgetting and Imbalance in Robot Lifelong Learning with Off-policy Data (2022) (1)
- CoMic: Co-Training and Mimicry for Reusable Skills (2020) (1)
- Stateful active facilitator: Coordination and Environmental Heterogeneity in Cooperative Multi-Agent Reinforcement Learning (2022) (1)
- Revisiting Gaussian mixture critics in off-policy reinforcement learning: a sample-based approach (2022) (1)
- Lossless Adaptation of Pretrained Vision Models For Robotic Manipulation (2023) (1)
- SkillS: Adaptive Skill Sequencing for Efficient Temporally-Extended Exploration (2022) (1)
- Learning Skill Embeddings for Transferable Robot Skills (2017) (1)
- Leveraging Jumpy Models for Planning and Fast Learning in Robotic Domains (2023) (0)
- Entropic Desired Dynamics for Intrinsic Control: Supplemental Material (2021) (0)
- Direction Selectivity of Neurons in the Striate Cortex Increases as Stimulus Contrast Is Decreased (2015) (0)
- Data augmentation for efficient learning from parametric experts (2022) (0)
- Offline Distillation for Robot Lifelong Learning with Imbalanced Experience (2022) (0)
- Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning (2023) (0)
- Intelligent Perception (0)
- Continuous control with deep reinforcement learning (2016) (0)
- Mix & Match – Agent Curricula for Reinforcement Learning [Appendix] (2018) (0)
- Neural Expectation Maximization (2017) (0)
- Top Down Attentional Modulation for Object Recognition? (2003) (0)
- A Strong Model of Object Shape (2017) (0)
- Multimodal nonlinear filtering using Gauss-Hermite quadrature (2011) (0)
- Passing Expectation Propagation Messages with Kernel Methods (2015) (0)
- The Shape Boltzmann Machine (2013) (0)
- Variational Inference as Reinforcement Learning (2015) (0)
- Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures (2019) (0)
- Coordinating Policies Among Multiple Agents via an Intelligent Communication Channel (2022) (0)
- Learning Robot Skill Embeddings (2017) (0)
- Thalamus Gated Recurrent Modules (2017) (0)
- Learning an Embedding Space for Transferable Robot Skills (2018) (0)
- Structure & Priors in Reinforcement Learning (SPiRL) (2019) (0)
- Neural Production Systems: Learning Rule-Governed Visual Dynamics (2021) (0)
- Leave Graphs Alone: Addressing Over-Squashing without Rewiring (2022) (0)
- Spatial integration in direction selective cortical neurons and the notion of a fundamental spatial subunit (2008) (0)
- COptiDICE: Offline Constrained Reinforcement Learning via Stationary Distribution Correction Estimation (2022) (0)
- Deep Segmentation Networks (2010) (0)