Research
The goal of my research is to discover the computational principles of human thinking by building intelligent machines that learn through interaction with complex simulated worlds. Within deep reinforcement learning, my work focuses on:
- Unsupervised world models, learned from raw video data, that let artificial intelligence develop a general understanding of the world and plan by imagining the future outcomes of its actions. The main challenges here are representation learning and temporal abstraction. A minimal sketch of planning by imagination follows this list.
- Unsupervised agent objectives that let agents autonomously explore and influence their environment, moving artificial intelligence beyond narrow, task-specific behaviors. This includes artificial curiosity, information gain, empowerment, skill discovery, and active inference; a toy example of one such objective appears after the key papers below.
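To make the first direction concrete, here is a minimal sketch of planning by imagination in a latent world model. The linear dynamics and reward functions below are hypothetical stand-ins for the learned neural networks in PlaNet and Dreamer, and the random-shooting planner is a simplified version of planning in latent space; it is an illustration, not the actual implementation.

```python
# Minimal sketch of planning by imagining action outcomes in a latent
# world model. The dynamics and reward functions are hypothetical linear
# stand-ins for learned networks.
import numpy as np

rng = np.random.default_rng(0)

A = rng.normal(size=(4, 4)) * 0.5  # latent transition (stand-in)
B = rng.normal(size=(4, 2)) * 0.5  # action effect (stand-in)
w = rng.normal(size=4)             # reward head (stand-in)

def dynamics(z, a):
    """Predict the next latent state from the current state and action."""
    return np.tanh(A @ z + B @ a)

def reward(z):
    """Predict the reward of a latent state."""
    return float(w @ z)

def plan(z0, horizon=10, candidates=256):
    """Random-shooting planner: imagine candidate action sequences
    entirely in latent space, return the first action of the best one."""
    best_return, best_action = -np.inf, None
    for _ in range(candidates):
        actions = rng.uniform(-1, 1, size=(horizon, 2))
        z, total = z0, 0.0
        for a in actions:
            z = dynamics(z, a)
            total += reward(z)
        if total > best_return:
            best_return, best_action = total, actions[0]
    return best_action

print(plan(np.zeros(4)))
```

Real agents replace these stand-ins with recurrent state-space models learned from pixels, and Dreamer replaces the shooting planner with a learned policy trained on imagined rollouts.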
Key papers: PlaNet, Dreamer, DreamerV2, Plan2Explore, APD, ClockworkVAE
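For the second direction, here is a toy illustration of an intrinsic exploration reward computed from ensemble disagreement, in the spirit of Plan2Explore. The ensemble members are hypothetical stand-ins for learned one-step prediction models; their disagreement serves as a proxy for the information the agent would gain by visiting a state.

```python
# Toy intrinsic reward from ensemble disagreement, in the spirit of
# Plan2Explore. The ensemble members are hypothetical stand-ins for
# learned one-step prediction models.
import numpy as np

rng = np.random.default_rng(0)

# Five stand-in forward models; each predicts the next state differently.
ensemble = [rng.normal(size=(3, 3)) for _ in range(5)]

def intrinsic_reward(state):
    """Reward states where the ensemble disagrees, i.e. where the agent's
    knowledge of the dynamics is still uncertain (information-gain proxy)."""
    preds = np.stack([np.tanh(M @ state) for M in ensemble])
    return float(preds.var(axis=0).mean())  # mean predictive variance

print(intrinsic_reward(rng.normal(size=3)))
```

The agent maximizes this reward inside the world model's imagination, seeking out states it cannot yet predict well.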
Progress
World models: PlaNet (2018), Dreamer (2019), DreamerV2 (2020)
Foundations: DeepNeuro (2019), APD (2020)
Unsupervised exploration: Plan2Explore (2020), IC2 (2021)
Skill discovery: MPH (2018), LSP (2020), LEXA (2021)
Temporal abstraction: ClockworkVAE (2021)
Evaluation: AgentEval (2021), Crafter (2021)
Uncertainty estimation: NCP (2018), BayesLayers (2019)
Technical Talk
This 20-minute talk gives an overview of my research. It presents a general framework for designing unsupervised intelligent agents and our practical progress on scaling them up.
It is an invited talk I gave at the ICLR 2021 workshops on Self-Supervised Reinforcement Learning and Never-Ending Reinforcement Learning.
Podcast
Robin invited me to the TalkRL Podcast, where we talk about deep learning & neuroscience, PlaNet, Dreamer, world models, latent dynamics, curious agents, and more!