I am a PhD student in artificial intelligence at the University of Toronto with Jimmy Ba and Geoffrey Hinton and a researcher at Google Brain. I’m currently visiting Pieter Abbeel’s lab at UC Berkeley. Previously, I completed my MRes in Computational Statistics and Machine Learning at UCL and the Gatsby Unit with Tim Lillicrap and Karl Friston. My work is supported by Canada’s Vanier Scholarship.
Preferred contact: [email protected]
The goal of my research is to discover the computational principles of human thinking by building intelligent machines that learn through interaction with complex simulated worlds. Within deep reinforcement learning, my work focuses on:
- Unsupervised world models learned from raw video data, which let artificial intelligence develop a general understanding of the world and plan by imagining future outcomes of actions. The main challenges here are representation learning and temporal abstraction.
- Unsupervised agent objectives that let agents autonomously explore and influence their environment, moving artificial intelligence beyond narrow task-specific behaviors. This includes artificial curiosity, information gain, empowerment, skill discovery, and active inference.
You can hear more about my work on the TalkRL Podcast and on my research page.
See Google Scholar for more publications.
Discovering and Achieving Goals via World Models
NeurIPS 2021 (26%), URL 2021 (oral), SSL 2021 (oral)
Benchmarking the Spectrum of Agent Capabilities
ICLR 2022, DRLW 2021 (oral)
Clockwork Variational Autoencoders
NeurIPS 2021 (26%)
Evaluating Agents without Rewards
BARL 2020 (oral)
Latent Skill Planning for Exploration and Transfer
ICLR 2021 (28%)
Mastering Atari with Discrete World Models
ICLR 2021 (28%)
Planning to Explore via Self-Supervised World Models
ICML 2020 (22%)
Dream to Control: Learning Behaviors by Latent Imagination
ICLR 2020 (oral, 4%), DRLW 2019 (oral)
A Deep Learning Framework for Neuroscience
Noise Contrastive Priors for Functional Uncertainty
UAI 2019 (26%)
Learning Latent Dynamics for Planning from Pixels
ICML 2019 (23%)
Sample-Efficient Reinforcement Learning with Stochastic Ensemble Value Expansion
NeurIPS 2018 (oral, 0.6%)
Sim-to-Real: Learning Agile Locomotion For Quadruped Robots
RSS 2018 (31%)
Danijar Hafner is a PhD candidate in artificial intelligence at the University of Toronto with Jimmy Ba and Geoffrey Hinton. He is also a researcher at the Brain Team at Google Research and the Vector Institute. His research focuses on building intelligent machines based on the computational principles of the brain and evaluating them in complex simulations. Specifically, he focuses on learning general world models and on deep reinforcement learning without rewards. Danijar completed his MRes in Computational Statistics and Machine Learning at UCL and the Gatsby Unit with Tim Lillicrap and Karl Friston. His work is supported by Canada’s Vanier Scholarship.