We don’t know how to do it yet – but we’re fixated on General Intelligence and we have a map for the journey. To guide us, we try to understand the computational properties of human general intelligence.

We believe there are no fundamental obstacles to artificial general intelligence – no new physical discoveries are required. We’re just looking for better algorithms.

We believe the first hurdle is a generative memory system capable of unsupervised continuous learning in nonstationary environments. We narrow the field to biologically plausible machine learning techniques that deliver robust empirical performance. We’ll use anything that works and fits into an AGI framework. The real world doesn’t come with score-cards or loss functions – we have only blood, sweat and tears!
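As a deliberately tiny illustration of what we mean by unsupervised continuous learning, here is a linear autoencoder updated one observation at a time on a drifting stream. Everything in it – the sizes, the learning rate, the tied weights – is our own illustrative choice, not a committed design; the point is only that the reconstruction error itself is the teaching signal, and learning never stops when the distribution shifts.

```python
import numpy as np

rng = np.random.default_rng(0)

# A linear autoencoder trained online, one sample at a time, no labels.
D, H = 16, 4                             # input / hidden sizes (arbitrary)
W = rng.normal(scale=0.1, size=(H, D))   # tied encoder/decoder weights

def step(x, lr=0.01):
    """One unsupervised update on a single sample; returns squared error."""
    global W
    h = W @ x                 # encode into memory
    x_hat = W.T @ h           # reproject memory state into input space
    err = x - x_hat
    # Gradient-descent update for 0.5*||x - W.T @ W @ x||^2 (tied weights):
    W += lr * (np.outer(h, err) + np.outer(W @ err, x))
    return float(err @ err)

# A nonstationary stream: the data distribution shifts halfway through,
# and the memory must keep adapting without a restart.
proto_a, proto_b = rng.normal(size=D), rng.normal(size=D)
errs = []
for t in range(2000):
    proto = proto_a if t < 1000 else proto_b
    errs.append(step(proto + 0.05 * rng.normal(size=D)))
```

Reconstruction error falls during the first phase, spikes at the shift, and falls again – no restart, no labels, no external score-card.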

Recognizing an abstract concept from a specific instance. Reprojecting the internal state of memory into the input space reveals a generated, "generic" 5.

The architecture of the thalamocortical loop is believed to play a key role in attention. Reproduced from Sherman SM and Guillery RW (2006): Exploring the Thalamus and its Role in Cortical Function. Cambridge, MA: MIT Press.

Attention and selective learning

The second hurdle is how to exploit the memory to achieve goals. We want to selectively encode the most salient and useful features. We need one-shot learning at critical moments, and extremely slow learning at other times – learning both faster and slower than other machine learning algorithms! We will look to computational models of attention and episodic memory for these capabilities.
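One way to picture "one-shot at critical moments, near-frozen otherwise" is a learning rate gated by surprise – how large the current prediction error is relative to its recent running scale. The sketch below is our own toy illustration (the class name, constants and novelty threshold are all assumptions, not an established algorithm):

```python
import numpy as np

class SalienceGatedMemory:
    """Toy memory trace whose learning rate is modulated by surprise."""

    def __init__(self, dim, base_lr=1e-3, max_lr=1.0):
        self.w = np.zeros(dim)    # the memory trace being learned
        self.base_lr = base_lr    # crawl speed for familiar input
        self.max_lr = max_lr      # one-shot speed for novel input
        self.err_scale = 1.0      # running scale of the typical error

    def observe(self, x):
        err = x - self.w
        norm = np.linalg.norm(err)
        # Novelty: how many times larger than usual is this error?
        # (threshold of 3x is an arbitrary illustrative choice)
        novelty = max(0.0, norm / (self.err_scale + 1e-8) - 3.0)
        lr = self.base_lr + (self.max_lr - self.base_lr) * np.tanh(novelty)
        self.w += lr * err        # near one-shot encoding when lr -> max_lr
        # Slowly track the typical error magnitude:
        self.err_scale = 0.99 * self.err_scale + 0.01 * norm
        return lr
```

On a familiar, low-variance stream the returned learning rate sits at its tiny floor; a single wildly novel observation drives it almost to 1.0, so that event is encoded in one shot.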

Simulation and planning

We want to use our memory system for hierarchical simulation and planning. We firmly believe that the same representation should be used for perception, understanding, prediction and planning alike. This means being able to simulate the world inside the memory system – and use reinforcement learning to shape the simulation.
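A minimal sketch of planning-by-simulation: roll candidate action sequences forward through a latent "world model" and act on the best imagined outcome. The dynamics below are hand-supplied for illustration (and the imagined cost stands in for a learned reward signal); in the architecture described above, the same memory that serves perception and prediction would supply them.

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0.9, 0.1],
              [-0.1, 0.9]])          # assumed latent dynamics
b = np.array([0.0, 0.5])             # assumed effect of a scalar action
goal = np.array([1.0, 0.0])          # desired latent state

def rollout(z, actions):
    """Simulate a trajectory entirely in memory; return its imagined cost."""
    cost = 0.0
    for a in actions:
        z = A @ z + b * a
        cost += np.sum((z - goal) ** 2)
    return cost

def plan(z, horizon=10, candidates=256):
    """Random-shooting planner: imagine many futures, keep the best."""
    seqs = rng.uniform(-1.0, 1.0, size=(candidates, horizon))
    costs = [rollout(z, seq) for seq in seqs]
    # Execute only the first action, then re-plan (MPC style).
    return seqs[int(np.argmin(costs))][0]

# Closed loop: re-plan at every step and act in the "real" dynamics.
z = np.zeros(2)
for _ in range(15):
    z = A @ z + b * plan(z)
```

Nothing here is hierarchical yet – it is the smallest possible instance of using one internal model for both prediction and planning. Doing nothing leaves the state a distance of 1.0 from the goal; planning in imagination closes most of that gap.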


Embodiment

Interaction with the world is crucial to exploring and understanding it. Embodied systems are exposed to much richer but far harsher conditions. Some believe that enough training data will overcome most of the limitations of today’s algorithms. We’re betting the other way: better algorithms must come first. We’ll get (back) to the robots later!

Here's a robot we made earlier