Research Strategy

We don't know how, but we'll get there...

We don’t know how to do it yet – but we’re fixated on General Intelligence, and we have a map for the journey. To guide us, we study the computational properties of human general intelligence.

We believe there are no fundamental obstacles to artificial general intelligence – no new physical discoveries are required. We’re just looking for better algorithms.

We don’t even need to succeed in creating AGI: incrementally more general-purpose algorithms will find useful applications along the way. Microsoft and OpenAI recently announced their intention to work on “pre-AGI” technology.

We believe the first hurdle is a generative memory system capable of unsupervised continuous learning in nonstationary environments. We narrow the field to biologically plausible machine learning techniques that deliver robust empirical performance. We’ll use anything that works and fits into an AGI framework.

An early success for us: recognizing an abstract concept from a specific instance, using Self-Organizing Maps (SOMs). Reprojecting the internal state of the memory into the input space reveals a generated, “generic” 5.
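The idea above can be sketched in a few lines: a SOM’s unit weights live in the input space, so “reprojecting” a unit is just reshaping its weight vector back into an input-shaped pattern, which looks like a generic prototype of what it has seen. This is a minimal NumPy sketch under our own assumptions (toy 4×4 patterns, grid size, learning schedule) – not the authors’ actual model.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Fit a minimal 2-D Self-Organizing Map to `data` (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h * w, data.shape[1]))
    # Grid coordinates of each unit, used for the neighbourhood function.
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5    # shrinking neighbourhood
        for x in rng.permutation(data):
            # Best-matching unit: the unit whose weights are closest to x.
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            nbr = np.exp(-d2 / (2 * sigma ** 2))   # Gaussian neighbourhood
            weights += lr * nbr[:, None] * (x - weights)
    return weights

# Toy dataset: noisy copies of a single 4x4 "concept" (a diagonal pattern).
rng = np.random.default_rng(1)
data = np.vstack([np.eye(4).ravel() + 0.05 * rng.random(16) for _ in range(50)])

som = train_som(data)
# Reprojection: reshape a unit's weights into the input space to reveal a
# generated, prototype-like ("generic") version of the learned pattern.
generic = som[0].reshape(4, 4)
```

Each trained unit ends up holding an averaged, denoised version of the inputs it won – the “generic” instance of the concept.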

Attention and selective learning

The second hurdle is how to exploit the memory to achieve goals. We want to selectively encode the most salient and useful features. We need one-shot learning at critical moments, and extremely slow learning at other times – learning both faster and slower than other machine learning algorithms! We will look to computational models of attention and episodic memory for these capabilities.

The figure at left is reproduced from Sherman SM and Guillery RW (2006): Exploring the Thalamus and its Role in Cortical Function. Cambridge, MA: MIT Press. The architecture of the thalamocortical loop is believed to play a key role in attentional modulation of representations in the neocortex, and this work was highly influential for us.

Mental Simulation & Planning

We want to use our memory system for hierarchical simulation and planning. We firmly believe that the same representation should be used for perception, understanding, prediction and planning alike. This means being able to simulate the world inside the memory system – and use reinforcement learning to shape the simulation. This capability is known as Mental Simulation. We will develop the algorithms for learning representations first, and then extend them into sequential generative models capable of evaluating sequences of actions.
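The core loop of mental simulation can be made concrete: roll out candidate action sequences through a predictive model in imagination, score the predicted outcomes, and pick the best plan without ever acting in the real world. This toy grid-world, hand-coded transition model and reward function are illustrative stand-ins for a learned sequential generative model.

```python
from itertools import product

GOAL = (2, 2)
ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def model(state, action):
    """Predicted next state; stands in for a learned generative model."""
    dx, dy = ACTIONS[action]
    return (state[0] + dx, state[1] + dy)

def simulate(state, plan):
    """Mentally execute a plan, returning the final predicted state."""
    for a in plan:
        state = model(state, a)
    return state

def plan_by_simulation(start, horizon=2):
    """Evaluate every fixed-horizon plan in imagination; pick the best."""
    def reward(s):
        # Negative Manhattan distance to the goal: closer is better.
        return -abs(s[0] - GOAL[0]) - abs(s[1] - GOAL[1])
    plans = product(ACTIONS, repeat=horizon)
    return max(plans, key=lambda p: reward(simulate(start, p)))

best = plan_by_simulation((0, 0))
final = simulate((0, 0), best)
```

The key design choice is that the same model serves perception-style prediction (one step) and planning (many steps chained in imagination).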

Here’s a robot we made earlier…


Interaction with the world is crucial to exploring and understanding it. Embodied systems are exposed to much richer but far harsher conditions. Some believe that enough training data will solve most of the limitations of today’s algorithms. We’re betting the other way: we need better algorithms first. We’ll get (back) to the robots later!