
Continual Few-Shot Learning with Hippocampal Replay

In continual learning, a neural network learns from a stream of data, acquiring new knowledge incrementally; it is not possible to assume an i.i.d., stationary dataset available in a single batch. Catastrophic forgetting of previous knowledge is a well-known challenge. A wide variety of approaches fall broadly into three categories [6]:… Read More »Continual Few-Shot Learning with Hippocampal Replay
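To make the replay idea concrete, here is a minimal sketch of generic replay-based continual learning in Python (PyTorch); the toy model, buffer size and uniform sampling scheme are illustrative assumptions, not the hippocampal replay mechanism described in the post.

```python
import random

import torch
import torch.nn as nn

# Toy model, optimiser and loss; all names and sizes are illustrative.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

replay_buffer = []   # (input, label) pairs kept from earlier parts of the stream
BUFFER_SIZE = 500    # maximum number of stored examples
REPLAY_BATCH = 16    # how many old examples to interleave per update

def train_step(x, y):
    """One update on a new (x, y) pair, mixed with replayed old examples."""
    xs, ys = [x], [y]
    if replay_buffer:
        old = random.sample(replay_buffer, min(REPLAY_BATCH, len(replay_buffer)))
        xs += [ox for ox, _ in old]
        ys += [oy for _, oy in old]

    inputs = torch.stack(xs)    # (batch, 32)
    targets = torch.stack(ys)   # (batch,)
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

    # Store the new example; once the buffer is full, overwrite a random slot.
    if len(replay_buffer) < BUFFER_SIZE:
        replay_buffer.append((x, y))
    else:
        replay_buffer[random.randrange(BUFFER_SIZE)] = (x, y)
    return loss.item()

# Example: samples arriving one at a time (in practice the stream is non-i.i.d.).
for _ in range(100):
    train_step(torch.randn(32), torch.randint(0, 10, ()))
```

The post’s hippocampal replay mechanism differs in detail, but the failure mode it targets (interference from sequential, non-i.i.d. training) is the same.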

Image: https://brickpixels.net/2018/09/01/robot-exploration/
By Ben Teoh

Research Roadmap: 2020-2021

We’ve just undertaken a review and refresh of our research roadmap! The topics and approach we will take in the coming year are all laid out in a new page: Research Roadmap. Our primary topics for the coming year include: Continual Few-Shot Learning (CFSL) via our Episodic memory system; using… Read More »Research Roadmap: 2020-2021

AHA! an ‘Artificial Hippocampal Algorithm’ for Episodic Machine Learning

We’re very happy to report that we recently published a preprint on AHA, an ‘Artificial Hippocampal Algorithm’ for Episodic Machine Learning. It’s the culmination of a multi-year research project and is a starting point for the next wave of developments. This article describes the motivation for developing AHA and a… Read More »AHA! an ‘Artificial Hippocampal Algorithm’ for Episodic Machine Learning

Learning partially-observable higher-order sequences using local and immediate credit assignment

One of our key projects is a memory system that can learn to associate distant cause & effect while only using local, immediate & unsupervised credit assignment. Our approach is called RSM – Recurrent Sparse Memory. We recently uploaded a preprint describing RSM. This is the first of several blog… Read More »Learning partially-observable higher-order sequences using local and immediate credit assignment

Learning distant cause and effect using only local and immediate credit assignment

We’ve uploaded a new paper to arXiv presenting our algorithm for biologically plausible learning of distant cause & effect using only local and immediate credit assignment. This is a big step for us: it ticks almost all our requirements for a general-purpose representation. The training regime is unsupervised &… Read More »Learning distant cause and effect using only local and immediate credit assignment
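As a rough illustration of what “local and immediate credit assignment” means, the sketch below trains a single recurrent layer to predict its next input, where every weight update uses only the activity of the connected units and the error available at the current time step. This is a generic Hebbian/delta-rule sketch, not the RSM algorithm from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 16, 64
lr = 0.01

# Illustrative weights: input, recurrent and readout matrices.
W_in = rng.normal(scale=0.1, size=(n_hidden, n_in))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_out = rng.normal(scale=0.1, size=(n_in, n_hidden))

def step(x_t, x_next, h_prev):
    """One time step: predict the next input, then update weights locally.

    No error is propagated back through time; each update uses only
    pre-/post-synaptic activity and the error available right now.
    """
    global W_in, W_rec, W_out

    h = np.tanh(W_in @ x_t + W_rec @ h_prev)   # recurrent hidden state
    x_pred = W_out @ h                         # prediction of the next input
    err = x_next - x_pred                      # immediate prediction error

    W_out += lr * np.outer(err, h)             # delta rule on the readout
    fb = (W_out.T @ err) * (1.0 - h ** 2)      # one-step local feedback
    W_in += lr * np.outer(fb, x_t)
    W_rec += lr * np.outer(fb, h_prev)
    return h, float((err ** 2).mean())

# Toy usage on a random sequence; a real task would use structured sequences.
xs = rng.normal(size=(200, n_in))
h = np.zeros(n_hidden)
for t in range(len(xs) - 1):
    h, mse = step(xs[t], xs[t + 1], h)
```

The point is only that credit assignment here is local in space and immediate in time; the actual RSM formulation is described in the preprint.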

Predictive Capsules Networks – Research update

We recently talked about Capsules networks and equivariances. NB: If you’re not familiar with Capsules networks, read this first. Our primary objective with Capsules networks is to exploit their enhanced generalization abilities. However, what we’ve found instead raises new questions about how generalization can be measured and whether Capsules networks are… Read More »Predictive Capsules Networks – Research update

Experiment Infrastructure at Project AGI

It’s such a joy to be able to test an idea and go straight to it without wrestling with the tools. We recently developed an experimental setup which, so far, looks like it will do just that. I’m excited about it and hope it can help you too, so here it is. We’ll go through why we created another framework and how each module in the experiment setup works.
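As a flavour of what a config-driven experiment harness can look like (the names, fields and structure below are hypothetical, not the framework described in the post), a run is defined entirely by a small config object so that experiments stay reproducible and components stay swappable.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ExperimentConfig:
    # All fields are illustrative placeholders.
    model: str = "sparse_autoencoder"
    dataset: str = "mnist"
    epochs: int = 10
    seed: int = 42

def load_config(path: str) -> ExperimentConfig:
    """Read hyperparameters from a JSON file so every run is reproducible."""
    with open(path) as f:
        return ExperimentConfig(**json.load(f))

def run(config: ExperimentConfig) -> None:
    # In a real harness each stage (data, model, training, logging) would be
    # a separate, swappable component selected from the config.
    print("Running with config:", asdict(config))

if __name__ == "__main__":
    run(ExperimentConfig())
```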

Understanding Equivariance

We are exploring the nature of equivariance, a concept that is now closely associated with the capsules network architecture (see the key papers by Sabour et al. and Hinton et al.). Machine learning representations that capture equivariance must learn the way that patterns in the input vary together, in addition to statistical clusters in… Read More »Understanding Equivariance
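A small worked example of the idea, independent of capsules: a convolution is translation-equivariant, so shifting the input shifts the feature map by the same amount, rather than leaving it unchanged (which would be invariance). The PyTorch sketch below uses circular padding so the check is exact rather than approximate at the borders.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
image = torch.randn(1, 1, 16, 16)   # a random single-channel "image"
kernel = torch.randn(1, 1, 3, 3)    # a random 3x3 filter

def conv(x):
    # Circular padding, so a circular shift of the input commutes exactly
    # with the convolution (zero padding only matches away from the borders).
    return F.conv2d(F.pad(x, (1, 1, 1, 1), mode="circular"), kernel)

shift = 3
shift_then_conv = conv(torch.roll(image, shifts=shift, dims=-1))
conv_then_shift = torch.roll(conv(image), shifts=shift, dims=-1)

# Equivariance: shifting the input shifts the feature map the same way.
print(torch.allclose(shift_then_conv, conv_then_shift, atol=1e-6))  # True

# Invariance would mean conv(shifted) == conv(original), which does not hold.
print(torch.allclose(shift_then_conv, conv(image)))                 # False
```

Capsules networks aim for richer equivariances than translation (e.g. pose), but the same commuting relationship between a transformation of the input and a corresponding transformation of the representation is the property of interest.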