
Biologically-plausible learning rules for artificial neural networks

Artificial neural networks (ANNs) are conceptually simple: the combination of inputs and weights in a classical ANN can be represented as a single matrix product followed by an elementwise nonlinearity. However, as the number of learned parameters increases, it becomes very difficult to train these networks effectively. Most… Read More »Biologically-plausible learning rules for artificial neural networks
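
As a quick illustration of that claim, here is a minimal NumPy sketch of a single classical ANN layer as one matrix product followed by an elementwise nonlinearity. The function name and the choice of ReLU are illustrative assumptions, not code from the post.

```python
import numpy as np

def dense_layer(x, W, b):
    """One classical ANN layer: combine inputs and weights in a single
    matrix product, then apply an elementwise nonlinearity (ReLU here)."""
    z = W @ x + b            # matrix product plus bias
    return np.maximum(z, 0)  # elementwise nonlinearity

# Toy usage: 4 inputs mapped to 3 hidden units.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
W = rng.normal(size=(3, 4))  # every learned parameter lives in W and b
b = np.zeros(3)
h = dense_layer(x, W, b)
print(h.shape)  # (3,)
```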

Learning partially-observable higher-order sequences using local and immediate credit assignment

One of our key projects is a memory system that can learn to associate distant cause & effect while only using local, immediate & unsupervised credit assignment. Our approach is called RSM – Recurrent Sparse Memory. We recently uploaded a preprint describing RSM. This is the first of several blog… Read More »Learning partially-observable higher-order sequences using local and immediate credit assignment

Cerebral networks for conscious access and decision making

Originally published in March 2019 in an electronic journal in Japanese. Introduction: The purpose of this essay is to survey the relationship between decision making and large-scale cerebral networks with regard to conscious access, a purported neural correlate of consciousness, and to provide clues for computational modelling and general understanding… Read More »Cerebral networks for conscious access and decision making

Learning distant cause and effect using only local and immediate credit assignment

We’ve uploaded a new paper to arXiv presenting our algorithm for biologically-plausible learning of distant cause & effect using only local and immediate credit assignment. This is a big step for us – it ticks almost all our requirements for a general purpose representation. The training regime is unsupervised &… Read More »Learning distant cause and effect using only local and immediate credit assignment

Predictive Capsules Networks – Research update

We recently talked about Capsules networks and equivariances. NB: If you’re not familiar with Capsules networks, read this first. Our primary objective with Capsules networks is to exploit their enhanced generalization abilities. However, what we’ve found instead raises new questions about how generalization can be measured and whether Capsules networks are… Read More »Predictive Capsules Networks – Research update

Experiment Setup Overview

Experiment Infrastructure at Project AGI

It’s such a joy to be able to test an idea: to go straight to the idea without wrestling with the tools. We recently developed an experimental setup which, so far, looks like it will do just that. I’m excited about it and hope it can help you too, so here it is. We’ll go through why we created another framework, and how each module in the experiment setup works.

Understanding Equivariance

We are exploring the nature of equivariance, a concept that is now closely associated with the capsules network architecture (see the key papers by Sabour et al. and Hinton et al.). Machine learning representations that capture equivariance must learn the way that patterns in the input vary together, in addition to statistical clusters in… Read More »Understanding Equivariance
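
To make the idea concrete, here is a small NumPy sketch of one familiar case: a 1-D circular correlation is translation-equivariant, so shifting the input shifts the output feature map by the same amount, whereas an invariant summary such as a max discards where the pattern occurred. The function and variable names are illustrative assumptions, and this shows ordinary convolutional (translation) equivariance rather than the pose equivariance that capsules target.

```python
import numpy as np

def circ_corr(x, k):
    """1-D circular cross-correlation: y[i] = sum_j k[j] * x[(i + j) mod n].
    This map is translation-equivariant: shifting x shifts y identically."""
    n = len(x)
    return np.array([np.sum(k * np.roll(x, -i)[:len(k)]) for i in range(n)])

rng = np.random.default_rng(0)
x = rng.normal(size=8)          # a toy 1-D input pattern
k = np.array([1.0, -1.0, 0.5])  # a small filter

shift = 3
mapped_then_shifted = np.roll(circ_corr(x, k), shift)   # T(f(x))
shifted_then_mapped = circ_corr(np.roll(x, shift), k)   # f(T(x))
assert np.allclose(mapped_then_shifted, shifted_then_mapped)  # equivariance

# An *invariant* summary, by contrast, throws the transformation away:
assert np.isclose(circ_corr(x, k).max(), circ_corr(np.roll(x, shift), k).max())
```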