We’ve uploaded a new paper to arXiv presenting our algorithm for biologically plausible learning of distant cause & effect using only local and immediate credit assignment.
This is a big step for us – it ticks almost all our requirements for a general-purpose representation. The training regime is unsupervised & continuous. Local and immediate credit assignment is a particularly challenging constraint, because deep backpropagation and backpropagation through time (BPTT) are key foundations of modern artificial neural networks. It’s a substantial departure from popular practice – and it works well in a range of tasks!
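To make the constraint concrete, here’s a toy sketch in NumPy – emphatically not the algorithm from the paper, just a generic delta-rule illustration – of what “local and immediate” credit assignment means in practice: every weight update is computed only from quantities available at the current timestep, so no error signal is propagated back through layers or through time.

```python
# Illustrative only: a local, immediate update rule (not the paper's algorithm).
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 8, 32
W_in = rng.normal(scale=0.1, size=(n_hidden, n_in))  # fixed random encoder (assumption)
W_out = np.zeros((n_in, n_hidden))                   # trained readout
lr = 0.01

h = np.zeros(n_hidden)
x_t = rng.normal(size=n_in)

for t in range(1000):
    # Hidden state built from the current input and the previous state.
    h = np.tanh(W_in @ x_t + 0.5 * h)

    # Predict the next observation from the current hidden state.
    x_next = rng.normal(size=n_in)      # stand-in for the real next input
    prediction = W_out @ h
    error = x_next - prediction         # immediate, local prediction error

    # Local delta-rule update: uses only (error, h) from this timestep.
    # BPTT, by contrast, would differentiate through every past hidden state.
    W_out += lr * np.outer(error, h)

    x_t = x_next
```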
In addition to the ideological goals described above, we can also report some practical benefits. The algorithm runs fast, and it uses far less memory than existing approaches to sequence learning – typically 3–10% of the memory of an equivalent LSTM trained with BPTT.
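As a rough illustration of where the saving comes from (the sizes below are assumptions for illustration, not the measurements from the paper): truncated BPTT has to cache activations for every timestep in its unrolled window, whereas an immediate local update only ever needs the current step’s state.

```python
# Back-of-envelope memory comparison with assumed sizes (illustrative only).
hidden_units = 512
batch_size = 64
bptt_window = 100                    # truncation length (assumption)
bytes_per_float = 4
acts_per_step = 4 * hidden_units     # rough per-step LSTM activation count (assumption)

bptt_cache = batch_size * bptt_window * acts_per_step * bytes_per_float
local_cache = batch_size * 1 * acts_per_step * bytes_per_float

print(f"BPTT activation cache : {bptt_cache / 1e6:.1f} MB")
print(f"Local-update cache    : {local_cache / 1e6:.1f} MB")
print(f"Ratio                 : {local_cache / bptt_cache:.1%}")
```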
Over the next few weeks we’ll post some articles expanding on the experiments in the paper. Each experiment tackles a different aspect of sequence learning. They include:
- Learning to associate distant causes & effects in a stochastic process
- Learning to predict higher-order, partially-observable sequences of images
- Learning to navigate an agent through a maze
- Learning to predict the next word in a document (language modelling)
You can see a video of the maze navigation behaviour here:
The model is also generative, meaning we can run its predictions in a loop and sample sequences of observations from the model. After training on the Penn Treebank corpus (excerpts from the Wall Street Journal), we generated text by priming the model with an unseen string and observing what it produced in response (a minimal sketch of this sampling loop follows the example below). Here’s an example:
- Primer: "the company has reported declines …"
- Generated: "… in the company 's athletic officials in the past four years <end>"
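For the curious, here’s a minimal sketch of the sampling loop described above. It assumes a hypothetical interface – `model.step(token)` consumes one token, updates the model’s internal state and returns a probability distribution over the next token, and `model.vocab` maps indices back to words – so treat it as an outline, not the code used in the paper.

```python
# Sketch of autoregressive generation: prime the model, then feed its own
# predictions back in as input. Model interface is hypothetical.
import numpy as np

def generate(model, primer_tokens, max_len=30, end_token="<end>"):
    rng = np.random.default_rng()

    # Prime the model: feed the unseen string one token at a time.
    for token in primer_tokens:
        next_probs = model.step(token)

    # Run the predictions in a loop: sample a token, feed it back in.
    generated = []
    for _ in range(max_len):
        idx = rng.choice(len(next_probs), p=next_probs)
        word = model.vocab[idx]
        generated.append(word)
        if word == end_token:
            break
        next_probs = model.step(word)
    return generated
```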
Of course, this is the kind of nonsense you get from a language model that has no real-world, common-sense grounding of these concepts. At least it’s grammatically correct!
More to come. Watch this space…