We were really excited to present a couple of papers recently at IJCNN, the International Joint Conference on Neural Networks.
One of those papers extends our previously published work on AHA, an Artificial Hippocampal Algorithm based on Complementary Learning Systems (CLS) by O’Reilly and McClelland (see the paper summary blog post here).
In the original paper, we demonstrated that AHA can learn rapidly, but it’s a short-term memory, so it forgets rapidly as well.
Here, we showed how it can be used to transfer memories to a long-term memory that learns statistics from many samples (like a conventional ML model, and analogous to the neocortex).
A very high-level summary is given below, in the form of a recent Twitter thread on the topic.
“One-shot learning for the long term: consolidation with an Artificial Hippocampal Algorithm, AHA”
We were really excited to present our work recently at #IJCNN
See the thread below for Cognitive Science and ML implications.
We took a conventional supervised learning model,
augmented it with a complementary hippocampal component (AHA)
The system re-uses learnt concept-primitives to one-shot learn new classes.
AHA is a short-term memory that learns from a single example
a) In the short term, memories are recalled, improving inference
b) Spontaneous ‘replay’ consolidates knowledge for the long term
—> New knowledge is transferred
—> There is no catastrophic forgetting
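The replay-driven consolidation described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: `ShortTermStore` and `LongTermLearner` are invented names, and the long-term model is reduced to running per-class prototypes.

```python
import random

class ShortTermStore:
    """Fast memory: stores each (x, y) pair after a single exposure."""
    def __init__(self):
        self.episodes = []

    def memorise(self, x, y):
        self.episodes.append((x, y))

    def replay(self, n):
        # Re-present stored episodes for offline consolidation.
        return random.choices(self.episodes, k=n)

class LongTermLearner:
    """Slow memory: running per-class mean prototypes, standing in for
    a statistical model that needs many samples to converge."""
    def __init__(self):
        self.counts, self.protos = {}, {}

    def update(self, x, y):
        c = self.counts.get(y, 0) + 1
        p = self.protos.get(y, [0.0] * len(x))
        # Incremental mean update toward the replayed sample.
        self.protos[y] = [pi + (xi - pi) / c for pi, xi in zip(p, x)]
        self.counts[y] = c

random.seed(0)
stm, ltm = ShortTermStore(), LongTermLearner()
stm.memorise([1.0, 0.0], "new_class")   # one-shot exposure
for x, y in stm.replay(20):             # consolidation phase
    ltm.update(x, y)
print(sorted(ltm.protos))               # long-term memory now holds the new class
```

In a fuller version, replayed episodes would be interleaved with samples of previously learned classes during consolidation; that interleaving is what guards the slow learner against catastrophic forgetting.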
Traditionally in ML, one-shot learning is ephemeral.
A model learns to match exemplars of novel classes.
During this matching there is no permanent weight adaptation, so no knowledge is retained for later cognition or inference.
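Matching-based one-shot learning of this kind can be sketched minimally: exemplars of novel classes are stored as embeddings, and queries are classified by nearest-neighbour comparison, with no weight updates anywhere. The encoder and the labels here are illustrative assumptions.

```python
import numpy as np

def embed(x):
    # Stand-in for a frozen, pre-trained encoder: just L2-normalise.
    x = np.asarray(x, dtype=float)
    return x / (np.linalg.norm(x) + 1e-8)

# One stored exemplar per novel class (the "support set").
support = {"cat": embed([1.0, 0.1]),
           "dog": embed([0.1, 1.0])}

def classify(query):
    q = embed(query)
    # Cosine similarity to each stored exemplar; pick the best match.
    # No parameters change -- the "learning" lives only in the store.
    return max(support, key=lambda lbl: float(q @ support[lbl]))

print(classify([0.9, 0.2]))  # closest exemplar is "cat"
```

Because nothing is written into the model's weights, discarding the support set erases the new classes entirely; this is the ephemerality the thread contrasts with consolidation.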
Here we showed learning for the long term.
AHA is based on CLS (Complementary Learning Systems, by McClelland and O’Reilly). To our knowledge, CLS has not previously been used for consolidation as described in the original theory.
This is a modest step toward continual few-shot learning using hippocampal-style replay, which we’re working on at the moment.