
Reading List

Exciting New Directions in ML/AI

Over the last few years, there have been several breakthroughs and exciting new research directions in Reinforcement Learning, Hippocampus-Inspired Architectures, Attention, and Few-Shot Learning. There has been a move towards multi-component, heterogeneous, stateful architectures, many guided by ideas from the cognitive sciences. Google DeepMind and Google Brain are leading the…

New approaches to Deep Networks – Capsules (Hinton), HTM (Numenta), Sparsey (Neurithmic Systems) and RCN (Vicarious)

Reproduced left to right from [8, 10, 1].

Within a 5-day span in October, four papers came out that take significantly different approaches to hierarchical networks for AI. All are inspired by biological principles to varying degrees. It’s exciting to see different ways of thinking, particularly at a time…

Reading list – October 2017

This month’s reading list has two parts: a non-Reinforcement Learning list and a Reinforcement Learning list. Since our next blog post will be on Reinforcement Learning, readers might like to refer to the RL reading list separately. Non-Reinforcement Learning reading list: A Framework for Searching for General Artificial Intelligence. Authors:…

Reading list – August 2017

1. Neuroscience-inspired Artificial Intelligence. Authors: Demis Hassabis, Dharshan Kumaran, Christopher Summerfield, and Matthew Botvinick. Type: review article in Neuron. Publication date: 19 July 2017. This paper outlined the contribution of neuroscience to the most recent advances in AI and argued that the study of neural computation in humans and other…

Some interesting finds: Acyclic hierarchical modelling and sequence unfolding

This week we have a couple of interesting links to share. Based on our experiments with generative hierarchical models, we claimed that the model produced by feed-forward processing should not have loops. Now we have discovered a paper by Bengio et al. titled “Towards biologically plausible deep learning” [1] that supports this…
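To make the no-loops claim concrete, here is a minimal Python sketch (not from the paper, nor from our own code) that treats a model as a directed graph and tests it for loops via topological sorting. The node names are hypothetical.

```python
# A feed-forward hierarchy can be represented as a directed graph;
# "no loops" means the graph is acyclic, which we can test by asking
# for a topological order. Requires Python 3.9+ for graphlib.
from graphlib import TopologicalSorter, CycleError

def is_acyclic(graph):
    """graph maps each node to the set of its predecessors.
    Returns True if a topological order exists (i.e. no loops)."""
    try:
        tuple(TopologicalSorter(graph).static_order())
        return True
    except CycleError:
        return False

# Hypothetical examples: a valid feed-forward hierarchy and a looped one.
feed_forward = {"hidden1": {"input"}, "hidden2": {"hidden1"}, "output": {"hidden2"}}
looped = {"a": {"b"}, "b": {"a"}}

print(is_acyclic(feed_forward))  # True
print(is_acyclic(looped))        # False
```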

New HTM paper – “Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex”

The artificial neuron model used by Jeff Hawkins and Subutai Ahmad in their new paper (image reproduced from their paper and cropped).

Their neuron model is inspired by the pyramidal cells found in neocortex layers 2/3 and 5. It has been several years since Jeff Hawkins and Numenta published the…
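To give a feel for the “thousands of synapses” idea, here is a minimal Python sketch of a cell with many dendritic segments, each acting as a coincidence detector over a sparse binary activity pattern. The class, sizes, and thresholds below are illustrative assumptions, not the parameters or code from the paper.

```python
import numpy as np

N_CELLS = 2048           # size of the sparse binary input pattern (assumed)
SYNAPSES_PER_SEGMENT = 40
SEGMENT_THRESHOLD = 15   # active synapses needed to fire a segment (assumed)

rng = np.random.default_rng(0)

class Cell:
    """A cell with many independent dendritic segments. Each segment
    samples a random subset of presynaptic cells (illustrative wiring)."""
    def __init__(self, n_segments=20):
        self.segments = [rng.choice(N_CELLS, SYNAPSES_PER_SEGMENT, replace=False)
                         for _ in range(n_segments)]

    def predicted(self, active):
        # The cell is depolarized ("predicted") if ANY single segment
        # sees enough active presynaptic cells at once.
        return any(active[seg].sum() >= SEGMENT_THRESHOLD
                   for seg in self.segments)

# Sparse binary activity: roughly 2% of cells active.
active = np.zeros(N_CELLS, dtype=bool)
active[rng.choice(N_CELLS, 40, replace=False)] = True

cell = Cell()
print(cell.predicted(active))   # almost certainly False for random wiring

# A segment deliberately wired onto the active cells will fire:
cell.segments.append(np.flatnonzero(active)[:SYNAPSES_PER_SEGMENT])
print(cell.predicted(active))   # True
```

The point of the sketch is that detection is segment-local: each segment can respond to a different sparse pattern, so a single cell with many segments can participate in many distinct contexts.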