Deep Learning

Biologically-plausible learning rules for artificial neural networks

Artificial neural networks (ANNs) are conceptually simple; the combination of inputs and weights in a classical ANN can be represented as a single matrix product followed by an elementwise nonlinearity. However, as the number of learned parameters increases, it becomes very difficult to train these networks effectively. Most…
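The layer computation described in the excerpt can be sketched in a few lines of NumPy; the shapes, the bias term, and the choice of ReLU as the nonlinearity are illustrative assumptions, not details from the post:

```python
import numpy as np

# A single classical ANN layer: inputs combined with weights via one
# matrix product, followed by an elementwise nonlinearity.
rng = np.random.default_rng(0)

x = rng.standard_normal(4)       # input vector (4 features)
W = rng.standard_normal((3, 4))  # weight matrix (3 units x 4 inputs)
b = np.zeros(3)                  # bias vector

def relu(z):
    """Elementwise nonlinearity (ReLU, used here as one common example)."""
    return np.maximum(z, 0.0)

h = relu(W @ x + b)              # matrix product, then elementwise nonlinearity
print(h.shape)                   # (3,)
```

Stacking several such layers gives a deep network; the training difficulty the post refers to comes from fitting all the entries of the weight matrices jointly as the layers multiply.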

New approaches to Deep Networks – Capsules (Hinton), HTM (Numenta), Sparsey (Neurithmic Systems) and RCN (Vicarious)

(Figures reproduced left to right from [8, 10, 1].) Within a five-day span in October, four papers came out that take a significantly different approach to AI hierarchical networks. They are all inspired by biological principles to varying degrees. It’s exciting to see different ways of thinking, particularly at a time…

Some interesting finds: Acyclic hierarchical modelling and sequence unfolding

This week we have a couple of interesting links to share. From our experiments with generative hierarchical models, we claimed that the model produced by feed-forward processing should not have loops. Now we have discovered a paper by Bengio et al. titled “Towards biologically plausible deep learning” [1] that supports this…

Reading list – July 2015

This month’s reading list continues with a subtheme on recurrent neural networks, in particular Long Short-Term Memory (LSTM). First, here’s an interesting report on a panel discussion about the future of Deep Learning at the International Conference on Machine Learning (ICML), 2015: http://deeplearning.net/2015/07/13/a-brief-summary-of-the-panel-discussion-at-dl-workshop-icml-2015/ Participants included Yoshua Bengio (University…

Reading List – May 2015

John Lisman, “The Challenge of Understanding the Brain: Where We Stand in 2015”, Neuron, 2015. For many in ML and AI, biological knowledge is focussed on cortex. This paper gives an excellent broad overview of current biological understanding of intelligence. Sebastian Billaudelle and Subutai Ahmad, “Porting HTM Models to the Heidelberg Neuromorphic…

A Unifying View of Deep Networks and Hierarchical Temporal Memory

Browsing the NuPIC Theory mailing list, I came across a post by Fergal Byrne on the differences and similarities between Deep Learning and MPF/HTM. It’s great background on some of the pros and cons of each. Given the popularity and demonstrated success of Deep Learning methods, it’s good to understand…