Optimization using Adam on Sparse Tensors
Adaptive optimization methods, such as Adam and Adagrad, maintain statistics over time about the variables and gradients (e.g. moments) which affect the learning rate. These statistics are not very accurate when working with sparse tensors, where most of the elements are zero or near zero. We investigated the effects…
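The problem is easy to see in the update rule itself. Below is a minimal NumPy sketch (our illustration, not the code from the post) contrasting a dense Adam step, where the moment estimates of every element decay on every step even when its gradient is zero, with a "lazy" variant that only updates the moments of elements that actually received a gradient:

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """Dense Adam: every element's moments decay, even where grad == 0."""
    m[:] = b1 * m + (1 - b1) * grad
    v[:] = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)        # bias correction for step t
    v_hat = v / (1 - b2 ** t)
    param -= lr * m_hat / (np.sqrt(v_hat) + eps)

def lazy_adam_step(param, grads, idx, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """Sparse-aware variant: only the elements listed in idx (those with a
    nonzero gradient) update their moments, so statistics for rarely-active
    elements are not diluted by long runs of zeros."""
    m[idx] = b1 * m[idx] + (1 - b1) * grads
    v[idx] = b2 * v[idx] + (1 - b2) * grads ** 2
    # Using the global step t for bias correction is itself an approximation.
    m_hat = m[idx] / (1 - b1 ** t)
    v_hat = v[idx] / (1 - b2 ** t)
    param[idx] -= lr * m_hat / (np.sqrt(v_hat) + eps)
```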
Sparse Distributed Representations
New approaches to Deep Networks – Capsules (Hinton), HTM (Numenta), Sparsey (Neurithmic Systems) and RCN (Vicarious)
Figure: reproduced left to right from [8,10,1].
Within a 5 day span in October, 4 papers came out that take a significantly different approach to AI hierarchical networks. They are all inspired by biological principles to varying degrees. It’s exciting to see different ways of thinking, particularly at a time…
The Region-Layer: A building block for AGI
Figure 1: The Region-Layer component. The upper surface in the figure is the Region-Layer, which consists of Cells (small rectangles) grouped into Columns. Within each Column, only a few Cells are active at any time. The output of the Region-Layer is the activity of the Cells. Columns in the Region-Layer…
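To make the structure concrete, here is a toy sketch of that layout in NumPy. It is our own illustration under stated assumptions: the scoring rule, the number of winning Columns, and the Cell-selection rule are all placeholders, not the algorithm from the post.

```python
import numpy as np

def region_layer_activity(inputs, weights, cells_per_column, k_active):
    """Toy Region-Layer step: score each Column against the input, keep the
    k_active best-matching Columns, and activate one Cell in each winner.

    inputs:  (n_inputs,) binary input vector
    weights: (n_columns, n_inputs) per-Column input weights
    Returns a (n_columns, cells_per_column) binary matrix of Cell activity,
    which is the layer's output.
    """
    overlap = weights @ inputs                    # one score per Column
    winners = np.argsort(overlap)[-k_active:]     # k_active best Columns
    cells = np.zeros((weights.shape[0], cells_per_column), dtype=int)
    # Placeholder Cell choice: a real implementation would pick the Cell
    # whose temporal context best matches the current sequence.
    cells[winners, 0] = 1
    return cells
```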
Reading list – May 2016
Figure: digit classification error over time in our experiments. The image isn’t very helpful, but it’s a hint as to why we’re excited 🙂
A few weeks ago we paused the “How to build a General Intelligence” series (part 1, part 2, part 3, part 4). We paused it…
SDR-RL (Sparse, Distributed Representation with Reinforcement Learning)
Erik Laukien is back with a demo of Sparse, Distributed Representation with Reinforcement Learning. This topic is of intense interest to us, although the problem is quite a simple one. SDRs are a natural fit with Reinforcement Learning because bits jointly represent a state. If you associate each bit-pattern with…
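One simple way to associate bit-patterns with values is linear function approximation over the active bits: each bit carries a weight, and overlapping SDRs (similar states) automatically share value. The sketch below is our own minimal TD(0) example, not Laukien’s demo:

```python
import numpy as np

class SDRValueFunction:
    """Value estimate for a state encoded as an SDR: the value is the sum of
    weights of the active bits, so overlapping codes generalise to each other."""

    def __init__(self, n_bits, lr=0.1):
        self.w = np.zeros(n_bits)
        self.lr = lr

    def value(self, active_bits):
        return self.w[active_bits].sum()

    def td_update(self, active_bits, reward, next_active_bits, gamma=0.9):
        # One-step TD(0): spread the prediction error over the active bits.
        target = reward + gamma * self.value(next_active_bits)
        error = target - self.value(active_bits)
        self.w[active_bits] += self.lr * error

# Hypothetical usage with hand-picked bit indices:
vf = SDRValueFunction(n_bits=2048)
vf.td_update(active_bits=[3, 99, 512], reward=1.0, next_active_bits=[4, 99, 640])
```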
“Quantum computing” via Sparse distributed coding?
An interesting article by Gerard Rinkus comparing the qualities of sparse distributed representations and quantum computing. In effect, he argues that because distributed representations can simultaneously represent multiple states, you get the same effect as a quantum superposition. The article was originally titled “sparse distributed coding via quantum computing” but…
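The superposition-like behaviour is easy to demonstrate numerically. In this toy example (ours, not from Rinkus’s article), a single code assembled from halves of two stored SDRs overlaps strongly with both of them at once, and barely at all with an unrelated code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 2048, 40                              # code length, active bits per code

def random_sdr():
    return set(rng.choice(n, size=k, replace=False))

stored = {name: random_sdr() for name in ("A", "B", "C")}

# Build one code from half of A's bits and half of B's bits:
query = set(list(stored["A"])[:k // 2] + list(stored["B"])[:k // 2])

for name, code in stored.items():
    # Overlap with A and B is ~k/2 each; overlap with C is near zero.
    print(name, len(query & code))
```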
When is missing data a valid state?
By Gideon Kowadlo, David Rawlinson and Alan Zhang
Can you hear silence or see pitch black? Should we classify no input as a valid state or ignore it? To my knowledge, the machine learning and statistics literature mainly regards an absence of input as missing data. There are several ways…
Sparse Distributed Representations (SDRs)
TL;DR:
- An SDR is a Sparse Distributed Representation, described below
- SDRs are biologically plausible data structures
- SDRs have powerful properties
- SDRs have received a lot of attention recently
- There are a few really great new resources on the topic:
  - Presentation by Subutai Ahmad of Numenta
  - Older introductory presentation by Jeff Hawkins
  - Excellent…
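One of those powerful properties is the union property: many SDRs can be OR-ed into a single code and membership can still be tested by overlap. A quick NumPy illustration (ours, not taken from the linked presentations):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 2048, 40                       # code length, active bits per code

def sdr():
    return frozenset(rng.choice(n, size=k, replace=False))

# OR twenty codes together into one union set.
members = [sdr() for _ in range(20)]
union = set().union(*members)

probe_in = members[0]                 # a stored code
probe_out = sdr()                     # a random, unstored code
print(len(probe_in & union))          # == k: stored codes match the union fully
print(len(probe_out & union))         # partial overlap, well below k
```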
Toward a Universal Cortical Algorithm: Examining Hierarchical Temporal Memory in Light of Frontal Cortical Function
This post is about a fantastic new paper by Michael R. Ferrier, titled: Toward a Universal Cortical Algorithm: Examining Hierarchical Temporal Memory in Light of Frontal Cortical Function. The paper was posted to the NUPIC mailing list and can be found via: http://numenta.org/community-content.html The paper itself…
TP 2/3: Jeff’s new Temporal Pooler
By David Rawlinson and Gideon Kowadlo
This is article 2 in a 3-part series about Temporal Pooling (TP) in MPF/CLA-like algorithms. You can read part 1 here; for the rest of this article we will assume you’ve read it. This article is about the new TP…