Over the last few years, there have been several breakthroughs and exciting new research directions in Reinforcement Learning, Hippocampus-Inspired Architectures, Attention, and Few-Shot Learning. There has been a move towards multi-component, heterogeneous, stateful architectures, many guided by ideas from the cognitive sciences. Google DeepMind and Google Brain are leading the… Read More »Exciting New Directions in ML/AI
This is the second part of our comparison between convolutional competitive learning and convolutional or fully-connected sparse autoencoders. To understand our motivation for this comparison, have a look at the first article. We decided to compare two specific algorithms that tick most of the boxes on our list of required features: K-Sparse autoencoders, and… Read More »Convolutional Competitive Learning vs. Sparse Autoencoders (2/2)
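The defining constraint of a K-Sparse autoencoder is simple: after the encoder computes hidden activations, only the k largest activations per sample are kept and the rest are zeroed before decoding. A minimal NumPy sketch of that top-k step (the function name and shapes are our own, for illustration):

```python
import numpy as np

def k_sparse(h, k):
    """Keep the k largest activations in each row of h; zero the rest.

    h: (batch, hidden) array of encoder activations.
    Returns a new array; h itself is not modified.
    """
    # Indices of all but the k largest entries per row
    losers = np.argsort(h, axis=1)[:, :-k]
    out = h.copy()
    np.put_along_axis(out, losers, 0.0, axis=1)
    return out

h = np.array([[0.1, 0.5, 0.3],
              [0.9, 0.2, 0.4]])
k_sparse(h, 1)  # only the single largest activation per row survives
```

In a full K-Sparse autoencoder this operation sits between the encoder and decoder, and gradients flow only through the surviving units.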
Competitive learning is a branch of unsupervised learning that was popular a long, long time ago in the 1990s. Older readers may remember – the days before widespread use of GSM mobile phones and before Google won the search engine wars! Although competitive learning is now rarely used, it is worth… Read More »Convolutional Competitive Learning vs. Sparse Autoencoders (1/2)
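At its core, classic competitive learning is a winner-take-all scheme: the unit whose weight vector is closest to the input wins, and only the winner's weights move toward that input. A minimal NumPy sketch (function name and learning rate are illustrative, not from any particular paper):

```python
import numpy as np

def competitive_update(W, x, lr=0.1):
    """One winner-take-all update.

    W: (units, dims) weight matrix, modified in place.
    x: (dims,) input vector.
    Returns the index of the winning unit.
    """
    # Winner = unit with the smallest Euclidean distance to the input
    winner = int(np.argmin(np.linalg.norm(W - x, axis=1)))
    # Only the winner learns: move its weights toward the input
    W[winner] += lr * (x - W[winner])
    return winner

W = np.array([[0.0, 0.0],
              [1.0, 1.0]])
competitive_update(W, np.array([0.9, 1.0]))
```

Repeated over a dataset, each unit's weight vector drifts toward the centroid of the inputs it wins, so the units come to tile the input distribution.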
Eager Execution is an imperative, object-oriented, and more Pythonic way of using TensorFlow. It is a flexible machine learning platform for research and experimentation where operations are evaluated immediately and return concrete values, instead of constructing a computational graph that is executed later.
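The difference between the two modes can be illustrated without TensorFlow at all. In this conceptual sketch (plain Python, not the TensorFlow API), a graph-style op returns a deferred node that computes nothing until it is "run", while an eager-style op returns a concrete value immediately:

```python
def graph_add(a, b):
    # Graph style: building the op returns a thunk, not a value.
    # Nothing is computed until the node is explicitly run.
    return lambda: a + b

def eager_add(a, b):
    # Eager style: the op executes immediately and returns a concrete value.
    return a + b

node = graph_add(2, 3)          # a deferred node, not yet the number 5
result_graph = node()           # "running the graph" produces 5
result_eager = eager_add(2, 3)  # 5, immediately
```

Eager mode trades some whole-graph optimisation opportunities for immediate feedback and ordinary Python debugging, which is exactly the research-and-experimentation use case the excerpt describes.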
We’ve just uploaded a spin-off research paper to arXiv titled “Sparse Unsupervised Capsules Generalize Better”. So what’s it all about? You may have heard of Capsule Networks already – if not, have a read of one of these blog articles (here, here, here, or here (EM routing)), watch this video,… Read More »Sparse Unsupervised Capsules Generalize Better
The dataset is an integral part of an ML engineer’s toolkit. We recently compiled useful information about a range of these well-known datasets. It’s all in one place, and hopefully useful to others as well.
Today’s Machine Learning has demonstrated unprecedented performance in what seems like every application thrown at it. Almost all of this success has been based on advanced memory systems that can learn to recognise an input based on a large number of training examples. This is equivalent to memory… Read More »The case for Episodic Memory in Machine Learning
2018 is a fresh new year and an exciting milestone for Project AGI. Dave and I have been discussing, dreaming, playing around with and striving towards general purpose AI for over 6 years. It started with musings on the algorithmic underpinnings of consciousness and the nature of intelligence. We quickly… Read More »2018 a Milestone for Project AGI
There are plenty of established machine learning frameworks out there, and new frameworks are popping up frequently to address specific niches. We were interested in examining whether one of these frameworks fits into our workflow. I surveyed the most popular frameworks and aim to provide a helpful comparative analysis.
SVHN is a relatively new and popular dataset, a natural next step from MNIST and a complement to other popular computer vision datasets. This is an overview of the common preprocessing techniques and the best performance benchmarks, as well as a look at the state-of-the-art neural network architectures used.