“The Arcade Learning Environment (ALE) is a simple object-oriented framework that allows researchers and hobbyists to develop AI agents for Atari 2600 games. It is built on top of the Atari 2600 emulator Stella and separates the details of emulation from agent design.”
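That separation between emulation and agent design comes down to a small action/reward loop: the agent queries legal actions, picks one, and receives a reward, without ever touching emulator internals. The sketch below is a toy illustration of that loop, not ALE's actual API — `ToyEnvironment`, its reward rule, and the method names are all hypothetical stand-ins.

```python
class ToyEnvironment:
    """Hypothetical stand-in for an ALE-style environment.

    ALE exposes a loop of the same shape: query the legal
    actions, apply one, receive a reward, check for game over.
    """

    def __init__(self, steps=10):
        self.steps_left = steps

    def legal_actions(self):
        return [0, 1, 2]  # e.g. noop, left, right (illustrative)

    def act(self, action):
        # Toy reward rule: action 2 scores a point, others don't.
        self.steps_left -= 1
        return 1 if action == 2 else 0

    def game_over(self):
        return self.steps_left <= 0


def run_episode(env, policy):
    """Run one episode; the agent sees only actions and rewards."""
    total = 0
    while not env.game_over():
        action = policy(env.legal_actions())
        total += env.act(action)
    return total


# A trivial policy that always picks the last legal action.
print(run_episode(ToyEnvironment(), lambda actions: actions[-1]))  # → 10
```

The point of the design is that `run_episode` works for any environment with this interface: swapping Pong for Breakout changes nothing on the agent side.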
Why are old computer games good tests for general intelligence? Unlike board games, they can’t be described with a compact set of rules; an agent has to learn how a game works by playing it, which is an acquired, general skill. The best chess software, by contrast, relies on heuristics and rules hand-written by humans. The variety of arcade gameplay also ensures that algorithms aren’t over-tailored to one type of problem: some games are strategic, others purely reactive.
Here’s an article on FiveThirtyEight (538.com) that discusses the varying difficulty and relevance of training general-purpose artificial-intelligence algorithms on older computer games versus classical board games:
The 538 article also discusses the issue of imperfect and incomplete information. In board games such as chess, the entire state of the game is usually visible to both players. In arcade games, however, graphical or design choices often make total knowledge of the game state impossible (e.g. events can occur out of sight).
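A standard trick for coping with this partial observability — DeepMind’s work on these games does something similar — is to stack the last few screen frames, so the agent can infer motion (say, a ball’s direction) that no single frame reveals. A minimal sketch, using strings as stand-in frames; the class name and stack depth are illustrative, not from ALE:

```python
from collections import deque


class FrameStack:
    """Keep the k most recent observations so an agent can infer
    motion that a single frame cannot show."""

    def __init__(self, k=4):
        self.k = k
        self.frames = deque(maxlen=k)

    def push(self, frame):
        if not self.frames:
            # Before any history exists, pad by repeating the first frame.
            self.frames.extend([frame] * self.k)
        else:
            self.frames.append(frame)

    def observation(self):
        return list(self.frames)  # oldest first


stack = FrameStack(k=4)
for t in range(6):
    stack.push(f"frame-{t}")
print(stack.observation())  # → ['frame-2', 'frame-3', 'frame-4', 'frame-5']
```

The agent then treats the whole stack as one observation, which turns a partially observed single frame into something closer to a fully observed state.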
Here’s a link to the developers’ paper, which describes how to connect to the environment and the visual encoding of game screens:
If you can write an algorithm that does well on a range of these games, you’re in a very good position to advance the state of the art in artificial general intelligence (AGI). The response to DeepMind’s paper, which also uses the Arcade Learning Environment, demonstrates this well.