Knowledge published on the internet is free, but in the ocean of opinions and points of view there is no clear chronological pathway for learning about the history of AI in games.

Here we will introduce some basic technical instructions required to begin training artificially intelligent agents with games, before discussing how this could be used in potential applications. Games and gameplay are an integral part of the history and development of Artificial Intelligence (AI), and they provide mathematically programmable environments in which AI programs can be developed. As part of this article, we will provide a tutorial on how you can begin experimenting with training your own artificially intelligent system on your computer using some open-source arcade games.

OpenAI Gym

The long history of gameplay between humans and artificially intelligent computers demonstrates that games provide a reliable framework for computer scientists to test computational intelligence against human players. But games also provide an environment in which machine learning systems can develop intelligence by repeatedly playing and interacting with a game on their own. In 2015, Elon Musk and Sam Altman formed an organization to develop machine learning algorithms through the use of games. Their organization, OpenAI, has launched a platform for developing machine learning algorithms with arcade games. These serve as simulated environments in which AI models can improve, and through which researchers can develop ways for AI systems to interact 'in the real world'. These systems make use of image recognition so that the AI agent can interpret the pixels on the screen and interact with the game. Rather than identifying objects the way Imagga's image recognition API does, the OpenAI Gym modules use computer vision libraries (such as OpenCV) to interpret key elements such as the player's score or incoming pixels. In 2015 a series of videos was published showing how neural networks could be used to train an AI agent to successfully play a number of classic arcade games, including an agent that learned to complete Mario in under 24 hours. This has led to hundreds of similar attempts to train AI on Super Mario Bros, many of which are continuously live-streamed.

The picture above displays a breakdown, with an explainer from creator Seth Bling, of how an AI agent learns to play Mario through a combination of computer vision, neural networks, and machine learning. The box on the left shows the computer-vision interpretation of the pixels in the game, while the red and green lines indicate the neural network weights assigned to different controller inputs (A, B, X, Y, Z, etc.). The bar at the top shows the status of Mario and the AI agent: 'fitness' refers to how far the agent has advanced through the game, while 'gen', 'species', and 'genome' all indicate the progress of the neural networks calibrating toward the best possible Mario player. People are building on Seth Bling's original code and competing to see who can engineer an AI agent that completes Mario in the quickest time.
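The generation/species/fitness loop described above comes from neuroevolution (Seth Bling's MarI/O uses the NEAT algorithm). The sketch below is a heavily simplified illustration of the idea, not his actual implementation: each "genome" is a set of controller weights, a fitness function scores it, the fittest half of each generation survives, and mutated copies form the next generation. The target weights and fitness rule here are made up purely for demonstration.

```python
import random

def fitness(genome):
    # Toy stand-in for "how far Mario got": genomes whose weights are
    # closer to a hidden target controller score higher (closer to 0).
    target = [0.5, -0.2, 0.9]
    return -sum((w - t) ** 2 for w, t in zip(genome, target))

def evolve(generations=30, pop_size=20, seed=0):
    rng = random.Random(seed)
    # Start with a population of random controller weights.
    population = [[rng.uniform(-1, 1) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]           # keep the fittest half
        children = [[w + rng.gauss(0, 0.1) for w in g]   # mutated copies
                    for g in survivors]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
```

Because the fittest genomes always survive into the next generation, the best fitness never decreases; real NEAT goes further and also evolves the network's topology, not just its weights.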

What is the benefit of an AI agent that can complete Super Mario in less than 24 hours?

The use of games to train AI agents is most obvious in developing software for self-driving cars; one small team even set out to turn GTA into a machine learning environment after removing all the violence from the game. Games offer a framework for an AI agent to initiate and direct behavior: through gameplay, AI agents can develop strategies for interacting with different environments, which can then be applied to real-world situations. But before we get too far ahead, let's look at building one simple AI agent that learns the classic arcade game Pong. In this tutorial, we are going to take a look at how to get started running your own machine learning algorithm on Pong!

For this tutorial, we are going to be working with OpenAI's 'Gym' module, an open-source tool that provides a library for running your own reinforcement learning algorithms. It has good online support documentation and the biggest repository of classic arcade games, so if Pong is not your game and you are more of a Space Invaders or Pac-Man fan, getting that set up once you have Gym installed is just as easy.

Pong Tutorial

For this tutorial, you will need to be able to use the command-line interface (terminal) to install packages, and to execute small Python scripts.

To get started, you'll need to have Python 3.5+ installed. If you have a package manager such as Homebrew (brew.sh), you can install Python 3 with the command:

brew install python3

Simply install gym using pip:

pip install gym

Once Gym is installed, we will install all the environments so we can load them into our Python scripts later on. cd into the gym directory and run:

cd gym
pip install -e '.[all]'

To run a simple test, you can run a bare-bones instance copied from the Gym website. Save the code below in a file with the .py extension:

import gym
env = gym.make('CartPole-v0')
env.reset()
for _ in range(1000):
    env.render()
    env.step(env.action_space.sample())  # take a random action

To keep things organized, place the file in your gym directory and, from the terminal, run:

python yourfile.py

Atari

Now that we have a demo up and running, it's time to move on to some classic arcade games. The script is quite similar; just make sure to change the ROM we are loading when we call:

gym.make('arcade_game')
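Swapping games is just a matter of passing a different environment ID string. As a small illustration, a lookup table can map friendly game names to the IDs used by the classic Gym releases this tutorial targets; note that the exact version suffix ('-v0' here) depends on your installed release, so check Gym's environment list if an ID is not found.

```python
# Environment IDs from the classic Gym Atari collection; the version
# suffix ('-v0', '-v4', ...) varies between gym releases.
ATARI_ENVS = {
    'pong': 'Pong-v0',
    'space invaders': 'SpaceInvaders-v0',
    'ms pacman': 'MsPacman-v0',
    'breakout': 'Breakout-v0',
}

def env_id(game):
    """Look up the Gym environment ID for a game name."""
    return ATARI_ENVS[game.lower()]
```

You would then write, for example, gym.make(env_id('space invaders')) instead of hard-coding the string.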

In a new text document, place the following:

import gym
env = gym.make('Pong-v0')
env.reset()
env.render()

This will generate an instance of Pong in a viewing window. But wait… nothing is happening? That is because we have only rendered an instance of the game without giving the script any instructions for how we want the computer to interact with it. For example, we could tell the computer to perform a random action 1,000 times:

import gym
env = gym.make('Pong-v0')
env.reset()
for _ in range(1000):
    env.render()
    env.step(env.action_space.sample())  # take a random action

The script above tells the application to take 1,000 random actions, so you might see the most chaotic, fastest version of Pong you have ever witnessed. That is because our script performs each random action without processing or storing its result, playing like an unthinking machine rather than a learning system. To create an algorithm that learns, we need to respond to the environment in different ways and program aims and objectives for our agent. This is where things get a little more advanced. For now, try playing around with different games by checking out the list of environments, and begin exploring the sample code to get the computer to play the game in different ways. This is the basic framework we will use to build more advanced interactions with arcade games in the next tutorial.
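To make "processing and storing the result" concrete, here is a minimal sketch of that observe-act loop. A stub class stands in for Gym's Pong (so the example runs anywhere, with no Gym install); it exposes the same reset()/step() shape, where step() returns an (observation, reward, done, info) tuple, and the script accumulates the rewards instead of discarding them. The reward rule and episode length are invented for illustration only.

```python
import random

class StubEnv:
    """Tiny stand-in for a Gym environment: same reset()/step() shape."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.steps = 0

    def reset(self):
        self.steps = 0
        return 0.0  # initial observation

    def step(self, action):
        self.steps += 1
        observation = self.rng.uniform(-1, 1)  # next thing the agent "sees"
        reward = 1.0 if action == 1 else 0.0   # toy reward rule
        done = self.steps >= 10                # episode ends after 10 steps
        return observation, reward, done, {}   # same tuple Gym returns

rng = random.Random(1)
env = StubEnv()
observation = env.reset()
total_reward = 0.0
for _ in range(100):
    action = rng.choice([0, 1])               # still a random policy
    observation, reward, done, info = env.step(action)
    total_reward += reward                    # store results instead of discarding
    if done:
        observation = env.reset()             # start a new episode
```

A learning algorithm would use the stored observations and rewards to prefer actions that earned more reward; that feedback loop is exactly what reinforcement learning adds on top of this skeleton.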

In the next tutorial, we will look at implementing AI agents to play the games and developing our reinforcement learning algorithms so that they begin to become machine learning applications. Share what you want to see in the comments section below.

Machine Learning Toolkits 

https://github.com/openai/gym

https://blog.openai.com/universe/

https://experiments.withgoogle.com/ai

https://teachablemachine.withgoogle.com/

https://ml4a.github.io/

https://github.com/tabacof/adversarial