Artificial Intelligence Takes Another Step Forward


Practically as soon as computers were invented, programmers and computer engineers started trying to come up with programs and machines that could compete with humans in traditional measures of intelligence. Chess was an obvious choice for these challenges, and now the latest wave of advances in artificial intelligence is happening with another classic game – Go.

A History of Computers Battling Humans at Chess. Artificial intelligence work on chess began in the 1960s. By the late 1980s, the top chess computers could beat strong chess players on a consistent basis. In 1997, IBM's "Deep Blue" computer defeated World Chess Champion Garry Kasparov. By the mid-2000s, commercial chess programs running on home computers became capable of beating all but the strongest players.

Taking on the Challenge of Go. The ancient game of Go poses an even harder challenge for artificial intelligence than chess. The game itself is thought to be about 2,500 years old and is played with black and white stones on a 19×19 grid of lines. Despite a relatively simple set of rules, Go is significantly more complex, and more difficult to master, than chess.

Google's DeepMind team created a program called AlphaGo, which recently played top Go player Lee Se-dol in a five-game match. AlphaGo played solidly and consistently, winning the first three games in a row to clinch the match. Prior to the matches, many experts and commentators in the Go community had expected Lee to win.

A Newer Approach to AI. It's important to distinguish between the kind of "artificial intelligence" we've seen in chess programs and what was recently on display with AlphaGo. Chess programs tend to win by brute force, using raw computational power to calculate the relative strength of millions of different positions and make their moves accordingly.
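
To make the "brute force" idea concrete, here is a minimal sketch of the kind of minimax search classic chess engines are built around (real engines add refinements such as alpha-beta pruning and carefully tuned evaluation functions). The game interface used here (legal_moves, play, evaluate) is a hypothetical placeholder, not Deep Blue's actual code.

```python
# Illustrative sketch only: a tiny brute-force minimax search of the kind
# classic chess engines refine. The game interface (legal_moves, play,
# evaluate) is hypothetical, not any real engine's API.

def minimax(state, depth, maximizing):
    """Score a position by exhaustively searching `depth` moves ahead."""
    if depth == 0 or not state.legal_moves():
        return state.evaluate()          # static score, e.g. material balance
    scores = (
        minimax(state.play(m), depth - 1, not maximizing)
        for m in state.legal_moves()
    )
    return max(scores) if maximizing else min(scores)

def best_move(state, depth=4):
    """Pick the move whose resulting position scores best for the side to move."""
    return max(
        state.legal_moves(),
        key=lambda m: minimax(state.play(m), depth - 1, maximizing=False),
    )
```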

Go, on the other hand, is vastly more complex in terms of the number of possible board positions, so a brute-force approach to game strategy is simply not feasible. Even the most powerful computers in the world could not sort through the possibilities; the number of legal Go positions is often said to exceed the number of atoms in the observable universe.
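
A rough back-of-the-envelope calculation shows the problem. Chess positions average roughly 35 legal moves and Go positions roughly 250 (commonly cited figures, used here purely for illustration), so the search tree explodes far faster:

```python
# Rough illustration of the branching-factor gap (commonly cited averages:
# ~35 legal moves per chess position, ~250 per Go position).
CHESS_BRANCHING = 35
GO_BRANCHING = 250

for plies in (4, 8, 12):
    chess_nodes = CHESS_BRANCHING ** plies
    go_nodes = GO_BRANCHING ** plies
    print(f"{plies} plies ahead: chess ~{chess_nodes:.1e} positions, "
          f"Go ~{go_nodes:.1e} positions ({go_nodes / chess_nodes:.0e}x more)")
```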

AlphaGo achieved its "expertise" by first learning patterns of successful play from human games, then playing millions upon millions of games against itself and discerning additional patterns and techniques that lead to favorable outcomes.

Artificial Intelligence for the Classic Game of Go. This new Go-playing computer is a great example of the newer approaches to AI. Rather than rely solely on computational power, the programming team trained the computer on a huge collection of expert moves (an estimated 30 million!) and then let it play against itself and "learn" based on the outcomes of those games.
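
In broad strokes, that self-play loop might look something like the sketch below. This is a heavily simplified illustration under loose assumptions, not AlphaGo's actual design (which pairs deep neural networks with Monte Carlo tree search); every object and function name here is hypothetical.

```python
# Heavily simplified self-play training loop, for illustration only.
# The policy and game objects and their methods are hypothetical stand-ins,
# not AlphaGo's real components.

def train_by_self_play(policy, num_games=1_000_000):
    """Improve a move-selection policy by repeatedly playing against itself."""
    for _ in range(num_games):
        game = new_game()                      # fresh empty 19x19 board
        history = []
        while not game.is_over():
            move = policy.choose_move(game)    # pick a move from learned patterns
            history.append((game.snapshot(), move))
            game.play(move)
        winner = game.winner()
        # Reinforce the winning side's moves, discourage the losing side's.
        for position, move in history:
            policy.update(position, move, won=(position.side_to_move == winner))
    return policy
```

The key design idea is that the training signal comes from the outcomes of the games themselves rather than from a hand-written evaluation function.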

It’s not clear how this technique may be adapted to other applications, but the results are sure to be exciting!
