Go engine

The challenge is daunting. In 1994, machines took the checkers crown when a program called Chinook beat the top human. Then, three years later, they topped the chess world, IBM’s Deep Blue supercomputer besting world champion Garry Kasparov. Now, computers match or surpass top humans in a wide variety of games: Othello, Scrabble, backgammon, poker, even Jeopardy. But not Go. It’s the one classic game where wetware still dominates hardware.

An interview with Martin Müller

David Ormerod: To start with, please tell us a bit about yourself and your research interests. How did you learn of Go and how did you become involved in computer Go?

Martin Müller: I am a professor in the Department of Computing Science at the University of Alberta in Edmonton, Canada.

My research interests are in heuristic search, studying how to solve large, complex problems using computer searches.

The main application areas studied in my research group are games such as Go, and automated planning.

In recent years, Monte Carlo search methods have been our main focus – both for games and for planning. As part of my game-related activities, I am the leader of the team developing the open source software Fuego, which was the first program to defeat a top professional in an even game on 9×9.

I learned Go when I was 15 years old and played a lot in my teens and early twenties. I am a 5, 6 or 7 dan amateur player, depending on the country. My biggest success was probably taking 2nd place at the US Go Congress open tournament in 1998.

I became interested in computer Go as an undergraduate in my home country of Austria, through my supervisor. This was around 1985. I have stayed with the topic ever since, doing a Diploma thesis, a PhD and a few postdocs, before getting my current job.

What’s Monte Carlo?

Most people with any interest at all in computer Go know that the strongest programs these days use a ‘Monte Carlo’ algorithm, but many people don’t know much more about it than that.

Could you briefly explain where the term Monte Carlo came from and what it means in this context?

The term Monte Carlo refers to an affluent suburb of Monaco which is famous for its Casino. Monte Carlo methods use statistics collected from randomized simulations as a way to analyze complex systems which are too hard to ‘solve’ by other means.

They were first developed for nuclear physics and atomic bomb research in the 1940s. Nowadays they are very widely used, but their application to games such as Go took off just a few years ago.

Now that computers are powerful enough, Monte Carlo methods are used across a wide variety of disciplines.

For example, I’ve used them at work to help with risk analysis. It’s often difficult to explain to people why this approach works though, because it seems so counterintuitive at first.

Do you have a good analogy to explain how a large enough number of random simulations can provide a useful answer to a question?

Statistical sampling, which is at the core of Monte Carlo methods, is a very powerful technique.

For example, think about opinion polls. Any single random person who you ask about their opinion may be completely crazy, but if you ask one thousand people, who are carefully selected to represent the overall population, then you get quite a good idea of the general mood and can use that to make informed decisions.

This is why we keep getting more and more of those pesky phone calls doing surveys at dinner time!
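Monte Carlo methods are easy to demonstrate in a few lines of code. Here is a minimal Python sketch of the polling analogy above; the population’s 54% support figure is invented purely for illustration.

```python
import random

# A toy version of the opinion-poll analogy (numbers invented for
# illustration): 54% of a large population favours some proposal.
TRUE_SUPPORT = 0.54

def ask_random_person() -> bool:
    """Simulate asking one randomly chosen person; True means 'in favour'."""
    return random.random() < TRUE_SUPPORT

# Any single answer tells us almost nothing about the population...
print("one person:", ask_random_person())

# ...but averaging a sample of 1,000 answers estimates 54% quite closely.
sample = [ask_random_person() for _ in range(1000)]
print(f"poll of 1000: {sum(sample) / len(sample):.1%} (true value 54%)")
```

A single answer is essentially noise, but the 1,000-person average will typically land within a couple of percentage points of the true value, which is exactly why pollsters (and Monte Carlo Go programs) rely on large samples.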

How computer Go programs improved

It’s been more than five years since UCT (an extension of Monte Carlo search) was first applied to Go, but the strongest programs were still at the kyu level not that long ago (at least on 19×19 boards).

In contrast, the strongest programs these days are dan level and they seem relatively sharp, even in tactical situations.

To what extent do they make use of heuristics for shape, tesuji, life and death, the opening and so on?

Many programs use learned local patterns, such as 3×3 patterns for simple shape, and they modify the playouts to avoid some bad tactical moves.

Also, when there is a single important fight going on, the full board search will be able to analyze it quite deeply, and do well in the tactics. The problems start when there are several fights going on at the same time.

For the opening, some programs simply use large scale patterns to imitate popular openings played by human experts. But usually those are not absolute rules. These moves simply get a bonus, but the search can override them. So it is better than the hard coded ‘expert systems’ of the 1980s.

What other changes and improvements have helped computers get to their current mid-dan level on larger boards since then?

I think many factors are involved. Better patterns and rules as above, better search, better parallel scaling, several years of testing, debugging and tuning the programs, and better hardware all help.
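For readers curious how a ‘bonus that the search can override’ fits into Monte Carlo tree search, below is a rough Python sketch of the UCT selection rule with a decaying pattern-based prior. The Node class, the decay schedule and the constants are invented for illustration; this is not the actual code of Fuego or any other program mentioned here.

```python
import math

class Node:
    """One node of a Monte Carlo search tree (illustrative only)."""
    def __init__(self, prior_bonus=0.0):
        self.visits = 0
        self.wins = 0
        self.prior_bonus = prior_bonus  # e.g. from a learned 3x3 pattern
        self.children = {}              # move -> Node

def uct_score(parent, child, c=1.4):
    """UCB1-style value: average result plus an exploration term plus a
    pattern bonus that decays with visits, so heuristic knowledge biases
    the search early on but simulation evidence overrides it later."""
    if child.visits == 0:
        return float("inf")  # always try unvisited moves once
    exploit = child.wins / child.visits
    explore = c * math.sqrt(math.log(parent.visits) / child.visits)
    return exploit + explore + child.prior_bonus / (1 + child.visits)

def select_move(node):
    """Descend the tree by picking the child with the best UCT score."""
    return max(node.children.items(),
               key=lambda item: uct_score(node, item[1]))[0]
```

Because the bonus term shrinks as a move accumulates visits, heuristic knowledge guides the search while statistics are scarce, and simulation results dominate once the tree has explored the move thoroughly.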

What are the pros and cons of combining a knowledge-based approach with a Monte Carlo approach?

Crazy Stone is a program that plays the game of Go (Weiqi, Baduk), written by Rémi Coulom.

It was one of the first computer Go programs to use a modern variant of Monte Carlo tree search. In January 2012, Crazy Stone was rated 5 dan on KGS; in March 2014, 6 dan.

Coulom began writing Crazy Stone in July 2005, incorporating the Monte Carlo algorithm in its design from the outset. Early versions were initially available to download as freeware from his website, though they no longer are.[2] Pattern recognition and searching were added in 2006, and later that year Crazy Stone took part in its first tournament, winning a gold medal in the 9×9 competition at the 11th Computer Olympiad.[2] Coulom entered the program in the 12th Computer Olympiad the following year, winning bronze in the 9×9 and silver in the 19×19 competitions.

However, Crazy Stone’s most significant accomplishment was to defeat Kaori Aoba, a professional Japanese 4 dan, in an 8-stone handicap match in 2008. In doing so, the engine became the first to officially defeat an active professional in Japan with a handicap of less than nine stones. Three months later, on 12 December 2008, Crazy Stone defeated Aoba again in a 7-stone match.[3]

In March 2013, Crazy Stone beat Yoshio Ishida, Japanese honorary 9-dan, in a 19×19 game with four handicap stones.[4]

On March 21, 2014, at the second annual Densei-sen competition, Crazy Stone defeated Norimoto Yoda, Japanese professional 9-dan, in a 19×19 game with four handicap stones by a margin of 2.5 points.

Crazy Stone computer Go program defeats Ishida Yoshio 9 dan with 4 stones

Crazy Stone, a computer Go program by Rémi Coulom, defeated Ishida Yoshio 9p with a four stone handicap, as part of the inaugural Densei-sen at the 6th Computer Go UEC Cup in Japan (March 20, 2013).

The Computer vs the computer

It was an ironic showdown between the computer and ‘The Computer’.

Ishida was nicknamed ‘The Computer’ in his prime, because of the accuracy of his counting and endgame skills.

Ishida Yoshio

Born in 1948, Ishida is now 64 years old.

However, back in the 70s, Ishida won the prestigious Honinbo title for an impressive five consecutive years, making him one of the top players of that era.

After the game, Ishida said that he thought the program was a ‘genius’ and marvelled at the calmness and flexibility of its moves.


Zen is a strong Go engine by the Japanese programmer Yoji Ojima (cluster parallelism was added by Hideki Kato). On KGS, several bots run the engine, maintaining ranks between 3d and 5d: Zen19, Zen19b, Zen19D and Zen19n. Zen was the first bot to hold a KGS 3d rating for more than 20 rated games in a row (having earlier been the first to do the same at 1d and 2d), and a blitz version appeared to be holding a 5 dan rating in 2011. Hardware used to run Zen19 on KGS: Mac Pro 8 core, Xeon 2.26 GHz.

It won the 2009 Computer Olympiad in Pamplona, Spain, running on the slowest hardware among the competitors. It also won the 2011 Olympiad in Tilburg.

Zen was released commercially under the name Tencho no Igo (Zenith Go) on September 18, 2009. Version 2 was released on August 27, 2010, and version 3 on September 30, 2011. Website for the software (Japanese): http://soft.mycom.co.jp/pcigo/tencho3/index.html

See latest go software updates for current version information.


In 2011, several experimental versions of Zen started playing on KGS:

Name     Rating   Time                               Hardware                          KGS Archive
Zen19N   4D       20 minutes + 30 seconds byo-yomi   Mac Pro 8 cores, Xeon 2.26 GHz    Zen19N
Zen19B   5D       15 seconds per move                Mac Pro 8 cores, Xeon 2.26 GHz    Zen19B
Zen19D   6D       15 seconds per move                Mini-cluster of 6 PCs             Zen19D
Zen19S   5D       20 minutes + 30 seconds byo-yomi   Mini-cluster of 6 PCs             Zen19S
Zen19    5D       15 seconds per move                                                  Zen19

The only version active in 2014 has been Zen19S.

Solved game

A solved game is a game whose outcome (win, lose, or draw) can be correctly predicted from any position, given that both players play perfectly. Games which have not been solved are said to be “unsolved”. Games for which only some positions have been solved are said to be “partially solved”. This article focuses on two-player games that have been solved.

A two-player game can be “solved” on several levels:[1][2]

Ultra-weak

Prove whether the first player will win, lose, or draw from the initial position, given perfect play on both sides. This can be a non-constructive proof (possibly involving a strategy-stealing argument) that need not actually determine any moves of the perfect play.

Weak

Provide an algorithm that secures a win for one player, or a draw for either, against any possible moves by the opponent, from the beginning of the game. That is, produce at least one complete ideal game (all moves start to end) with proof that each move is optimal for the player making it. It does not necessarily mean a computer program using the solution will play optimally against an imperfect opponent. For example, the checkers program Chinook will never turn a drawn position into a losing position (since the weak solution of checkers proves that it is a draw), but it might possibly turn a winning position into a drawn position because Chinook does not expect the opponent to play a move that will not win but could possibly lose, and so it does not analyze such moves completely.

Strong

Provide an algorithm that can produce perfect play (moves) from any position, even if mistakes have already been made on one or both sides.

Despite the name, many game theorists believe that “ultra-weak” proofs are the deepest, most interesting and valuable. “Ultra-weak” proofs require a scholar to reason about the abstract properties of the game, and show how these properties lead to certain outcomes if perfect play is realized.[citation needed]

By contrast, “strong” proofs often proceed by brute force — using a computer to exhaustively search a game tree to figure out what would happen if perfect play were realized. The resulting proof gives an optimal strategy for every possible position on the board. However, these proofs aren’t as helpful in understanding deeper reasons why some games are solvable as a draw, and other, seemingly very similar games are solvable as a win.

Given the rules of any two-person game with a finite number of positions, one can always trivially construct a minimax algorithm that would exhaustively traverse the game tree. However, since for many non-trivial games such an algorithm would require an infeasible amount of time to generate a move in a given position, a game is not considered to be solved weakly or strongly unless the algorithm can be run by existing hardware in a reasonable time. Many algorithms rely on a huge pre-generated database, and are effectively nothing more than lookups into it.

As an example of a strong solution, the game of tic-tac-toe is solvable as a draw for both players with perfect play (a result that schoolchildren can even work out by hand). Games like nim also admit a rigorous analysis using combinatorial game theory.
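To make the brute-force idea concrete, here is a short Python sketch that strongly solves tic-tac-toe by exhaustive, memoized minimax; running it confirms the drawn result just mentioned.

```python
from functools import lru_cache

# Strong solving by exhaustive minimax, illustrated on tic-tac-toe.
# A board is a 9-character string of 'X', 'O' or '.'.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def solve(board, player):
    """Return +1 if 'X' wins with perfect play, -1 if 'O' wins, 0 for a draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if '.' not in board:
        return 0  # board full, no winner: draw
    values = []
    for i, cell in enumerate(board):
        if cell == '.':
            nxt = board[:i] + player + board[i+1:]
            values.append(solve(nxt, 'O' if player == 'X' else 'X'))
    return max(values) if player == 'X' else min(values)

print(solve('.' * 9, 'X'))  # prints 0: perfect play is a draw
```

Checkers was solved with the same basic idea of exhaustive analysis, only with vastly more positions and nearly two decades of computation, as described below.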

Whether a game is solved is not necessarily the same as whether it remains interesting for humans to play. Even a strongly solved game can still be interesting if its solution is too complex to be memorized; conversely, a weakly solved game may lose its attraction if the winning strategy is simple enough to remember (e.g. Maharajah and the Sepoys). An ultra-weak solution (e.g. Chomp or Hex on a sufficiently large board) generally does not affect playability.

In non-perfect information games, one also has the notion of essentially weakly solved.[3] A game is said to be essentially weakly solved if a human lifetime of play is not sufficient to establish with statistical significance that the strategy is not an exact solution. As an example, the poker variation heads-up limit Texas hold ‘em has been essentially weakly solved by the poker bot Cepheus.[3][4][5]

Perfect play

In game theory, perfect play is the behavior or strategy of a player that leads to the best possible outcome for that player regardless of the response by the opponent. Based on the rules of a game, every possible final position can be evaluated (as a win, loss or draw). By backward reasoning, one can recursively evaluate a non-final position as identical to that of the position that is one move away and best valued for the player whose move it is. Thus a transition between positions can never result in a better evaluation for the moving player, and a perfect move in a position would be a transition between positions that are equally evaluated. As an example, a perfect player in a drawn position would always get a draw or win, never a loss. If there are multiple options with the same outcome, perfect play is sometimes considered the fastest method leading to a good result, or the slowest method leading to a bad result.

Perfect play can be generalized to non-perfect information games, as the strategy that would guarantee the highest minimal expected outcome regardless of the strategy of the opponent. As an example, the perfect strategy for Rock, Paper, Scissors would be to randomly choose each of the options with equal (1/3) probability. The disadvantage in this example is that this strategy will never exploit non-optimal strategies of the opponent, so the expected outcome of this strategy versus any strategy will always be equal to the minimal expected outcome.
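That claim is easy to check numerically. The short Python sketch below (the opponent mixes are arbitrary examples) shows the uniform 1/3 strategy earning exactly zero in expectation against each of them: that zero is both its guarantee and the reason it never exploits weak play.

```python
# Payoff to the first player in Rock, Paper, Scissors: +1 win, -1 loss, 0 tie.
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(a, b):
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

def expected_payoff(p, q):
    """Expected payoff of mixed strategy p against mixed strategy q."""
    return sum(p[i] * q[j] * payoff(MOVES[i], MOVES[j])
               for i in range(3) for j in range(3))

uniform = [1/3, 1/3, 1/3]
# Against any opponent mix, uniform play earns exactly 0 in expectation.
for q in ([1, 0, 0], [0.5, 0.5, 0], [0.2, 0.3, 0.5]):
    print(q, round(expected_payoff(uniform, q), 12))  # always 0.0
```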

Although the optimal strategy of a game may not (yet) be known, a game-playing computer might still benefit from solutions of the game from certain endgame positions (in the form of endgame tablebases), which will allow it to play perfectly after some point in the game. Computer chess programs are well known for doing this.
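Real chess tablebases are far too large to sketch, but the principle fits in a toy example. The Python sketch below (an invented take-1-to-3 subtraction game where taking the last stone wins) precomputes perfect play for every small position by backward induction and then simply looks the answers up during play, which is the role an endgame tablebase serves.

```python
# Toy "endgame tablebase": the subtraction game where players remove
# 1-3 stones and whoever takes the last stone wins (illustrative only).
MAX_TABLED = 30

# table[n] is True iff the player to move wins with perfect play.
table = {0: False}  # no stones left: the player to move has already lost
for n in range(1, MAX_TABLED + 1):
    # n is a win iff some move reaches a position that loses for the opponent
    table[n] = any(not table[n - k] for k in (1, 2, 3) if k <= n)

def perfect_move(n):
    """Look up a perfect move: leave the opponent a losing position."""
    for k in (1, 2, 3):
        if k <= n and not table[n - k]:
            return k
    return 1  # every move loses; any move will do

print(table[12], perfect_move(7))  # False (12 is lost), 3 (leave 4 stones)
```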

Solved games

Awari (a game of the Mancala family)
The variant of Oware allowing game-ending “grand slams” was strongly solved by Henri Bal and John Romein at the Vrije Universiteit in Amsterdam, Netherlands (2002). Either player can force the game into a draw.
Checkers
See “Draughts, English”
Chopsticks
The second player can always force a win.[6]
Connect Four
Solved first by James D. Allen (Oct 1, 1988), and independently by Victor Allis (Oct 16, 1988).[7] The first player can force a win. Strongly solved by John Tromp’s 8-ply database[8] (Feb 4, 1995). Weakly solved for all board sizes where width + height is at most 15[7] (Feb 18, 2006).
Draughts, English (Checkers)
This 8×8 variant of draughts was weakly solved on April 29, 2007 by the team of Jonathan Schaeffer, known for Chinook, the “World Man-Machine Checkers Champion”. From the standard starting position, both players can guarantee a draw with perfect play.[9] Checkers is the largest game that has been solved to date, with a search space of 5×10^20.[10] The number of calculations involved was 10^14, and they were done over a period of 18 years. The process involved around 200 desktop computers at its peak, scaling down to around 50.[11]

The game of checkers has roughly 500 billion billion possible positions (5 × 10^20). The task of solving the game, determining the final result in a game with no mistakes made by either player, is daunting. Since 1989, almost continuously, dozens of computers have been working on solving checkers, applying state-of-the-art artificial intelligence techniques to the proving process. This paper announces that checkers is now solved: Perfect play by both sides leads to a draw. This is the most challenging popular game to be solved to date, roughly one million times as complex as Connect Four. Artificial intelligence technology has been used to generate strong heuristic-based game-playing programs, such as Deep Blue for chess. Solving a game takes this to the next level by replacing the heuristics with perfection.

Google Hummingbird

Google Hummingbird is a search algorithm used by Google. To celebrate its 15th birthday, on September 27, 2013, Google launched[1] a new “Hummingbird” algorithm,[2] claiming that Google search could interact with users in a more human way and provide more direct answers.[3]

Google said it started using Hummingbird around August 30, 2013,[4] but it only announced the change on September 26.

What type of “new” search activity does Hummingbird help?

“Conversational search” is one of the biggest examples Google gave. People, when speaking searches, may find it more useful to have a conversation.

I thought Google did this conversational search stuff already!

It does (see Google’s Impressive “Conversational Search” Goes Live On Chrome), but until now it had mostly done so within its Knowledge Graph answers. Hummingbird is designed to apply the meaning technology to billions of pages from across the web, in addition to Knowledge Graph facts, which may bring back better results.

How do you know all this stuff?

Google shared some of it at its press event today, and then I talked with two of Google’s top search execs, Amit Singhal and Ben Gomes, after the event for more details. I also hope to do a more formal look at the changes from those conversations in the near future. But for now, hopefully you’ve found this quick FAQ based on those conversations to be helpful.

By the way, another term for the “meaning” connections that Hummingbird does is “entity search,” and we have an entire panel on that at our SMX East search marketing show in New York City, next week. The Coming “Entity Search” Revolution session is part of an entire “Semantic Search” track that also gets into ways search engines are discovering meanings behind words. Learn more about the track and the entire show on the agenda page.

Google zeitgeist

Google, the number one search engine on the planet, has published its comprehensive list of what was on people’s minds during 2012.

The company’s 2012 Zeitgeist (“the defining spirit or mood of a particular period of history as shown by the ideas and beliefs of the time”) is as accurate a reflection of humanity’s current state of mind as we’ve got.

Here’s the overall global ranking (probably censored):

  1. Whitney Houston
  2. Gangnam Style
  3. Hurricane Sandy
  4. iPad 3
  5. Diablo 3
  6. Kate Middleton
  7. Olympics 2012
  8. Amanda Todd
  9. Michael Clarke Duncan
  10. BBB12