The exploitation of this advantage typically manifests as the program growing a huge tree of moves to anticipate what might occur in the game. The computer considers all the moves it might make, then all the replies, and then the moves it might make after those opponent replies. This is generally controlled by minimax search, detailed in earlier articles. As can be imagined, this requires vast numbers of positions to be analysed. In chess, building a tree 8 ply deep (4 white moves + 4 black replies) gives a tree of some 1,000,000,000,000 possible positions. With minimax optimisations applied (alpha-beta), this can reduce to as little as 1,500,000.
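To see where these figures come from, assume an average branching factor of about 35 moves per chess position (a common estimate, not stated above); a perfectly ordered alpha-beta search visits roughly the square root of the full tree:

    BRANCHING = 35          # assumed average moves per chess position
    DEPTH = 8               # 4 white moves + 4 black replies

    full_tree = BRANCHING ** DEPTH          # ~2.3e12, the "1,000,000,000,000"
    alpha_beta = round(full_tree ** 0.5)    # ~1.5e6, the "1,500,000"
    print(full_tree, alpha_beta)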
Given this huge number of possibilities, a natural goal is to find a way of reducing the number of positions that are to be examined. By rejecting many bad moves, the search tree can be radically reduced. This is forward pruning, and it requires a plausibility analysis that allows moves to be assessed for possible rejection.
In chess this technique was initially very popular, but subsequent experience found that chess programs are often better off considering just about all moves, rather than spending a great deal of time examining just a few. Although this outcome was contrary to expectations, it has proved very successful in chess.
However not all games are so easy to analyse. AI Factory has a strong interest in Shogi (Japanese Chess), where examining all moves becomes expensive in terms of computation. In Shogi a position might commonly have 120 legal moves, which for an 8-ply search with perfect alpha-beta gives about 400,000,000 positions, rather than the 1,500,000 for chess.
This is where plausibility analysis offers a solution.
The role of Plausibility Analysis:
There are two issues here. Naturally it makes sense to eliminate moves that are not worth considering, but even then it is necessary to also make sure that the search looks at the remaining moves in the right order. So there are two things to be achieved:
(a) Good move ordering
An alpha-beta optimised minimax search only delivers the optimum tree size if the moves are considered in the right order. Poorly ordered moves rapidly degrade the performance of alpha-beta. A bad move examined last costs nearly as little as not looking at it at all, but searching that same move early costs a great deal, as it will be fully explored: no proven better alternative yet exists to cut it off. Good move ordering can be achieved by using many very simple techniques that depend on probability rather than analytical reasons. For example the killer heuristic simply asserts: "If the move worked at this depth before, it might work again now, even if we have no other reason to try it.". The move is chosen for early examination solely because the same move has worked before.
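A minimal sketch of the killer heuristic (names and the number of slots are illustrative; real implementations vary): remember the last couple of moves that caused a cutoff at each depth and try them first.

    KILLER_SLOTS = 2
    killers = {}   # depth -> moves that previously caused a cutoff there

    def order_moves(moves, depth):
        """Try remembered killers first; keep the rest in their given order."""
        k = killers.get(depth, [])
        return [m for m in moves if m in k] + [m for m in moves if m not in k]

    def record_killer(move, depth):
        """Called when 'move' causes a beta cutoff at 'depth'."""
        k = killers.setdefault(depth, [])
        if move not in k:
            k.insert(0, move)        # most recent killer is tried first
            del k[KILLER_SLOTS:]     # keep only a few slots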
(b) Elimination of moves to examine
If moves can be shown either to be uninteresting or to be obviously bad, then removing them from the list is desirable. This cannot be done simply using techniques such as the killer heuristic, as you have to have a reason to reject a move completely, which inevitably demands some analytical understanding of the move. Some simple techniques are still possible, but they are unsafe and do not perform well. For example, discarding a move because it has never been shown to be good before is possible, but high-risk.
What is Plausibility Analysis?
Plausibility Analysis is used to intelligently assess whether a move is good or bad. Once this technique has been applied to give some value to a move, then the list of moves can be ordered by their plausibility value and searched in that order. The program can then choose, at some point, to stop searching any further moves. This plausibility value will be cheaper to calculate than a whole board evaluation assessment.
The goal therefore is to derive a value for each move. This will probably be some linear evaluation term that simply sums the value of many assessments into a single value.
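A minimal sketch of such a linear sum and its use: the assessment terms passed in are hypothetical stand-ins for whatever a program uses (capture value, killer bonus, threat detection, and so on).

    def plausibility(move, position, terms):
        """Sum many cheap assessment terms into a single ordering value."""
        return sum(term(move, position) for term in terms)

    def moves_to_search(moves, position, terms, limit=20):
        """Order by plausibility; optionally stop after the top few moves."""
        ranked = sorted(moves, key=lambda m: plausibility(m, position, terms),
                        reverse=True)
        return ranked[:limit]   # forward pruning: the rest are never searched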
Considering the criteria for assessing whether moves are good or bad
(1) Simple assessments

A move that captures a big piece using a small piece is very likely to be a good move. So ordering by "value of piece captured" minus "value of capturing piece" is a good start. This can be further enhanced by assessing whether any re-capture is possible. Note that this can also be used to detect moves that are unsafe. However simply re-ordering unsafe moves to the bottom of the list can easily risk overlooking a move that may actually be very good, even if it apparently loses material.
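A minimal sketch of this ordering: moves are ranked by victim value minus attacker value. The Move record and the piece values are illustrative assumptions, not taken from any particular program.

    from collections import namedtuple

    # Illustrative move record: the moving piece and the piece captured (or None).
    Move = namedtuple('Move', ['piece', 'captured'])

    PIECE_VALUE = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9}

    def capture_gain(move):
        """'Value of piece captured' minus 'value of capturing piece'."""
        if move.captured is None:
            return 0
        return PIECE_VALUE[move.captured] - PIECE_VALUE[move.piece]

    # Captures of big pieces by small pieces come first; unsafe captures sink.
    moves = [Move('Q', 'P'), Move('P', 'Q'), Move('N', None)]
    moves.sort(key=capture_gain, reverse=True)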
(2) Making use of probability-based assessments

Although plausibility values will be largely derived analytically, this can also include non-analytical assessment. The killer heuristic (above) is such a case, as is the hash table, which records the move that previously worked best in this position. It is highly likely that a move that worked well with a shallow search will work well with a deeper search. This has no game reasoning other than "try the same thing again".
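A minimal sketch of this hash-table ordering, assuming a position hash is available; the table and function names are illustrative.

    best_move_table = {}   # position hash -> best move previously found there

    def order_with_hash_move(moves, pos_hash):
        """Try the move that worked best in this position first."""
        hint = best_move_table.get(pos_hash)
        if hint in moves:
            return [hint] + [m for m in moves if m != hint]
        return list(moves)

    def record_best(pos_hash, move):
        """Called when the search settles on a best move for this position."""
        best_move_table[pos_hash] = move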
(3) Using assessments based on detailed game knowledge

This can include moves that appear to achieve some positional or tactical goal, such as moving a piece to a position where it threatens the enemy king.
Blending these assessments together
In a games-playing program that uses minimax search, the value of a position is generally assessed by a linear evaluation designed to be as accurate as possible. The goal in plausibility analysis is different. Here the linear evaluation need only provide a value that allows the moves to be correctly ordered; it does not need to measure whether the position is won or lost. The value from a particular plausibility assessment is only used to rank the move in the list.
This aspect is very important, as it gives plausibility analysis the freedom to assign values to attributes that would otherwise be difficult to assess exactly. This makes the assessment easier, as it is not constrained to delivering an accurate absolute value.
Plausibility: combining multiple independent assessments of which moves are best
The fault of simplified plausibility assessment is that any one set of criteria may only be 90% reliable in catching the best move among the top few. However a second set of independent and unrelated criteria may have the same error rate, but miss different moves. If each set of criteria return their top few "best moves" into a list then the risk of missing a best move is greatly reduced. The top few moves in the plausibility list may therefore have moves voted near the top for completely different reasons. Move 1 may be from assessing capture value, move 2 because of a probability assessment (e.g. a killer move), which receives a high rating simply because "it worked before".
This blending of multiple criteria combines well to provide a list of moves that has a high chance of placing the correct move near the top of the list.
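One way to realise this voting, sketched under the assumption that each criterion is a scoring function over moves (the scorer names in the final comment are hypothetical):

    def merged_plausibility(moves, scorers, top_n=5):
        """Each scorer nominates its top few moves; nominations accumulate."""
        credit = {m: 0 for m in moves}       # moves must be hashable
        for scorer in scorers:
            ranked = sorted(moves, key=scorer, reverse=True)
            for rank, move in enumerate(ranked[:top_n]):
                credit[move] += top_n - rank   # earlier nomination, more credit
        return sorted(moves, key=lambda m: credit[m], reverse=True)

    # e.g. merged_plausibility(moves, [capture_gain, killer_score, threat_score])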
The goal: Best move near the top and good moves near the top
The "best move" and "good move" are not equivalent. You need the best move near the top so that you have a good chance of finding it. However a list of moves that provides a large set of plausible candidates to be the best move, with a high risk of being very bad, is not what you want. If the top 8 moves are all very bad, even though promising, then the tree search optimisation will suffer, as it is necessary to have a reasonable move provided very quickly so that suspect "best move" candidates can be rejected quickly. This is all to do with alpha-beta pruning. The goal of a fast tree search is to find the actual value of a position as quickly as possible, so that suspect moves can be quickly rejected.
To give an example. It is better to assess a "good" move first and then not reach the best move until move 10, than assess the best move at move 5, having first assessed 4 weak moves.
Therefore the list needs to make sure the top few moves contain one move that can be reliably assessed as a "good move", even if unlikely to be the best.
A practical example: Shogi
For the purposes of this article, I will consider the main techniques used by the Shogi program Shotest. As indicated, Shogi offers potentially huge numbers of moves, so the need to analyse moves in the right order is paramount. Shogi has so many moves because of the option to "drop" pieces on the board: a captured piece can be replayed on almost any vacant square, generating vast numbers of potential moves. The assessment above quoted a typical value of 120 possible moves, but in fact this can exceed 500. It is this latter figure that needs to be considered, as it is not enough to create a program that can cope with the average number of legal moves; it must manage significantly above average.
(a) Tactical assessments

As it happens, the Shogi program "Shotest" has a very sophisticated tactical analyser, so can make very accurate assessments of tactical moves. The analysis can detect not only attacks, but also pins, ties and discovered attacks. A "tie" is a commitment to some defence: a piece may be needed to defend an attacked piece, and this commitment includes ties to defending against mate threats. The analysis knows which squares the defending piece can occupy and the cost of ignoring the defence commitment. This sophistication is a significant advantage, as the tactical analysis can detect that an apparently unsafe capture is safe, because the defending piece is committed to some other defence. Using the same idea it can detect that a move to an apparently unsafe square is in fact safe.

In practice this reduces many possible errors, making it highly likely that the tree search can safely search a narrow tree without overlooking some key move. This assessment is very useful for detecting "good moves".
(b) Generic probability assessments

These are the standard techniques that offer the killer heuristic, refutation tables, hash tables and the history heuristic. They are simple techniques that have no game knowledge. However there are many such techniques and many variants of these.
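Of these, the history heuristic is easy to sketch. This minimal version (illustrative names; depth-squared weighting is one common choice) credits any move that proves best anywhere in the tree:

    history = {}   # move key (e.g. from-square/to-square) -> accumulated credit

    def record_success(move, depth):
        """Called when 'move' proves best at a node searched to 'depth'."""
        history[move] = history.get(move, 0) + depth * depth

    def order_by_history(moves):
        """Order moves by accumulated credit; no game knowledge involved."""
        return sorted(moves, key=lambda m: history.get(m, 0), reverse=True)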
(c) Game-specific probability assessments

These are more subtle. An extension of the techniques above is to make use of some game-specific criteria that depend on probability. Shotest makes great use of this.
For these assessments it is necessary to assess how "interesting" each piece is. For example, a valuable piece is interesting, as is a piece that has low mobility, is undefended, has recently moved (either in the game or in the search), or is tied/pinned or needed to maintain a tie or pin. A more complex measure is a piece whose situation has changed during the tree search: a piece that has lost mobility becomes interesting, as it may go on to lose all its mobility.

The criteria include the following (a sketch combining them follows the list):
1. Crediting moves that attack or defend pieces

This simply gives a weight to playing a move that attacks an interesting piece. This may be positionally useless but might create some tactical chance or some advantageous pressure.

2. Moving a piece to a square near an interesting piece

Squares near a critical piece, or near a piece that moved in the search, are more interesting than squares away from the action.

3. Moving a piece to a square that has been recently occupied

A square recently occupied is likely to be important. In Shogi a succession of drops may often occur on the same square.
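A minimal sketch of how such credits might combine; the Piece fields, the weights and the helper arguments are all illustrative assumptions.

    from dataclasses import dataclass

    # Illustrative piece record; the fields mirror the "interesting" criteria.
    @dataclass(frozen=True)
    class Piece:
        value: int
        undefended: bool = False
        low_mobility: bool = False
        recently_moved: bool = False

    def interest(piece):
        """How 'interesting' a piece is (weights are illustrative)."""
        score = piece.value
        score += 2 if piece.undefended else 0
        score += 2 if piece.low_mobility else 0
        score += 1 if piece.recently_moved else 0
        return score

    def move_credit(attacked_or_defended, near_destination, square_recently_occupied):
        """Credit for criteria 1-3: targets touched, proximity, recent occupation."""
        credit = sum(interest(p) for p in attacked_or_defended)    # criterion 1
        credit += sum(interest(p) // 2 for p in near_destination)  # criterion 2
        credit += 3 if square_recently_occupied else 0             # criterion 3
        return credit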
(d) Game-specific assessments

This may include moves that follow some known plan or pattern. A move that places a piece on a square that achieves some standard attack is good. A piece may be dropped to repair a damaged defensive position. This has many forms.
How well does this work?
In the broad sense the scheme above works well. Given sufficient criteria it allows a wide variety of important moves to be detected.
However this architecture is not enough. A serious flaw is that the assessments are all independent, so plausibility may throw up vast numbers of similar moves. The program may detect hundreds of drops that each look very interesting, and these can flood the move list, pushing the correct move beyond the set of moves the search will actually examine.
This needs another significant new technique.
Varying the assessment as moves are considered
This is the radical improvement needed to make this work. It stops the plausibility analysis from flooding the list with many similar moves. The underlying basis is to keep track of moves played and penalise the assessment of similar moves. The criteria for this may include:
1. penalising dropping many different pieces on the same square
2. penalising multiple drops of the same type
3. penalising multiple drops of any type
4. penalising moves where similar moves have already been tried
Depending on the type of criterion, the penalty increases with the number of times such a move has been tried. E.g. the 2nd time a move of a given class may receive only 66% of its credit value, and only 40% the 3rd time it is tried. The 4th criterion above requires moves to be classified so that the number of times such a move has been tried can be counted.
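A minimal sketch of this decaying credit, using the 66%/40% schedule quoted above; the move-class key is an illustrative assumption.

    DECAY = [1.00, 0.66, 0.40]   # credit multiplier by times a class was tried

    tried = {}   # move class, e.g. ('drop', piece_type, square) -> count so far

    def adjusted_credit(move_class, base_credit):
        """Full credit the first time, 66% the 2nd, 40% from the 3rd onwards."""
        n = tried.get(move_class, 0)
        tried[move_class] = n + 1
        return base_credit * DECAY[min(n, len(DECAY) - 1)]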
Conclusion
The scheme above gives a broad outline of the plausibility analysis used by Shotest. However it requires some 2500 lines of code just to sort the moves. Shotest spends some 40% of its time choosing moves rather than searching them. This is a big overhead, but it works well and allows the tree search to be kept under control. It allows the program to perform well in highly complex positions, as it can reduce the moves to be considered to a manageable number, and as a result it rarely finds itself overwhelmed by the options available.
Plausibility analysis will almost certainly be an essential part of most tree-searching programs that are faced with vast numbers of possible moves, so is a technique that is always going to be around.
Jeff Rollason: March 2006