Building a Game Engine with Cocoa, Part 3

The entire body for the function (excluding auxiliary functions) is short and simple:



        
        int connected(int board[DIMENSION][DIMENSION], int player) {

            // work on a scratch copy of the board; the auxiliary _dfs
            // consumes foo_board rather than the caller's board
            memcpy(foo_board, board, sizeof(int)*DIMENSION*DIMENSION);
            
            // find any one of the player's pieces to seed the search
            Coord c;
            for (int i=0; i < DIMENSION; ++i)
                for (int j=0; j < DIMENSION; ++j)
                    if (board[i][j] == player) {
                        c = MakeCoord(i,j);
                        goto _breakout;
                    }
                        
            _breakout:

            // depth-first search to find the cluster connected to Coord c;
            // _dfs clears each piece it reaches from foo_board
            _dfs(c, player); 
            
            // any of the player's pieces left in foo_board was unreachable
            // from c, so the player's pieces are not all connected
            for (int i=0; i < DIMENSION; ++i)
                for (int j=0; j < DIMENSION; ++j)
                    if (foo_board[i][j] == player)
                        return 0;

            return 1;
        }

Now that we can determine connectedness for a player, we can also determine game-over conditions with ease. Recall that the first player to connect all of her pieces wins the game, but two fine-print rules must be accounted for as well: 1) if a move simultaneously connects both players' pieces, the player who just moved wins, and 2) if a player is reduced to a single piece, the player with the single piece wins the game, since a lone piece is implicitly connected.

Here's the code that checks for game over:

        
            - (BOOL)checkGameOver {

                BOOL playerAboutToMoveConnected = [self playerIsConnected:playerMoving]; 
                BOOL playerJustMovedConnected = [self playerIsConnected:[self opponentOfPlayerMoving]];

                // even if the previous move simultaneously connected both 
                // players, the player who just moved wins
                if (playerJustMovedConnected) 
                    return YES;
                
                // did the previous move reduce the player that's about to 
                // move to a single piece? if so, they win
                if (playerAboutToMoveConnected) 
                    return YES;
                
                return NO;
            }

Game Tree Searching with Negamax

With all of the game logic now in place, we're ready to implement our opponent using the AI technique of a game tree search. A game tree search is conceptually simple: from a given board state, calculate all of the possible moves the moving player could make and evaluate the resulting board positions. Let's call the player who is moving MAX, since this player wants to make the move with the maximum utility; MAX's opponent will be called MIN. Since LOA is a zero-sum game, it has the convenient property that the utility of a board position from MIN's perspective is equal to the negation of that same position's utility from MAX's vantage point. For example, if MAX is almost fully connected, the board would be scored very highly from MAX's point of view; MIN, however, would consider this position terrible and score it proportionally low. Thus, MAX's board heuristic is always equal to the negation of MIN's board heuristic, and vice versa.
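
To make this property concrete, here is a toy sketch; the evaluator below is an assumption for illustration only, not the article's heuristic. It scores a position as the difference between the two players' piece counts. That's not a sensible LOA heuristic (having fewer pieces can actually make connecting easier), but because the score is a difference, its value from MIN's perspective is automatically the negation of its value from MAX's perspective:

        // Toy, hypothetical evaluator: shown only to illustrate the
        // zero-sum negation property, not the article's heuristic.
        static int countPieces(int board[DIMENSION][DIMENSION], int player) {
            int n = 0;
            for (int i = 0; i < DIMENSION; ++i)
                for (int j = 0; j < DIMENSION; ++j)
                    if (board[i][j] == player)
                        ++n;
            return n;
        }

        static int evaluate(int board[DIMENSION][DIMENSION], int player, int opponent) {
            // by construction, evaluate(board, MAX, MIN) == -evaluate(board, MIN, MAX)
            return countPieces(board, player) - countPieces(board, opponent);
        }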

Conceptually, this leads to MAX calculating all of the possible board positions resulting from a move and evaluating them according to what MIN can do in return. MIN, however, is also in it to win and plays the same way. Therefore, MAX can reliably assume that MIN will make the best move possible (which is to minimize MAX's winning potential and maximize his own), and this allows MAX to look ahead yet another level using the same approach. This process can theoretically continue for dozens of levels until end-game positions are reached and used as the basis for the search heuristic. In fact, minimax is a well-known search technique that does exactly that. From Wikipedia, the pseudocode for minimax looks like this:

        
function minimax(node, depth)
    if node is a terminal node or depth = 0
        return the heuristic value of node
    if the adversary is to play at node
        let val := +INFINITY
        foreach child of node
            val := min(val, minimax(child, depth-1))
    else {we are to play at node}
        let val := -INFINITY
        foreach child of node
            val := max(val, minimax(child, depth-1))
    return val
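
Since this section is ultimately headed toward negamax, it's worth noting how the zero-sum negation property collapses the two branches of the pseudocode above into one: a single code path serves both players, and the recursive value is simply negated at each level. The sketch below is only a hedged illustration in C: Move, MAX_MOVES, generateMoves(), applyMove(), and undoMove() are hypothetical placeholders for LOA's actual move generation, and evaluate() is the toy evaluator sketched earlier, not a real heuristic:

        #include <limits.h>

        // Hypothetical placeholders: stand-ins for the engine's real move
        // generation and evaluation, which aren't shown here.
        typedef struct { int fromRow, fromCol, toRow, toCol; } Move;
        #define MAX_MOVES 128   /* assumed upper bound on moves from one position */
        int  generateMoves(int board[DIMENSION][DIMENSION], int player, Move moves[]);
        void applyMove(int board[DIMENSION][DIMENSION], const Move *m);
        void undoMove(int board[DIMENSION][DIMENSION], const Move *m);
        int  evaluate(int board[DIMENSION][DIMENSION], int player, int opponent);

        // Negamax sketch: the value of a position for `player` is the negation
        // of its value for `opponent`, so MAX and MIN share one code path.
        int negamax(int board[DIMENSION][DIMENSION], int player, int opponent, int depth) {
            if (depth == 0 || connected(board, player) || connected(board, opponent))
                return evaluate(board, player, opponent);

            Move moves[MAX_MOVES];
            int numMoves = generateMoves(board, player, moves);

            int best = INT_MIN + 1;              // effectively -infinity
            for (int m = 0; m < numMoves; ++m) {
                applyMove(board, &moves[m]);
                int val = -negamax(board, opponent, player, depth - 1);
                undoMove(board, &moves[m]);
                if (val > best)
                    best = val;
            }
            return best;
        }

Negamax examines exactly the same tree as minimax; it only expresses the search more compactly, so its cost is identical.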

An entire game tree for a game like LOA on an 8x8 board is enormous, however, so minimax can go only several levels deep before the search becomes prohibitively time consuming. How deep a minimax search can go without getting lost in the ether is largely determined by the game's branching factor--the number of moves, on average, that can be made from any given board position for a player. The average branching factor of LOA is approximately 30; for comparison, chess has a branching factor of about 40, and checkers has a branching factor of about 10. If you sketch it out, you'll see that the number of leaf nodes in a tree of depth h with branching factor b is b^h. For LOA, this means that each level in the search tree has 30 times the number of nodes as the previous level; a four-ply search, for example, already produces roughly 30^4, or about 810,000, leaf positions. Thus, minimax is of only limited usefulness because it is a brute-force search technique that examines all possible nodes in the search space to a given depth--even those that lead to scenarios that neither player would want to be in. There must be a better way, right?

Alpha-beta pruning is a small enhancement to minimax that can often remove large portions of the search space that are worthless to consider because they involve a move that is either so good that the opponent wouldn't allow it or so bad that we couldn't expect the opponent to make it. This is along the same lines as looking at a checkers board and thinking, "Well, if my opponent made this really terrible move, then I'd be able to make this really good move and win the game." Since you're not going to win by banking on your opponent playing sub-optimally, there's no reason to continue reasoning about a scenario in which your opponent makes a move that's not in her best interest. Thus, you "prune" away that portion of the search space and look at other potential moves, always trying to find the best one available. For a nice step-by-step illustration of alpha-beta pruning at work, check out this site.
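
To see what the pruning looks like in code, here is how the negamax sketch above changes when alpha and beta bounds are threaded through the recursion. This is again a hedged sketch built on the same hypothetical placeholder helpers, not the article's implementation:

        // Negamax with alpha-beta pruning (hypothetical sketch, same placeholder
        // helpers as above).  `alpha` is the best score the moving player is
        // already guaranteed elsewhere in the tree; `beta` is the best score
        // the opponent is already guaranteed.
        int alphabeta(int board[DIMENSION][DIMENSION], int player, int opponent,
                      int depth, int alpha, int beta) {
            if (depth == 0 || connected(board, player) || connected(board, opponent))
                return evaluate(board, player, opponent);

            Move moves[MAX_MOVES];
            int numMoves = generateMoves(board, player, moves);

            for (int m = 0; m < numMoves; ++m) {
                applyMove(board, &moves[m]);
                // the search window is negated and swapped: our alpha is the
                // opponent's -beta, and vice versa
                int val = -alphabeta(board, opponent, player, depth - 1, -beta, -alpha);
                undoMove(board, &moves[m]);
                if (val >= beta)
                    return val;      // cutoff: the opponent would never allow this line
                if (val > alpha)
                    alpha = val;     // raise the lower bound on our best score
            }
            return alpha;
        }

A top-level call would look something like alphabeta(board, playerMoving, opponent, searchDepth, INT_MIN + 1, INT_MAX), with the root keeping the move whose value is highest. With good move ordering, alpha-beta can reduce the effective branching factor to roughly the square root of b in the best case, which lets the search reach about twice the depth of plain minimax in the same amount of time.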
