Published on MacDevCenter

Building a Game Engine with Cocoa, Part 3

by Matthew Russell

In the grand finale of this three-part series, we'll finish up the infrastructure for our Lines of Action (LOA) game. In this rendition, we'll do some basic refactoring of the work from last time, introduce the logic for determining game-over conditions, and implement a game tree search called negamax--a variant of minimax that uses alpha-beta pruning to reduce the search space. If you'd like to download the finished project file and use it to follow along as you read this article, you may do so by clicking here.

At the end of this article, you'll have a working LOA game, complete with a built-in computer opponent that plays against you based upon a custom heuristic that you define.

Good Housekeeping

By the end of the previous article, we had developed a nice game board that enforced valid moves and displayed them to the user. Although the user interface looked fine, the code underneath was starting to get cluttered, and there were at least a couple of fairly obvious weaknesses worth addressing. Namely, our coordinate system relied on NSPoint, a struct defined in the Foundation framework, and we weren't using categories to compartmentalize our code into logical groupings of related methods.

The primary problem with using NSPoint to represent our coordinate system is twofold. Semantically, it doesn't make sense to represent discrete coordinates on a checkers-style board as float values, and as a related consequence, we have to continually cast the coordinate components to integer values whenever we want to access a value in the board array. Since our game tree search algorithm will be almost constantly accessing and manipulating the board array, we'll get a noticeable increase in performance for "free" by defining a custom structure (Coord) that consists of two integer components instead of relying on NSPoint. As we'll see a bit later, we'll want to do anything we can to make our game tree search as efficient as possible. Coord is defined in GameBoard.h.
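As a sketch, the structure looks something like this (the field names and the exact signature of the MakeCoord helper are assumptions here; the real definitions live in GameBoard.h):

```cpp
// Sketch of the Coord struct described above: two discrete integer
// components, so no casting is needed when indexing the board array.
typedef struct {
    int x;
    int y;
} Coord;

// Convenience constructor, analogous to NSMakePoint
static inline Coord MakeCoord(int x, int y) {
    Coord c = { x, y };
    return c;
}
```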

The other issue mentioned was code clutter, and the approach we'll use to declutter our code is employing a category. A category is a powerful Objective-C mechanism for adding functionality to an existing class (even one like NSString, for which the source code is not available). We'll use categories for their lesser-known benefit of breaking up a single large class into smaller, more manageable chunks. Although it's fine to use categories for this purpose, it should be pointed out that compartmentalizing code with categories is a double-edged sword if misused. On the one hand, categories give you distinct source files that help manage complexity. On the other hand, categories can lead to a lot of confusion, because code gets tedious to follow when methods of the same class are spread across many different places.

Here's our basic breakdown of what could have remained one big GameBoard class: a core GameBoard class, plus Moves, Connection, and AlphaBeta categories that group the related methods.

In addition to reflecting a new Coord struct that replaces the previous NSPoint references, introducing a new GameBoardMove struct that abstracts two Coord structs into the concept of a move, and employing categories, a few other tweaks have also taken place. One important modification is that all previous method parameters that were defined as int[][] for purposes of passing the board array around are now fully qualified as int[DIMENSION][DIMENSION]. In addition to being a good practice, this change is necessary to successfully compile the code with GCC 4.0 and beyond.
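To see why the fully qualified form matters, consider a hedged sketch like the following (copyBoard is a hypothetical helper, not from the project): with the inner dimension spelled out, the compiler knows the row stride, so two-dimensional indexing behaves as expected.

```cpp
#include <cstring>

#define DIMENSION 8

// Hypothetical helper: the parameter type int[DIMENSION][DIMENSION]
// tells the compiler the row stride, so board[i][j] indexes correctly.
// An unqualified int[][] parameter is rejected by GCC 4.0 and later.
static void copyBoard(int dst[DIMENSION][DIMENSION],
                      int src[DIMENSION][DIMENSION]) {
    // The arrays decay to pointers here, so sizeof(src) would be the
    // size of a pointer; spell out the byte count explicitly.
    memcpy(dst, src, sizeof(int) * DIMENSION * DIMENSION);
}
```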

Also note that since we're starting to introduce C++ code, Xcode requires that files containing or calling any of our custom C++ functions be renamed with a .mm extension instead of .m. This is a small price to pay for instant access to things such as the STL, which we'll be using under the hood. Finally, there are no longer references to the class formerly known as AppController. (The last installment illustrated how you could fetch the board array and get it into AppController, if you prefer a design that passes the underlying board array outside of the GameBoard class.)

Getting Connected

After tidying up a bit, we can now finish the project. As you already know, the objective of the game is for a player to connect all of his pieces together. Thus, determining connectedness is necessary for detecting game-over conditions, and it is potentially very useful in the heuristic used by the game tree search. It's worth noting that LOA is unusual with respect to ending board positions: because the objective is simply to connect all of your pieces, it doesn't matter whether they're spread out in a long line, packed densely together in one huge cluster, or wrapped around in a ring.

Since checking for connectedness will be a commonly used operation, we'll want it to be as efficient as possible. We could certainly implement the method for checking connectedness in Objective-C, but as a critical operation that will likely be executed thousands and thousands of times with each turn of the computer opponent, we'll take advantage of C++'s vector data structure, which allows us to group Coord structs without having to first wrap them as NSValues, as we do with NSArray. Recall that we are currently still doing this with the COORD_OBJECT macro (previously POINT_OBJECT) in validMovesFromSpot: defined in GameBoard's Moves category.

The basic technique we'll use for checking a player's connectedness is simple: copy the board into a scratch array, find any one of the player's pieces, run a depth-first search that erases every piece connected to it, and then check whether any of the player's pieces survived the sweep. If none did, the player is connected.

The entire body for the function (excluding auxiliary functions) is short and simple:

        int connected(int board[DIMENSION][DIMENSION], int player) {

            memcpy(foo_board, board, sizeof(int)*DIMENSION*DIMENSION);
            Coord c;
            for (int i=0; i < DIMENSION; ++i)
                for (int j=0; j < DIMENSION; ++j)
                    if (board[i][j] == player) {
                        c = MakeCoord(i,j);
                        goto _breakout;
                    }

        _breakout:
            // depth first search to erase the cluster connected to Coord c
            _dfs(c, player);

            // any of the player's pieces left in the scratch board were
            // not reachable from c, so the player is not connected
            for (int i=0; i < DIMENSION; ++i)
                for (int j=0; j < DIMENSION; ++j)
                    if (foo_board[i][j] == player)
                        return 0;

            return 1;
        }
Now that we can determine connectedness for a player, we can also reliably determine game-over conditions with ease. Recall that the first player to connect all of her pieces wins the game, but two fine-print rules must be accounted for as well: 1) if a move simultaneously connects both players' pieces, the player who just moved wins, and 2) if a player is reduced to a single piece, the player with the single piece wins the game, since a lone piece is implicitly connected.

Here's the code that checks for game over:

            - (BOOL)checkGameOver {

                BOOL playerAboutToMoveConnected = [self playerIsConnected:playerMoving]; 
                BOOL playerJustMovedConnected = [self playerIsConnected:[self opponentOfPlayerMoving]];

                // even if the previous move simultaneously connected both 
                // players, the player who just moved wins
                if (playerJustMovedConnected) 
                    return YES;
                // did the previous move reduce the player that's about to 
                // move to a single piece? if so, they win
                if (playerAboutToMoveConnected) 
                    return YES;
                return NO;
            }

Game Tree Searching with Negamax

With all of the game logic now in place, we're ready to implement our opponent using the AI technique of a game tree search. A game tree search is conceptually simple: from a given board state, calculate all of the possible moves that the moving player could make and evaluate these board positions. Let's call the player who is moving MAX since this player wants to make the move with the maximum utility. MAX's opponent will be called MIN. Since LOA is a zero-sum game, the convenient property that's important to note is that the utility of a board position from MIN's perspective is equal to the negation of that same position from MAX's vantage point. For example, if MAX is almost fully connected, the board would be scored very highly from MAX's point of view. MIN, however, would consider this position to be terrible and score it proportionally low compared with MAX's evaluation. Thus, MAX's board heuristic is always equal to the negation of MIN's board heuristic and vice versa.

Conceptually, this leads to MAX calculating all of the possible board positions resulting from a move and evaluating them according to what MIN can do in return. MIN, however, is also in it to win and plays the same way. Therefore, MAX can reliably assume that MIN will make the best move possible (which is to minimize MAX's winning potential and maximize his own), and this assumption allows MAX to look ahead yet another level using the same approach. This process can theoretically continue for dozens of levels until end-game positions are reached and used as the basis for the search heuristic. In fact, minimax is a well-known search technique that does exactly that. From Wikipedia, the pseudocode for minimax looks like this:

function minimax(node, depth)
    if node is a terminal node or depth = 0
        return the heuristic value of node
    if the adversary is to play at node
        let val := +INFINITY
        foreach child of node
            val := min(val, minimax(child, depth-1))
    else {we are to play at node}
        let val := -INFINITY
        foreach child of node
            val := max(val, minimax(child, depth-1))
    return val

An entire game tree for a game like LOA on an 8x8 board is enormous, however, so minimax can go only several levels deep before the search gets prohibitively time consuming. A key factor in how deep a minimax search can go without getting lost in the ether is the game's branching factor--the number of moves, on average, that can be made from any given board position. The average branching factor of LOA is approximately 30; for comparison, chess has a branching factor of about 40, and checkers about 10. If you sketch it out, you'll see that the number of nodes at depth h of a tree with branching factor b is b^h. For LOA, this means that each level in the search tree has 30 times as many nodes as the previous level. Thus, minimax is of only limited usefulness, because it is a brute-force search technique that examines all possible nodes in the search space to a given depth--even those that lead to scenarios neither player would want to be in. There must be a better way, right?
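To get a feel for those numbers, here's a quick back-of-the-envelope calculation (the function name is purely illustrative):

```cpp
#include <cstdint>

// Number of board positions at depth h of a game tree with a uniform
// branching factor b: simply b^h.
static std::uint64_t nodesAtDepth(std::uint64_t b, unsigned h) {
    std::uint64_t n = 1;
    while (h--)
        n *= b;
    return n;
}
```

For b = 30, a depth of 4 already means 810,000 positions at the deepest level, and a depth of 6 means 729 million--which is why pruning matters so much.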

Alpha-beta pruning is a small enhancement on minimax that can often remove large portions of the search space that are considered worthless because they involve either a move that is so good that the opponent wouldn't allow it or so bad that we couldn't expect the opponent to make it. This is along the same lines as you looking at the checkers board and thinking, "Well, if my opponent made this really terrible move, then I'd be able to make this really good move and win the game." Since you're not going to win by banking on your opponent playing sub-optimally, there's no reason to continue reasoning about a scenario in which your opponent makes a move that's not in her best interest. Thus, you "prune" away that portion of the search space and look at other potential moves--trying to always find one that's the best available. For a nice step-by-step illustration of alpha-beta pruning at work, check out this site.

A variant of minimax that simplifies the coding a bit by taking advantage of the zero-sum property of a game like LOA is negamax. It works exactly the same as minimax, except that there's no need for separate min and max cases. From Wikipedia, the pseudocode for negamax (with alpha-beta pruning folded in) follows:

function negamax(node, depth, alpha, beta)
    if node is a terminal node or depth = 0
        return the heuristic value of node
    foreach child of node
        alpha := max(alpha, -negamax(child, depth-1, -beta, -alpha))
        {the following if statement constitutes alpha-beta pruning}
        if alpha >= beta
            return beta
    return alpha

You should take a moment to convince yourself that if you remove the alpha/beta parameters and the conditional statement that constitutes alpha-beta pruning, then you're left with a search that's exactly equivalent to minimax--the key being that whenever negamax is recursively called, its returned value is inverted, creating a min/max effect.
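To make that concrete, here's a minimal C++ sketch of negamax with alpha-beta pruning over a toy tree. The Node struct is an illustration, not part of the project, and the leaf scores follow negamax's convention of being from the perspective of the player to move at that node.

```cpp
#include <algorithm>
#include <vector>

// Toy game-tree node: interior nodes carry children; leaves carry a
// score from the point of view of the player to move at that leaf.
struct Node {
    int score;
    std::vector<Node> children;
};

// Direct transcription of the negamax pseudocode above.
int negamax(const Node &node, int depth, int alpha, int beta) {
    if (depth == 0 || node.children.empty())
        return node.score;
    for (const Node &child : node.children) {
        alpha = std::max(alpha, -negamax(child, depth - 1, -beta, -alpha));
        if (alpha >= beta)
            return beta;  // cutoff: the opponent won't allow this line
    }
    return alpha;
}
```

On the classic two-ply example tree whose three branches end in leaves {3, 12, 8}, {2, 4, 6}, and {14, 5, 2}, the search returns 3, matching minimax, and the second branch is cut off after its first leaf.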

Our negamax search written in Objective-C looks strikingly similar to the pseudocode and is defined in GameBoard's AlphaBeta category, along with a method for defining a heuristic for a given board position.

- (int)alphaBetaWithDepth:(int)depth andAlpha:(int)alpha andBeta:(int)beta {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    if (depth == 0) {
        int score = [self evaluateBoard];
        [pool release];
        return score;
    }
    NSArray *moves = [self generateMoves];
    NSEnumerator *e = [moves objectEnumerator];
    NSValue *move;
    GameBoardMove gbm;
    while ((move = [e nextObject])) {
        [move getValue:&gbm];
        [self makeMove:gbm];
        int val = -[self alphaBetaWithDepth:(depth-1) andAlpha:(-beta) andBeta:(-alpha)];
        [self unmakeMove:gbm];
        if (val > alpha) {
            alpha = val; // best that can be forced on the opponent

            // keep track of which is the current player's best
            if (depth == SEARCH_DEPTH)
                memcpy(&bestMove, &gbm, sizeof(GameBoardMove));
        }
        if (alpha >= beta) {
            [pool release];
            return beta; // opponent will not allow this.
        }
    }
    [pool release];
    return alpha; // the best we can force on the opponent
}

Aside from an extra conditional to explicitly keep track of the best move at any given time, the only other addition worth pointing out is Cocoa-specific: the first line, which explicitly creates an autorelease pool. If you remove the lines that create and release the pool, you can use top or any other tool that reports memory usage and watch our board game suddenly hog massive amounts of memory. But why? All of the objects in alphaBetaWithDepth:andAlpha:andBeta: should be autoreleased for us, right? Well, yes--except that the recursion is so tight that the event loop never gets a chance to drain the default pool. Apple's Memory Management Programming Guide for Cocoa mentions this, and it is a well-known issue for scenarios such as these.

The sample heuristic evaluateBoard that's given in GameBoard's AlphaBeta category has the objective of capturing as many of the other player's pieces as possible. For LOA, this is not a very good strategy, because capturing pieces leaves the opponent with fewer pieces to connect, making the opponent's job easier. Still, it is simple enough that it's immediately evident whether it's working correctly, which is helpful when you're initially implementing your search algorithm. Besides, this is where you come in. Now that the infrastructure is in place, you get to do the fun work of writing a clever heuristic function.

Writing a heuristic for a game like LOA is a lot of fun, because while the objective is to connect all of your pieces together, there are many subtle conditions that you should take into account when evaluating the board. For example, in addition to weighing your score according to how many of your own pieces are connected, you might also want to consider how many "orphan" pieces there are on the board as well. An orphan piece is one that is totally surrounded in such a way that it can't move anywhere--making it impossible to win unless the piece gets rescued first. Mark Winands wrote an entire thesis about searching the space in LOA that takes this and many other factors into consideration. It is a must-have resource if you're doing any work at all with LOA.
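As a starting point, a piece-counting heuristic in the spirit of the sample evaluateBoard might look like the following sketch (the function name and scoring scale are illustrative assumptions, not the project's actual code):

```cpp
#define DIMENSION 8

// Hypothetical heuristic: reward having more pieces than the opponent.
// As noted above, in LOA this can backfire--capturing thins out the
// opposing force, which makes it easier for the opponent to connect.
static int captureScore(int board[DIMENSION][DIMENSION],
                        int player, int opponent) {
    int mine = 0, theirs = 0;
    for (int i = 0; i < DIMENSION; ++i)
        for (int j = 0; j < DIMENSION; ++j) {
            if (board[i][j] == player)
                ++mine;
            else if (board[i][j] == opponent)
                ++theirs;
        }
    return mine - theirs;
}
```

A real heuristic would fold in connectedness, orphan pieces, and the other factors discussed above, weighting each term by how strongly it predicts a win.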

Have Fun

Take some time this weekend to cook up a custom heuristic, and play against it to see how well it does. Unless you're planning on making changes to the search algorithm itself, all of your work can probably be done in GameBoard's AlphaBeta category by examining and manipulating the board array. And remember: there's no reason to prematurely optimize. Efficiency is very important in a game tree search, but stick with a shallow search depth (say, three) and make sure your heuristic is good before trying to optimize it. Besides, you can make a number of improvements upon the search itself and the move generation before you optimize your heuristic.

If you do want to start optimizing, however, one obvious place to start is with GameBoard's generateMoves method. Currently, this method stashes Coord structs into NSArrays by first wrapping them up with the COORD_OBJECT macro. A straightforward improvement would be to redesign generateMoves to take advantage of the C++ vector data structure, as GameBoard's Connection category does. Because generateMoves is called for every single node that's evaluated in the search tree, the performance improvement could be quite noticeable.

Although this is the last scheduled article in this series, there's still a lot of room for future enhancement. The UI could be spiffed up, the game tree search could be expanded to something like negascout, optimizations could take place based upon performance profiling, etc. The sky's the limit. If you'd like to see future articles that expand on this framework, or if you have questions about anything, please leave your comments below.

Matthew Russell is a computer scientist from middle Tennessee and serves Digital Reasoning Systems as the Director of Advanced Technology. Hacking and writing are two activities essential to his renaissance man regimen.


Copyright © 2009 O'Reilly Media, Inc.