
Game Trees in Realtime Games


The current state of AI in videogames is below average. Gamers fuss and whine about the AI in pretty much every game out there, and rightfully so in most cases. Flawless aim, extra resources, omniscience and spawning out of thin air are just some of the frustrations that plague the poor gamer who plays on a regular basis.

This article examines how heuristic search in game trees can be used in certain videogames: not fast-paced games like shooters, but slower-paced ones like real-time strategy games. It is written from the viewpoint of both an AI programmer and a gamer.

The Challenge

The most important thing a game AI should do is provide a suitable challenge. In more technical, problem-solving applications, AI is aimed at finding the best solution in the shortest time. This is not the goal of game AI. A human player should always be able to defeat a game AI, but not too easily, of course. Basically, what we need is an AI that operates on the same 'intelligence' level as the human player opposing it. Of course, humans vary widely in their ability.

Games consist of one or more players and AI-controlled agents. All games have two kinds of agents: Supportive agents and Opposing agents. Supportive agents exist only to serve players or other agents. They will never perform an action harmful to their controlling player or agent. These agents act purely in reaction to changes in their environment. Opposing agents operate on a level equivalent to the human player; they can control Supportive agents to ultimately defeat the human player and other agents. Because of this, these are the agents that provide a challenge.

Stated concisely, the challenge is:

"Creating an agent that can provide a suitable challenge no matter who or what is opposing it."

I will occasionally touch on the use of Supportive agents but will concentrate on Opposing agents for the most part, as they are the most important for this article.

Game Trees

A game tree is a tree that consists of nodes. Each node represents a different state that the game can be in. From one node you can move to various other nodes.

Example: In chess, the first node is obviously a representation of the starting position. From there you can go to 20 different states, i.e. the number of different moves you can make at that point: each of the eight pawns can make two moves (one step or two steps), and each knight can jump forward-left or forward-right. So 2x8 + 2x2 = 16 + 4 = 20 different possible states. Of course, as the game progresses, the number of possible moves/states/nodes increases.
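To make the idea concrete, here is a minimal sketch of a game-tree node in Python. The `GameNode` class and its `expand` method are hypothetical names invented for this article, not from any particular engine; each child represents one state reachable in a single move.

```python
class GameNode:
    """One state in the game tree."""

    def __init__(self, state, parent=None):
        self.state = state      # opaque description of the game state
        self.parent = parent
        self.children = []      # successor states reachable in one move

    def expand(self, successor_states):
        """Attach one child node per legal move from this state."""
        self.children = [GameNode(s, parent=self) for s in successor_states]
        return self.children


# The chess example above: 16 pawn moves + 4 knight moves = 20 successors.
root = GameNode("chess starting position")
children = root.expand(["move %d" % i for i in range(20)])
print(len(children))  # → 20
```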

Each node has a heuristic value which says how close a given state is to the goal state (checkmate in our example). With these values you can determine which node is the best to move to next. The only problem is that the heuristic function which calculates the value doesn't take into account nodes that lie farther down the tree than the immediate successors of the current node. This is known as the horizon effect.

There are several ways to improve the effectiveness of the heuristic. One is to take into consideration nodes farther than one step away from the current node. As you can see, the number of states that need to be searched increases rapidly the deeper you go, and various search algorithms have been devised to find the best path through the tree as efficiently and quickly as possible.

This entire concept works fine for turn-based games with techniques like MiniMax evaluation. The problem is, the vast majority of videogames aren't turn-based.
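For turn-based games, the MiniMax evaluation mentioned above can be sketched in a few lines. This is a plain, illustrative minimax over a hand-built toy tree; the `Node` class and the leaf values are invented for the example, and a real chess program would generate successors and score leaves with its heuristic instead.

```python
class Node:
    """Toy tree node: leaves carry their heuristic value directly."""

    def __init__(self, value=0, children=()):
        self.value = value
        self.children = list(children)


def minimax(node, depth, maximizing):
    """Score a node: leaves return their heuristic value, inner
    nodes alternate between taking the max (our move) and the
    min (the opponent's best reply)."""
    if depth == 0 or not node.children:
        return node.value
    scores = [minimax(c, depth - 1, not maximizing) for c in node.children]
    return max(scores) if maximizing else min(scores)


# Two-ply example: we pick the branch whose worst-case reply is best.
tree = Node(children=[Node(children=[Node(3), Node(5)]),
                      Node(children=[Node(2), Node(9)])])
print(minimax(tree, 2, True))  # → 3
```

The first branch guarantees at least 3 (the opponent picks min(3, 5)), while the second only guarantees 2, so the maximizing player prefers the first.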

In a turn-based game like chess you can always calculate the exact number of possible states. When it's your turn, the opponent will not interfere with what you are doing, and the current state will not change until you do something. In a game that is not turn-based, this is right out, as your opponent can change the current state at any given moment. And because this moment is unpredictable, the number of possible states can be considered infinite (by the time you have examined all of them, the starting state has probably changed, so you have to start all over again => infinite searching). You can't wait for the search algorithm to finish and then go to the best node. You have to do both things at the same time.

In a non-turn-based game tree you also can't base heuristic values purely on closeness to the goal. In chess you can make an exact estimate of this because you are aware of the entire state at all times. In a computer game this is almost never the case. In a turn-based game tree you look at all the states and pick the best one based on the goal state; in a non-turn-based game tree there is no point in looking far ahead, because the current state can change radically at any moment. This means that the heuristic algorithm must incorporate not only the closeness to the goal but also an estimate of how the situation could change due to actions of the opponent.

Example: Take a real-time strategy game. An AI has 10 units of type X ready to attack the enemy's base. Attacking the base and destroying it is a good idea, as this will increase the closeness to the goal (destroy all enemy buildings). However, there are also some estimates to be made. Is the base defended? And if so, by how many units? And if it's not defended, is there a force of enemy units located elsewhere which could counterattack? A corrective value is used to modify the closeness-to-goal value based on information the AI knows:

An all-out attack could leave the AI's defenses non-existent and might therefore be an undesirable path, because it might mean the destruction of the AI's base, which of course has a value that is very far from the goal value.
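One way the corrective value could combine with closeness to goal is sketched below. The `corrected_value` function, its inputs and its weights are purely illustrative assumptions; a real game would derive its risk estimates from scouting information and tune the weights through playtesting.

```python
def corrected_value(closeness_to_goal, defense_risk, counterattack_risk):
    """Modify a closeness-to-goal score with an estimate of how the
    opponent could change the situation. All inputs are assumed to
    lie in [0, 1]; the weights are invented for this sketch."""
    correction = 1.0 - 0.5 * defense_risk - 0.3 * counterattack_risk
    return closeness_to_goal * max(correction, 0.0)


# Attacking an undefended base scores higher than an all-out attack
# on a defended base that invites a counterattack.
print(corrected_value(0.8, 0.0, 0.1))  # undefended, little risk
print(corrected_value(0.8, 0.9, 0.8))  # defended, counterattack likely
```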

This example also shows that game trees like this don't work very well for very fast-paced games like shooters and the like. Things change so fast that the estimations are almost never correct. This AI is more applicable to a slower-paced game with fewer actions per second/gametick.

Another problem is that current search algorithms are all aimed at winning, not at providing a fun challenge for the human player. They are there to crush and kill. What we need is a new kind of search algorithm, one that is able to provide a balanced challenge.

Float Search

As mentioned, AI search algorithms are aimed at one thing: finding the best solution in the shortest time. This is logical, of course. However, for the challenging Opposing agent we need something different. Sure, we could give it an A* algorithm that effectively sniffs out the best and quickest path to victory, but this results in a couple of things:

  1. predictable behaviour
  2. an opponent with a fixed difficulty

Humans have a knack for pattern recognition, so after X defeats they will have the algorithm figured out and will win every confrontation after that. A good example of this is the AI opponent in StarCraft. In your first games you would get utterly obliterated by it, because it would quickly send a pack of cheap units over to destroy your base. However, it would always do this, and the makeup of the pack of attacking units would always be the same. So after X games you could counter the attack with minimal losses. And the AI wasn't worth much beyond that.

Of course, the basis of this particular AI was probably scripting, but the effect in the end remains the same.

Other algorithms like beam search or hill climbing aren't suited because they can't go back on a decision they made and will keep going down that path even if it's turning out to be a complete disaster. This kind of behaviour by an AI is usually referred to as "stupid" by the human gamer.

So we don't want the normal approach to heuristic search of game trees. What we want is a search algorithm that doesn't find the shortest path in the game tree but the longest path, i.e. the path that causes the AI to neither win nor lose but rather keep itself afloat. Hence the name Float Search.

Note: Whenever I refer to pathfinding, I'm talking about finding a path through the game tree, not the process of moving something from A to B.

This type of search works as follows.

  1. Establishing the average value
    First, all the nodes in the first layer are checked. The average of all these nodes' values is taken, and a node which matches the average value is expanded. If no node matches the average value, the average value is increased until it matches the value of an existing node. This node is then expanded. The average value can also be hardcoded.
  2. Staying afloat
    Besides the average value, we also establish an alpha threshold and a beta threshold. The alpha threshold is a value lower than the average value, and the beta threshold is a value higher than the average value. The range between these two thresholds is called the buoyancy range. The center of this range of values is the average value.
    When a successor node is expanded, its value is checked to see if it lies within the buoyancy range. If not, the node and the path below it are discarded. Once all the nodes are checked, we have N nodes which lie within the buoyancy range. One of these N nodes is then picked at random and expanded, and step 2 repeats.
    However, it's possible that no nodes are found that lie within the buoyancy range. In this case the algorithm goes back up one level, picks another random node within the buoyancy range and tries again.
  3. Trailing
    The actual execution of the nodes by the AI trails X layers behind the float search. Because most games are not turn-based, the values of the nodes change constantly. If X is a small number, the path laid out by the search algorithm is probably still the same, or at least very similar, and thus the responses by the AI are accurate. Unfortunately, a small number decreases the amount of backtracking the search algorithm can do, and thus the chance of a dead end becomes bigger. A large X has less chance of a dead end, but more chance that the path laid out by the float search isn't correct anymore, steering the AI away from the average and causing lagging responses.
  4. The Dead End
    A dead end might be reached that the algorithm can't back out of anymore, because all paths in the buoyancy range are dead ends or the execution of the nodes has caught up with the searching. If this is the case, two things can happen:
    • The AI terminates
    • The buoyancy range is increased
  5. Option A isn't really a possibility when playing a game. It would mean that the moment you managed to manoeuvre the Opposing agent into a certain loss, it would cease to function. The same would happen when it forced the human player into a certain loss. So what we do is increase the buoyancy range both up and down in equal steps. The first node encountered either above or below the average value is expanded. The buoyancy range is then reset between its two thresholds. If this leads to another dead end, repeat.
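The steps above can be sketched in Python. This is a simplified, assumption-laden sketch: `successors` and `value` are hypothetical hooks into the game, the trailing execution of step 3 is left out, and backtracking simply pops one level and retries rather than tracking which siblings were already tried.

```python
import random


def float_search(root, successors, value, alpha, beta, depth):
    """Float Search sketch (steps 1, 2 and 4 above).

    Walks the game tree for `depth` iterations, at each step
    picking a random successor whose value lies in the buoyancy
    range [alpha, beta]. On a dead end it backs up one level;
    if even the root is a dead end, the range is widened in
    equal steps up and down. Returns the path of chosen states."""
    path = [root]
    for _ in range(depth):
        kids = successors(path[-1])
        afloat = [s for s in kids if alpha <= value(s) <= beta]
        if afloat:
            # Step 2: pick one node within the buoyancy range at random.
            path.append(random.choice(afloat))
        elif len(path) > 1:
            # Step 2, no node afloat: back up one level and try again.
            path.pop()
        else:
            # Step 4: dead end at the root -- widen the range equally.
            alpha -= 1
            beta += 1
    return path


# Toy game: states are integers and a state's value is itself, so the
# search should wander without ever leaving the range [-2, 2].
path = float_search(0, lambda s: [s - 1, s + 1, s + 2],
                    lambda s: s, alpha=-2, beta=2, depth=10)
print(all(-2 <= s <= 2 for s in path))  # → True
```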


We will assume the value found was one lower than the alpha threshold. Because a low-value node has a much larger chance of having low-value successors, the chance that a high-value successor node is randomly selected becomes smaller. This means either that the losing trend of the AI continues (high chance) or that it makes a brilliant comeback (low chance).

You can force the algorithm to pick the highest value after a low dead end was reached. This might introduce the brilliant comeback, which seems very intelligent to the human player. But because you can't see beyond that one high-value node, it might be followed by a layer filled with even lower-value nodes.

This is a good thing. If the AI is manoeuvred outside the buoyancy range, it deserves to win/lose. If we were to force the AI back towards the average value, a game could potentially never end!

Float Search may look a lot like beam search for average values rather than optimal values, but it's not the same. Both Float Search and beam search examine all the nodes in a layer, but Float Search always picks just one rather than extending all the nodes within the range. It can afford to do this because situations always tend towards the average situation. An average situation is much easier to maintain than an optimal situation, because you can afford to make mistakes. The speed of Float Search is thus much higher than that of beam search. Speed is important for games.

A second difference is that you can change the buoyancy range to make the AI dumber or smarter. While you can also make the beam width bigger or smaller in beam search to achieve a similar effect, this affects the speed of the algorithm, which is not the case with Float Search. This allows for easy setting of difficulty without affecting performance. If you want the Opposing agent to be harder to beat, you raise one or both thresholds. Lower one or both thresholds for an easier Opposing agent.

Establishing the size of the buoyancy range is important. Make it too big and the AI will behave erratically and might end up in a condition that forces a win or a loss. Make it too small and it will behave predictably and might end up in a dead end in the game tree. Once you have found the optimum buoyancy range, it's best to shift the range rather than resize it to achieve different difficulty levels. It remains anchored on the average value at all times, i.e. the average value is always located in the buoyancy range.
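Shifting the range while keeping its size fixed could look like this. `shift_range` is a hypothetical helper invented for this sketch; the clamping enforces the anchoring rule above, so the average value always stays inside the shifted range.

```python
def shift_range(alpha, beta, average, shift):
    """Shift a fixed-width buoyancy range up (harder opponent) or
    down (easier opponent) without resizing it. The shift is clamped
    so the average value always remains inside the range."""
    shift = max(min(shift, average - alpha), average - beta)
    return alpha + shift, beta + shift


print(shift_range(40, 60, 50, 5))    # harder opponent: → (45, 65)
print(shift_range(40, 60, 50, -25))  # clamped so 50 stays inside: → (30, 50)
```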


To be able to use game trees for realtime games, some fundamental changes are needed:

  1. Predictable behaviour and fixed difficulty are not acceptable.
  2. Heuristic values need to incorporate a closeness-to-goal value and a realtime correction value.
  3. The search algorithm's speed should be independent of difficulty.
  4. Difficulty can be changed in realtime based on a fixed range.

Implementing a suitable heuristic algorithm which is able to estimate all relevant things and assess situations is very difficult. However, once it's done you have a good AI which only needs lots of tweaking. No need to script anything, and no risk of people getting bored of the AI because it always plays the same. And isn't that what all the more hardcore gamers want from their computer opponent?
