Artificial Intelligence For Game Developers

e-Institute Publishing, Inc.

Copyright © 2004 e-Institute, Inc. All rights reserved. No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage or retrieval system, without prior written permission from e-Institute, Inc., except for the inclusion of brief quotations in a review.

Editor: Susan Nguyen
Cover Design: Adam Hoult

E-INSTITUTE PUBLISHING INC
www.gameinstitute.com

Brian Hall, Artificial Intelligence for Game Developers

All brand names and product names mentioned in this book are trademarks or service marks of their respective companies. Any omission or misuse of any kind of service marks or trademarks should not be regarded as intent to infringe on the property of others. The publisher recognizes and respects all marks used by companies, manufacturers, and developers as a means to distinguish their products.

E-INSTITUTE PUBLISHING titles are available for site license or bulk purchase by institutions, user groups, corporations, etc. For additional information, please contact the Sales Department at sales@gameinstitute.com

Table of Contents
CHAPTER 1: PATHFINDING I
    INTRODUCTION
    1.1 A FEW GUIDELINES
        1.1.1 LOVE AND KISSES
        1.1.2 HARD DOES NOT EQUAL FUN
        1.1.3 PLAY FAIR
    1.2 FUNDAMENTAL ARTIFICIAL INTELLIGENCE
        1.2.1 DECISION MAKING
        1.2.2 PATHFINDING
    1.3 GETTING STARTED
    1.4 INTRODUCTION TO PATHFINDING
        1.4.1 GRAPHS AND PATHFINDING
    1.5 GRAPH TRAVERSALS
        1.5.1 NON-LOOK-AHEAD ITERATIVE TRAVERSALS
            1.5.1.1 Random Backstepping
            1.5.1.2 Obstacle Tracing
        1.5.2 LOOK-AHEAD ITERATIVE TRAVERSALS
            1.5.2.1 Breadth First Search
            1.5.2.2 Best First Search
            1.5.2.3 Dijkstra's Method
            1.5.2.4 A* Method
        1.5.3 LOOK-AHEAD RECURSIVE TRAVERSALS
    1.6 NON-LOOK-AHEAD ITERATIVE METHODS, IN DEPTH
        1.6.1 RANDOM BACKSTEPPING
            The Algorithm
        1.6.2 OBSTACLE TRACING
            The Algorithm
    1.7 LOOK-AHEAD ITERATIVE METHODS, IN DEPTH
        1.7.1 A NOTE ON IMPLEMENTATION EXAMPLES
        1.7.2 BREADTH FIRST SEARCH
        1.7.3 BEST FIRST SEARCH
            1.7.3.1 Max (dx, dy)
            1.7.3.2 Euclidean Distance
            1.7.3.3 Manhattan (dx + dy)
    1.8 EDSGER W. DIJKSTRA AND HIS ALGORITHM
        1.8.1 THREE COMMON VERSIONS OF DIJKSTRA'S
            1.8.1.1 Version One
            1.8.1.2 Version One Example
            1.8.1.3 Version Two
            1.8.1.4 Version Two Example
            1.8.1.5 Version Three
            1.8.1.6 Version Three Example
        1.8.2 OUR VERSION OF THE ALGORITHM
        1.8.3 THE IMPLEMENTATION OF OUR VERSION
    1.9 LOOK-AHEAD RECURSIVE METHODS
        1.9.1 DEPTH FIRST SEARCH
    CONCLUSION

CHAPTER 2: PATHFINDING II
    OVERVIEW
    2.1 A*: THE NEW STAR IN PATHFINDING
        2.1.1 HOW A* WORKS
        2.1.2 LIMITATIONS OF A*
        2.1.3 MAKING A* MORE EFFICIENT
    2.2 OUR VERSION OF THE ALGORITHM
    2.4.1 TERRAIN TYPES
        Jungle
        Forest
        Plains
        Desert
        Foothills
        Mountains
        Roadway
        Trail
        Swamp
        Water
    2.4.2 UNITS
        Infantry
        Wheeled Vehicles
        Tracked Vehicles
        Hovercraft
    2.4.3 TERRAIN TYPE VS. UNIT TYPE WEIGHTING HEURISTIC
    2.4.4 DEFINING THE MAP
    2.5 SIMPLIFYING THE SEARCH: HIERARCHICAL PATHFINDING
        2.5.1 A MAP OF THE US
        2.5.2 A DUNGEON
        2.5.3 A REAL TIME STRATEGY MAP
    2.6 PATHFINDING ON NON-GRIDDED MAPS
        2.6.1 SUPERIMPOSED GRIDS
        2.6.2 VISIBILITY POINTS / WAYPOINT NETWORKS
        2.6.3 RADIAL BASIS
        2.6.4 COST FIELDS
        2.6.5 QUAD-TREES
        2.6.6 MESH-BASED NAVIGATION
    2.7 ALGORITHM DESIGN STRATEGY
        2.7.1 CLASS HIERARCHY
        2.7.2 MAPGRIDWALKER INTERFACE
    2.8 GRID DESIGN STRATEGY
        2.8.1 MAPGRID INTERFACE
        2.8.2 MAPGRIDNODE CLASS
        2.8.3 MAPGRIDPRIORITYQUEUE CLASS
    2.9 MFC DOCUMENT/VIEW ARCHITECTURE AND OUR DEMO


CHAPTER 3: DECISION MAKING I
    OVERVIEW
    FLOCKING
    3.1 BEHAVIOR BASED MOVEMENT
        3.1.1 INTRODUCTION TO FLOCKING
        3.1.2 SEPARATION
        3.1.3 COHESION
        3.1.4 ALIGNMENT
        3.1.5 AVOIDANCE
        3.1.6 OTHER POSSIBLE BEHAVIORS
    3.2 THE FLOCKING DEMO
        3.2.1 THE IMPLEMENTATION
        3.2.2 MFC AND OUR DEMO
            The Form View Panel
        3.2.3 OUR IMPLEMENTATION
        3.2.4 SEPARATION
        3.2.5 AVOIDANCE
        3.2.6 COHESION
        3.2.7 ALIGNMENT
        3.2.8 CRUISING
        3.2.9 STAY WITHIN SPHERE
    CONCLUSION

CHAPTER 4: DECISION MAKING II: STATE MACHINES
    OVERVIEW
    DECISION TREES
    STATE MACHINES
    RULE BASE
    SQUAD BEHAVIORS
    4.1 INTRODUCTION TO FINITE STATE MACHINES
        4.1.1 TRANSITION DIAGRAMS
        4.1.2 USES OF FINITE STATE MACHINES
            Some Examples
    4.2 THE STATE MACHINE DEMO
        Animation
        Game State
        Save File System
        Artificial Intelligence
        4.2.1 DESIGN STRATEGIES
            The State Machine Class
            The State Class
            The Action Classes
            The Transition Classes
    4.3 SCRIPTING IN GAMES
    4.4 INTRODUCTION TO PYTHON

        4.4.1 SCOPE AND WHITESPACE
        4.4.2 DEFAULT TYPES AND BUILT-INS
        4.4.3 CLASSES
        4.4.4 FUNCTIONS
        4.4.5 CONTROL STATEMENTS
        4.4.6 IMPORTING PACKAGES
        4.4.7 EMBEDDING PYTHON
            Boost.Python: Embedding Python using Templates
            Exposing a Function
            Exposing a Class
            Making a Module
    4.5 OUR SCRIPTING ENGINE
        The Script Engine Class
        The Scripted Action Class
        The Scripted Transition Class
        Some Examples
    CONCLUSION

CHAPTER 5: WAYPOINT NETWORKS
    OVERVIEW
    5.1 WAYPOINT NETWORKS
        5.1.1 WAYPOINTS
            Discrete Simulations in Continuous Worlds
            The Waypoint Class
        5.1.2 NETWORK EDGES
            The Network Edge Class
        5.1.3 THE WAYPOINT NETWORK
            The Waypoint Network Class
    5.2 NAVIGATING THE WAYPOINT NETWORK
        5.2.1 THE PATHFIND BEHAVIOR
            Getting Stuck
        5.2.2 A WORD ON AVOIDANCE
    5.3 FLOCKING AND WAYPOINT NETWORKS
    5.4 SQUADS AND STATE MACHINES
        5.4.1 METHODS OF SQUAD COMMUNICATION
            Direct Control
            Poll the Leader
            Events
        5.4.2 THE SQUAD MEMBER
            The Squad Entity Class
            The Squad Member State Machine
        5.4.3 THE SQUAD LEADER
            The Squad Leader Class
            The Squad Leader State Machine
    5.5 SETTING UP THE DEMO
    CONCLUSION

Chapter 1: Pathfinding I

Introduction

Artificial intelligence (AI) is one of the critical components in the modern game development project. With the exception of graphics and sound, there are very few elements that are as vitally important when it comes to establishing engaging gameplay. In many ways, artificial intelligence is the game, in a manner of speaking. The AI breathes the life of the development team and the designers into the game and presents the player with the challenges that keep the game interesting and fun to play. In fact, it is not uncommon for modern game engines to devote as much as 20% or more of their processing time solely to artificial intelligence.

Artificial intelligence can be an awkward subject to define because many game developers hold a different set of ideas about what it means and what exactly it constitutes. Some people might place specific methods of solving problems in different AI categories while others bind it all up into a single unified AI concept. So let us begin by trying to establish a working definition and then we will move on to what components it encompasses.

We all have a pretty good understanding about what the term "artificial" means: it is a man-made substitute for something natural (i.e. a simulation). But "intelligence" immediately begs the question, "What do we mean when we say something is intelligent?" Since this is not a psychology course, we need not delve too deep to arrive at a useful answer. Certainly there are many ways that people intuitively characterize intelligence. We often think of intelligence as a measure of one's ability to acquire knowledge and learn from experience. A more utilitarian definition might focus on the use of reasoning faculties to solve problems. In the relatively young field of artificial intelligence, much research has gone, and continues to go, into creating machines that simulate human intelligence. By combining and simplifying these various concepts we can arrive at a fairly good standard definition: artificial intelligence is the application of simulated reasoning for the purposes of making informed decisions and solving problems. This seems like a fair enough way to characterize AI, it probably sits well with what most of you had in mind when pondering the nature of the terminology, and it is probably a good overall means for understanding AI conceptually.

But an important question to ask is, "is this what we mean when we talk about AI in games?" Well, yes, the definition we have constructed here is quite applicable. But for our purposes as game developers, we will simplify even further and define artificial intelligence as the means by which a system approximates the appearance of intelligent decision processes. This concept of the "appearance" of intelligence is a very critical point. It is significantly important because there is obviously a distinct difference between artificial intelligence being intelligent and artificial intelligence appearing intelligent.

A quick story will illustrate this point. While working on the game Psi Ops: The Mindgate Conspiracy™, our team spent a good deal of time making the various enemies behave more intelligently. We programmed them to crouch and duck, hide behind cover, roll out and fire, throw grenades from cover, dodge objects thrown at them, chase a fleeing player, and pull alarms when they needed help. Despite all this effort, the game designers did not think that the enemies were intelligent enough. They wanted them to be "smarter" and exhibit even more complex and intelligent behaviors.

The AI programming team's initial response to this request was simply to double the hit points of the minions. The results of this small modification may surprise you: the designers were thrilled with the "intelligence enhancement." But in reality, it is clear that the characters were no more intelligent than they were before the hit-point increase. Essentially what we learned was that the enemies were dying too quickly for the designers to fully appreciate their intelligence. While it may not immediately jump out at you, this story illustrates an important point: when it comes to artificial intelligence in games, it is all about end user perception. In other words, game AI is essentially a results-oriented concept. It is ultimately irrelevant to the player how an AI system makes the decisions it does. Players care only that the system seems to produce behaviors that give at least the outward appearance of having been thoughtfully considered. How the AI was able to arrive at the decision it did (and we will explore some of the methods for handling decision making in this course) is not remotely as important as the action that took place when the decision was finally made. That is, for game development, it is fair to say that if it looks smart, it is smart.

This is not a new concept of course. Alan Turing's famous test of machine intelligence is a good example. The Turing Test conceived of locking away a human interrogator in one room while a human and a computer were situated in another room. The means of communication between the two rooms would be text only. The central question was, could the interrogator tell the difference between the human and the computer based on an interactive exchange of questions and dialog? Turing suggested that the measure of the machine's intelligence was its ability to convince the human interrogator that it was interacting with another human.

This is not that far from where we find ourselves today. Essentially, our goal is to design an AI that makes the player believe that the entities in the game world are behaving the way one would expect an intelligent being to behave. To be sure, this was not always the case in the game field. Indeed the earliest games used virtually no AI at all. Games like Space Invaders, Galaga, Centipede, and Donkey Kong made little effort to convince the player that they were interacting with truly intelligent beings. Hardware limitations resulted in gameplay that relied almost exclusively on pattern-driven events with some varying degree of randomness. More often than not, a good player would recognize and memorize those patterns (e.g. where the next enemy would appear at the top of the screen) and use that knowledge to advance in the game.

Today's games exhibit a considerably more sophisticated set of artificial intelligence than those early titles could provide. Physically realistic looking game characters are expected to be paired with realistic looking behavioral traits. As the rendering of more realistic scenery and in-game characters continues the steady march forward, the AI programmer will feel the pressure of having to maintain pace. However, in modern game systems, graphics and sound have their own dedicated hardware, while AI remains CPU bound. So our job is to build AI systems that can provide that added realism without consuming all of the processing load needed for other important game tasks that aid in player immersion (like realistic physics models, for example).

1.1 A Few Guidelines

Before we begin discussion about the different types of artificial intelligence we are going to examine in this course, let us first take a moment to establish a few helpful guidelines that will come in handy when designing artificial intelligence for games.

1.1.1 Love and Kisses

One of the most important things to remember is the "KISS" method. KISS is an acronym that stands for "Keep It Simple Stupid". As games become more advanced and hardware becomes ever faster, the software simulations are becoming more advanced to follow suit. But not only do we not have the luxury of infinite processing time, we will soon learn that we do not need the systems to be overly complicated. In most cases, the AI only needs to convince the player that it is doing something smart. Remember the mantra from earlier: artificial intelligence approximates the appearance of intelligent decision processes. To be sure, the simplicity of the artificial intelligence used in many games would probably shock you.

While simplicity can be a beautiful thing, we must be careful to make sure that the simplicity of our implementation does not come at the cost of the player experience. The last thing we want is for our game AI to become predictable and boring. This is where our second and related acronym comes in: "LOVE". This is short for "Leave Out Virtually Everything".

1.1.2 Hard Does Not Equal Fun

It is probably fair to say that most people do not want to play a game where it takes fourteen hours to complete a small level because the puzzles are too hard or they keep dying over and over again. Indeed it is not complicated to code artificial intelligence that is so difficult to beat that the game is no longer any fun to play. It is very easy for a developer, who has advance knowledge of all of the inner workings of the simulation, to build an artificial intelligence system that makes optimal use of its resources. The problem with this approach is that a human player could never be as efficient as the computer driven artificial intelligence. So, on a more practical level, there is a fine line that the AI developer must always be aware of.

Real time strategy (RTS) games can easily fall into this trap. Since an RTS game is deeply mathematical, the computer can direct all of its buildings and units all of the time. Apart from the obvious advantage of pure calculation ability that the computer maintains, at any given time a player can only interact with so many units and can only view a subset of the entire map. This places him at an impossible disadvantage versus the AI opponent. Certainly in this case it is obvious that the player will never be able to play as effectively as the artificial intelligence, and allowances will need to be made. You should always be aware of these types of imbalances which the game interface imposes on the player.

1.1.3 Play Fair

Taking a cue from our last rule, it is supremely frustrating to play a game where the artificial intelligence "cheats". Once again, real time strategy games often exhibit a tendency to cheat in this fashion: in many cases, the game AI knows where the player units are located, and often it does not need to pay the same costs for unit production. But RTS games are not the only culprits. In many first person shooters, the artificial intelligence also knows where the player is, regardless of where he runs, and when alerted, relentlessly chases him down. Additionally, enemies in many shooters do not have to worry about ammunition resources. All of this adds up to an unfair advantage and potentially, a very disenchanted player. This is grossly unfair to the player. So, as much as possible, you should make the artificial intelligence play by the same rules that the player must abide by.

1.2 Fundamental Artificial Intelligence

There are many subcategories of artificial intelligence, each of which has its own usage scenario. Some of these types tend towards the complex and as such, remain more in the domain of academic research than in game development (although there is often some crossover).

Classification is a type of artificial intelligence that typically takes some input data and classifies it as something else. For instance, there may be a system that looks at a small bitmap and determines what letter of the alphabet or number it is. Types of classification systems include neural networks and fuzzy systems. These systems require "training" in order to produce the classifications desired. This training typically involves showing the system examples of each of the things which need to be classified, along with the expected result. The training algorithm then adjusts the internals of the classification system to attempt to produce the desired output on the fly. Here we see the concept of an AI actually learning and getting smarter as a result. The idea of simulated "learning" makes these systems very popular in the wider AI research field, but less so in games. These systems can be very difficult to build and perfect and are not widely used in games. We will not be discussing classification much in this course since it tends to be a more esoteric type of AI which is often more complicated than we will need in the typical commercial game.

Life Systems are another type of artificial intelligence that is popular in AI research. Genetic algorithms are a popular example. Life Systems work by creating a set of artificial intelligence systems, letting them perform, and then rating them. The ones with the highest rated performances survive and evolve, while the ones with the lowest rated performances are killed off. Clearly we see a relationship to the study of evolution at work here. What is most interesting about these systems is the concept of emergent behaviors. Rather than scripted sequences that are hardwired into the AI, such systems will often produce completely unexpected behaviors that emerge as a result of the AI adapting and maturing over time. This kind of AI can be a lot of fun to observe and study, but there is a major downside. Emergent behaviors tend to lead to systems that are very idiosyncratic. Sometimes you will get the behavior you expect, and other times you will not. The problem is, when you cannot control the outcomes, scenario design becomes very difficult. While there are some games that make use of this AI (e.g. SimCity™), most games do not. The Adaptable AI seminar here at the Game Institute explores an interesting way to approach Life Systems by utilizing concepts from biology, evolution, and genetic science.

It is a great way to follow up the material we will study in this course if you are interested in investigating this area further.

It is worth noting upfront that in this course, we are going to adopt a practical approach to studying AI. Our goal is not to learn a little bit about everything, but rather to zero in on the key areas of study that the typical game AI programmer will need to master if he wants to join a professional programming team. As such, we are going to spend almost all of our time focusing on two of the most fundamental artificial intelligence systems that game developers need to learn and understand: decision making and environment navigation (or "pathfinding"). The combination of just these two AI types can lead to the production of virtually any game scenario you desire. You can create NPCs that range from the simplest of life-forms to emotionally complex entities that exhibit sophisticated reasoning ability. The decision making and environment navigation systems we develop together in this course will serve as a solid foundation for your AI engine and will provide you with the ability to really express yourself creatively in future projects. If you are also taking the Graphics Programming series here at the Game Institute, then combined with this course, you will have a very impressive set of tools that you can leverage to build tech demos for your next interview or even complex games for your own enjoyment. While our focus in this course will be on the AI fundamentals, please feel free to drop by the live discussion sessions if you would like to talk about other types of AI that we will not cover in the text.

1.2.1 Decision Making

Decision making is the core component of all artificial intelligence systems. It is a compilation of routines which help the AI entities decide what they want to do next. This system typically determines decisions such as what to build, when to attack, when to shoot, when to run away, when to get health, when to look for cover, etc. Decision making is the chief means by which the artificial intelligence appears intelligent. But as we discussed earlier, we know that appearances can be deceiving and that under the hood things may not be nearly as complicated as what the behaviors would indicate. Remember, perception is everything and results are what matters. State machines, decision trees, and squad behaviors will be examples of decision making that we will talk about in this course.

1.2.2 Pathfinding

Pathfinding is the aspect of artificial intelligence systems which assists the AI driven entities with navigating in the game environment. At its simplest, this means moving the entity from one location to the next without running into things. Pathfinding is arguably the most fundamental type of artificial intelligence for games because without it, entities will remain unable to take on any convincing physical presence. Indeed there are very few genres of games where these algorithms will not be needed.

1.3 Getting Started

Now that we have had a quick overview of the various types of artificial intelligence and some ground rules have been established, we will waste no time in getting started. We will begin our AI studies together with one of the most important areas of artificial intelligence: pathfinding. This topic will serve as a good lead-in for Decision Making because in a sense, pathfinding represents the simplest decision making process of all. The decision centers around the question: how do I get from point A to point B? The decisions themselves are ultimately a choice between directions of travel. That is, if I am 'here' and I want to go 'there', what is the next step I should take? Should I go left, right, up, down, etc.? What step should I then take after that? And so on.

In the remainder of this chapter the following questions will be addressed:

- What is pathfinding?
- Why is pathfinding useful or necessary?
- What is a graph?
- What is a weighted graph?
- What is a directed graph?
- What are the traditional types of pathfinding methods?
- How are the traditional types of pathfinding methods implemented?
- Who is E. Dijkstra?
- What is Dijkstra's Algorithm?
- What are some traditional implementations of Dijkstra's Algorithm?

1.4 Introduction to Pathfinding

Pathfinding is a critical component in just about every game on the shelves. Without pathfinding, autonomous entities would not be able to get from place to place. Even if your game engine simply selected random points in the world at random times and said to an NPC, "go there", from the player's perspective the NPC would appear to have some particular purpose as he wandered by. So by providing the means to get from A to B, we have instilled the NPC with some very basic, but still very important, game AI capability.

At first you might think that this does not really seem to be artificial intelligence. Think of a simple case where you have a large field and a tractor that needs to go from one side to the other. The tractor starts on one side and proceeds in a straight line to the other side. When the tractor reaches the other side, it stops. Did it use a pathfinding algorithm? Of course! It may be very simple and rudimentary, but it found a straight-line path from one side of the field to the other. If the field had ditches to avoid, this particular algorithm would not have been the best choice. How would the tractor get across in that case? That question is one of many we will learn the answer to as we progress in this course.

1.4.1 Graphs and Pathfinding

Pathfinding is ultimately about the traversal of a graph. For our purposes, a graph is simply a set of points connected by paths between them (see Figure 1.1).

Figure 1.1 A graph can contain any number of points (also called 'nodes') as well as any number of connections between those points.

The interesting thing about a graph is that there are actually an infinite number of paths from any point on the graph to any other point on the graph. Pathfinding in the gaming world typically means finding the shortest path. It would not make sense to have an avatar running back and forth between points, or going in circles multiple times before reaching the destination.

Graphs can also have costs associated with traveling a particular path between points. A cost is a value that indicates an implied relative expense for choosing one path over another. Depending on how we choose to interpret this value, one path will be deemed more cost-effective to traverse than another, so we will generally want to choose that path over the alternative(s). This type of graph is referred to as a weighted graph (see Figure 1.2).

Figure 1.2 Weighted graphs are identical to un-weighted graphs in all respects, except that there are weights/costs associated with the paths between points.

Lastly, there is the concept of a directional graph. Like a weighted graph, a directional graph can contain weights on the paths between its nodes. This weight might be fixed or it might even be a function. In directional graphs, each path between the points can be considered to be one-way (see Figure 1.3). These types of graphs can be more complex, as the entity cannot always go back from the direction it came.

Figure 1.3 Like a weighted graph, a directional graph can contain weights on the paths between its nodes.
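All three kinds of graph described above can be kept in the same data structure, an adjacency list that stores, for each node, its neighbors and the cost of the connecting path. The following is only a sketch of the idea; the node names and costs are invented for illustration and are not taken from the figures.

```cpp
#include <cassert>
#include <map>
#include <utility>
#include <vector>

// A small weighted graph kept as an adjacency list: for each node we store
// its neighbors together with the cost of the connecting path.
struct WeightedGraph
{
    // edges[node] -> list of (neighbor, cost) pairs
    std::map<char, std::vector<std::pair<char, int>>> edges;

    void addPath(char a, char b, int cost)
    {
        // Paths here are two-way; a directional graph would add only one side.
        edges[a].push_back({b, cost});
        edges[b].push_back({a, cost});
    }

    int costBetween(char a, char b) const
    {
        auto it = edges.find(a);
        if (it == edges.end()) return -1;
        for (const auto& p : it->second)
            if (p.first == b) return p.second;
        return -1;  // no direct path between the two nodes
    }
};
```

An un-weighted graph is simply the same structure with every cost set to 1, which is why the traversal algorithms later in the chapter can treat all three flavors uniformly.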

1.5 Graph Traversals

Pathfinding is simply a shortest distance graph traversal. Let us imagine that we are traversing the unweighted graph in Figure 1.1, and we want to get from A to E. We would want to travel from A to B to D to E because this path is the most direct route. However, in the case of the weighted graph (Figure 1.2), we would choose the path A to B to D to F to E, as it is the least expensive path in terms of cost. In the case of the directed graph (Figure 1.3), we would choose the path A to B to D to F to G to E. How we determined that those paths were the least expensive is covered in the next topic.

The graphs we have already seen are a bit contrived, so let us change our design to something more like a map, as this is a fairly common graph layout in games. For now, we will define our graph as a regularly spaced grid of points where one can travel N, NE, E, SE, S, SW, W, or NW (Figure 1.4).

A graph such as the one in Figure 1.4 represents a map which can be found in many top-down real time strategy games, because such games typically take place over vast expanses of terrain. In most games, terrain geometry is built using a uniform grid of polygons, and this lends itself well to such a representation. Units can move in any of the cardinal directions. To make it more expensive to travel to a given node, a cost can be associated either with the node itself or the path to the node. To prevent movement to a particular node requires only the removal of the node from the graph, simply marking it as impassable.
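On a regular grid like the one just described, the graph is rarely stored as an explicit node list; instead the neighbors of a square are generated on demand from its coordinates. The sketch below assumes a simple width-by-height grid with a per-square blocked flag; the structure and names are illustrative, not from the course code.

```cpp
#include <cassert>
#include <vector>

// A grid graph: each square connects to its (up to eight) passable
// neighbors in the directions N, NE, E, SE, S, SW, W, NW.
struct Grid
{
    int width, height;
    std::vector<bool> blocked;  // row-major, width * height entries

    Grid(int w, int h) : width(w), height(h), blocked(w * h, false) {}

    bool passable(int x, int y) const
    {
        return x >= 0 && x < width && y >= 0 && y < height &&
               !blocked[y * width + x];
    }

    // Collect the row-major indices of the passable neighbor squares.
    std::vector<int> neighbors(int x, int y) const
    {
        std::vector<int> out;
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx)
            {
                if (dx == 0 && dy == 0) continue;  // skip the square itself
                if (passable(x + dx, y + dy))
                    out.push_back((y + dy) * width + (x + dx));
            }
        return out;
    }
};
```

Removing a node from the graph, as the text suggests, is then just a matter of setting its blocked flag; the neighbor enumeration skips it automatically.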

Although the graph displayed in Figure 1.4 is the primary type of graph we will be examining in this course, it is very difficult to read in that form. From now on, we will look at graphs as shown in Figure 1.5. The green dot represents where we start (our origin) and the red dot represents where we wish to go (our destination). Valid paths of travel are in all of the cardinal directions. Impassable grid squares will be black. As grid squares become increasingly more expensive to travel through, they will become darker gray.

1.5.1 Non-Look-Ahead Iterative Traversals

With this new graph to navigate, let us briefly talk about some of the methods that might be used to get from origin to destination. The concept itself is simple: take one step at a time towards the goal. The system becomes more complicated when obstacles in the environment must be navigated around. So first, let us examine the most common methods of avoiding obstacles in non-look-ahead iterative traversals; these methods are simple, but they typically have pitfalls. They all have one thing in common: they do not look ahead to find a good path to the goal. They will make their decision based solely on their current position and the position/direction of the goal.

1.5.1.1 Random Backstepping

[Figure 1.6: Random Backstepping trapped in a cul-de-sac]

The simplest method is to take one step at a time in the direction of the goal. If an obstacle is encountered, try to step around it. If the obstacle is too large/long (i.e. 3 or more squares long centered on the current location), take a step back in a random direction, and try again. This method encounters serious problems if a cul-de-sac is encountered, as it only takes a single step back (see Figure 1.6).

1.5.1.2 Obstacle Tracing

[Figure 1.7: Obstacle Tracing caught in a repeating cycle]

Another method is to move one step at a time in the direction of the goal, and if an obstacle is encountered, trace around it to the right. This method encounters problems in complicated graphs, as it can get caught in a cycle where it repeats (see Figure 1.7). In the case of this graph, tracing to the left would have succeeded. To prevent this method from entering infinite loops, a common solution is to detect if the path taken traces across the path again. Another method is to trace to the right until the path is crossed, then trace to the left until the path is crossed. However, this does not make the method any more successful.

1.5.2 Look-Ahead Iterative Traversals

Now that we have seen some of the methods that can be used to traverse a graph without looking ahead, let us take a look at some methods that plan the entire path before taking a single step.

1.5.2.1 Breadth First Search

[Figure 1.8: a Breadth First Search expanding outward from the origin]

One of the most fundamental graph traversal methods is Breadth First Search. This method finds the shortest path in an un-weighted graph by iteratively searching the neighbors of the start position until it reaches the end position (see Figure 1.8). This is a robust method which will always find the shortest path, but it can require much CPU time doing it.
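The Breadth First Search idea can be sketched on a small grid as follows. The search fans out from the start one ring of neighbors at a time, so the first time it touches the goal it has found a shortest path (measured in steps). Four directions of travel are used for brevity, and the grid layout and return convention are assumptions of this sketch, not the course framework.

```cpp
#include <cassert>
#include <queue>
#include <utility>
#include <vector>

// Breadth First Search on a grid of 0 = open, 1 = blocked squares.
// Returns the number of steps on a shortest path, or -1 if unreachable.
int bfsSteps(const std::vector<std::vector<int>>& grid,
             int sx, int sy, int gx, int gy)
{
    int h = (int)grid.size(), w = (int)grid[0].size();
    std::vector<std::vector<int>> dist(h, std::vector<int>(w, -1));
    std::queue<std::pair<int, int>> open;
    dist[sy][sx] = 0;
    open.push({sx, sy});

    const int dx[] = {0, 1, 0, -1}, dy[] = {-1, 0, 1, 0};
    while (!open.empty())
    {
        auto [x, y] = open.front();
        open.pop();
        if (x == gx && y == gy) return dist[y][x];
        for (int d = 0; d < 4; ++d)
        {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx >= 0 && nx < w && ny >= 0 && ny < h &&
                grid[ny][nx] == 0 && dist[ny][nx] == -1)
            {
                dist[ny][nx] = dist[y][x] + 1;  // one ring further out
                open.push({nx, ny});
            }
        }
    }
    return -1;  // queue exhausted: goal unreachable
}
```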

1.5.2.2 Best First Search

[Figure 1.9: a Best First Search biased toward the goal]

Another method which is very similar to the Breadth First Search is the Best First Search. This method iteratively searches the neighbors of the start node of an un-weighted graph, but it chooses the neighbor with the perceived best chance of having a path first (see Figure 1.9). It sacrifices the shortest path for the speed in which it finds a path, using a heuristic. This method will always find a path if there is a path to be found, but it may not be the shortest.

1.5.2.3 Dijkstra's Method

[Figure 1.10: Dijkstra's method expanding by accumulated cost]

Another method, created by E. Dijkstra and now called Dijkstra's method, is a very robust method of traversing graphs. This method finds the shortest path of a weighted graph by keeping track of the cost to every node (see Figure 1.10). This is a useful method, but it is not the fastest method when dealing with large graphs.

1.5.2.4 A* Method

[Figure 1.11: an A* traversal guided by a heuristic]

One of the most efficient pathfinding methods available is known as A* (A-star). This method is a very robust weighted graph traversal that makes use of heuristics to find the goal in a timely manner (see Figure 1.11). This method is very powerful, as it allows extra knowledge about the graph to be leveraged in the heuristic. This method will be the center of discussion in the next chapter.

1.5.3 Look-Ahead Recursive Traversals

Some graph traversal methods are most easily implemented via recursion. The most popular of these methods is the Depth First Search traversal. Instead of searching all the neighbors as in other methods, it searches deep into the graph first, increasing the depth until the goal is found. This can be a very time consuming traversal, and dangerous in large graphs, due to recursive depth. This method can have problems if the depth of the search is not limited. Many times this is handled by limiting the depth by guessing the distance to the goal, via a heuristic.
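The cost-tracking idea behind Dijkstra's method described above can be sketched as follows. The graph is an adjacency list of (neighbor, cost) pairs; the search always expands the cheapest frontier node, so the cost it settles for each node is that of a shortest path. The adjacency-list layout here is an assumption of the sketch, not the representation used later in the course code.

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Dijkstra's method: returns the cheapest known cost from 'start' to every
// node of a weighted graph given as adjacency lists of (neighbor, cost).
std::vector<int> dijkstra(
    const std::vector<std::vector<std::pair<int, int>>>& adj, int start)
{
    const int INF = 1 << 30;
    std::vector<int> cost(adj.size(), INF);
    // min-heap ordered by (cost so far, node)
    std::priority_queue<std::pair<int, int>,
                        std::vector<std::pair<int, int>>,
                        std::greater<std::pair<int, int>>> open;
    cost[start] = 0;
    open.push({0, start});

    while (!open.empty())
    {
        auto [c, n] = open.top();
        open.pop();
        if (c > cost[n]) continue;  // stale entry: a cheaper route was found
        for (auto [next, w] : adj[n])
            if (c + w < cost[next])
            {
                cost[next] = c + w;
                open.push({cost[next], next});
            }
    }
    return cost;
}
```

A* differs from this loop in only one place: the priority of a frontier node also includes a heuristic estimate of the remaining distance to the goal.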

1.6 Non-Look-Ahead Iterative Methods, In Depth

We have discussed performing pathfinding one step at a time and mentioned some methods for dealing with navigating around obstacles when they are encountered. Now let us take a closer look at these obstacle avoidance strategies to better understand them.

1.6.1 Random Backstepping

The Random Backstepping (or Random Bounce) method is simple in its execution: it moves a step at a time towards the goal. If it runs into an obstacle, it chooses a random direction in which to move and tries moving toward the goal again. Though simple and elegant, this method will fail to get out of deep cul-de-sacs.

The Algorithm

    bool RandomBounce(Node start, Node goal)
    {
        Node n = start;
        Node next;

        while(true)
        {
            next = n.getNodeInClosestDirectionToGoal(goal);

            while (next.blocked)
                next = n.getRandomNeighbor();

            if (next == goal)
                return true;

            n = next;
        }

        return false;
    }

    Listing 1.1

The method outlined in Listing 1.1 is straightforward. View it in its entirety, and then read on for more detailed discussion.

    bool RandomBounce(Node start, Node goal)

Let us start with the declaration itself. We will provide a start node and the goal node at which we are attempting to arrive. When we are done, we will return true if we arrived at the goal node, and false if we fail.

    Node n = start;
    Node next;

First, some locals are defined to keep track of the starting node and the next node to which we plan to go. The node is initialized to be the starting node which was passed in.

    while(true)

This method will run until we find a solution. Presumably, if we fail at finding a solution, we will give up after some number of iterations rather than cycling forever as this particular loop does. An enhancement to this algorithm might be to add some kind of maximum iteration count which is checked periodically so that it does not continue to fail forever.

    next = n.getNodeInClosestDirectionToGoal(goal);

The method getNodeInClosestDirectionToGoal() is graph specific, but it will always return the best neighbor node to this node which will put us closer to the goal.

    while (next.blocked)
        next = n.getRandomNeighbor();

Here is where the obstacle avoidance is applied. If the next best node that leads us towards the goal is blocked, a randomly selected neighbor to this node will be selected and tested. This is done until a neighbor node is found which is not blocked. Presumably, we would exit if it was discovered that all of our neighbor nodes are blocked.

    if (next == goal)
        return true;

    n = next;

If the next node returned is the goal node, then we successfully made it to the goal. After a valid next node is returned, it will be set to our current node and iteration continues. This process is continued until we reach our goal.

Note that this algorithm does not keep track of the path it took; it just knows where it is, and where it wants to go. It also never gives up in its search for the goal. This is very different from any of the algorithms that we are going to implement in this course. In many cases this is adequate for some games, even if it may be a little boring.

1.6.2 Obstacle Tracing

Obstacle Tracing is exactly like the Random Bounce method in its means for getting to the goal. The idea is to take a step at a time towards the goal, and if the step we wish to take is blocked, the difference is that rather than picking a random direction and going in that direction instead, it attempts to trace around the encountered obstacle until it can head toward its goal again. A more robust method would change the direction in which it traces, or wait until it crosses a line from the start to the goal again before attempting to approach the goal.
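The maximum-iteration-count enhancement suggested for the Random Bounce loop can be sketched concretely on a small grid. Everything below is invented for illustration (the grid type, helper names, and step logic), as a stand-in for the book's Node class; the point is only the bounded outer loop.

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// A tiny grid for the sketch: 1 = impassable, stored row-major.
struct BounceGrid
{
    int width, height;
    std::vector<int> blocked;

    bool open(int x, int y) const
    {
        return x >= 0 && x < width && y >= 0 && y < height &&
               !blocked[y * width + x];
    }
};

// Random Bounce with a cap: step greedily toward the goal, back-step
// randomly when blocked, and give up after maxSteps iterations.
bool randomBounce(const BounceGrid& g, int x, int y,
                  int gx, int gy, int maxSteps)
{
    for (int step = 0; step < maxSteps; ++step)  // the enhancement: give up eventually
    {
        if (x == gx && y == gy) return true;

        // One step in the direction that closes the gap to the goal.
        int nx = x + (gx > x ? 1 : gx < x ? -1 : 0);
        int ny = y + (gy > y ? 1 : gy < y ? -1 : 0);

        // Blocked? Try random neighbors until an open square turns up.
        while (!g.open(nx, ny))
        {
            nx = x + (std::rand() % 3) - 1;
            ny = y + (std::rand() % 3) - 1;
        }
        x = nx;
        y = ny;
    }
    return false;  // ran out of iterations without reaching the goal
}
```

With the cap in place, an unreachable goal now produces a clean failure instead of an endless loop, which is what the text recommends checking for.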

getNodeInClosestDirectionToGoal(). Node next. and false if we fail. bool Trace(Node start. } return false.getNodeInClosestDirectionToGoal(goal). next = n. but it will always return the best neighbor node to this node. Review the code in its entirety and read on for more in-depth discussion. The start node which was passed in will be initialized as our starting node. n = next. we will give up after some number of iterations rather than continuing forever as this loop does. Node n = start. Just as with the Random Bounce method. while(true) Just as with RandomBounce. 18 . } Listing 1.2 is identical to the random bounce method with the exception of what it does when its desired node is blocked.getLeftNeighbor(next). We will return true if we arrived at the goal node.The Algorithm bool Trace(Node start.blocked) next = n. a start node and the goal node at which we are trying to arrive will be given. getNodeInClosestDirectionToGoal() is graph specific. this method will run until a solution is found. while (next. Just like Random Bounce. if (next == goal) return true. which will put us closer to the goal. if (next == goal) return true. Node goal) Let us start with the declaration itself.2 The method outlined in Listing 1. Some local variables are defined to keep track of the current node. Presumably. and the next node we plan to go to. while(true) { next = n. Node goal) { Node n = start. Node next.

and if we passed that line twice while tracing, we could try to trace the other way, or just give up. Presumably, we would also exit if we found ourselves in a condition where there was no way out.

1.7 Look-Ahead Iterative Methods, In Depth

We mentioned a variety of look-ahead iterative methods which plan the entire route from the starting point to the goal in advance. This ensures the path chosen will be effective, and in most cases, the shortest. Let us look at these methods in more detail.

1.7.1 A Note on Implementation Examples

Both implementation examples we are going to discuss are taken from the code provided in the course projects. For the sake of brevity, we will only discuss the contents of the iterate() method and any heuristic functions that apply to the pathfinding algorithm. Of course, in most games you would want to find the entire path in one pass rather than iterating repeatedly.

class MapGridWalker
{
public:
    typedef enum WALKSTATE { STILLLOOKING, REACHEDGOAL,
                             UNABLETOREACHGOAL } WALKSTATETYPE;
    typedef std::vector<std::string> stringvec;

    MapGridWalker();
    MapGridWalker(MapGrid* grid) { m_grid = grid; }
    virtual ~MapGridWalker();

    virtual void drawState(CDC* dc, CRect gridBounds) = 0;
    virtual WALKSTATETYPE iterate() = 0;
    virtual void reset() = 0;
    virtual std::string getClassDescription() = 0;
    virtual bool weightedGraphSupported() { return false; }
    virtual bool heuristicsSupported() { return false; }
    virtual stringvec heuristicTypesSupported()
    {
        stringvec empty;
        return empty;
    }

    void setMapGrid(MapGrid* grid) { m_grid = grid; }
    MapGrid* getMapGrid() { return m_grid; }

protected:
    MapGrid* m_grid;
};

Listing 1.3

In order to make the chapter demo display the path as it is being discovered, objects derived from MapGridWalker do their traversals one step at a time inside the iterate() method. After each step of the traversal, drawState() is called to draw the current state of the traversal. This allows the UI to visualize the progress of the algorithm. Let us discuss a few of the elements of this class in more detail.

typedef enum WALKSTATE { STILLLOOKING, REACHEDGOAL,
                         UNABLETOREACHGOAL } WALKSTATETYPE;

The WALKSTATE enumeration provides information on the progress of the algorithm in its search for the goal. STILLLOOKING represents that the algorithm is still searching for the goal. REACHEDGOAL means the algorithm has reached the goal and a path has been created. UNABLETOREACHGOAL is returned when the algorithm cannot build a path from the start and goal nodes given.

MapGridWalker();
MapGridWalker(MapGrid* grid) { m_grid = grid; }

The class supports a default constructor as well as a constructor which takes the grid upon which the walker will operate as a parameter. If the default constructor is used, the grid must be set separately.

virtual ~MapGridWalker();

The class has a virtual destructor so that derived classes can properly clean up their resources if delete is called on a MapGridWalker pointer.

virtual void drawState(CDC* dc, CRect gridBounds) = 0;

The drawState() method allows the class to draw its current progress into the given device context within the bounds given.

virtual WALKSTATETYPE iterate() = 0;

The iterate() method is the primary interface to the class. It performs one iteration of the graph traversal and returns its state.

virtual void reset() = 0;

This method resets the algorithm so it can start again.

virtual std::string getClassDescription() = 0;

This method returns a description of the class for the UI.

virtual bool weightedGraphSupported() { return false; }
virtual bool heuristicsSupported() { return false; }
virtual stringvec heuristicTypesSupported()
{
    stringvec empty;
    return empty;
}

These methods inform us whether the given class instantiation supports weighted graphs or heuristics. Additionally, the heuristics which are supported are given as strings for the UI.

void setMapGrid(MapGrid* grid) { m_grid = grid; }
MapGrid* getMapGrid() { return m_grid; }

These accessors provide access to the map grid which the walker is traversing.

1.7.2 Breadth First Search

The Breadth First Search algorithm is a simple traversal of the graph in which every neighbor of a node is visited before any of those neighbors' own neighbors. This method does not care about weighted graphs, as it only cares about the number of nodes needed to traverse from start to finish; counted in steps, it finds the shortest path. The largest problem with this algorithm is encountered with large graphs: the traversal can take a very long time.

bool BreadthFirstSearch(Node start, Node goal)
{
    Queue open;
    Node n, child;
    start.parent = NULL;
    open.enqueue(start);
    while (!open.isEmpty())
    {
        n = open.dequeue();
        if (n == goal)
        {
            makePath();
            return true;
        }
        while (n.hasMoreChildren())
        {
            child = n.getNextChild();
            if (child.visited()) continue;
            child.parent = n;
            child.setVisited(true);
            open.enqueue(child);
        }
    }
    return false;
}

Listing 1.4

Take a moment and examine the algorithm in Listing 1.4. It will return true if it finds a path to the goal, and false if it does not. Notice how it ends in the event that it fails to find a path. As mentioned before, this method, as well as all of the other methods we will discuss henceforth, builds the entire path before it takes a single step, unlike the non-look-ahead methods. Let us look at the method in more detail.

bool BreadthFirstSearch(Node start, Node goal)

First, the method expects a starting node and a goal node.

Queue open;
Node n, child;

A queue is needed to hold the nodes which we plan to visit, as well as a couple of nodes to keep track of where we currently are and which child we are about to visit.

start.parent = NULL;
open.enqueue(start);

We make sure to set the parent pointer of the starting node to NULL, since this is where we started.

Our queue is primed by adding the start node to it. This is the first node which we will visit, since we are starting at this node.

while (!open.isEmpty())

We will iterate through every node in the queue until we find the goal, at which point we will abort the loop. If the queue becomes empty and the goal was not found, we cannot reach the goal node from the start node.

n = open.dequeue();

The next node from the queue is returned and set as our current node.

if (n == goal)
{
    makePath();
    return true;
}

If the current node is the goal node, we make the path and return success.

while (n.hasMoreChildren())

Next, we iterate across all the children of the current node.

child = n.getNextChild();
if (child.visited()) continue;

The current child is set to the next child of the current node. If this child has been visited already, we skip it. Remember, a neighbor of this node has this node as a neighbor as well; it will try to visit this node again unless it is specified that it has already been visited before.

child.parent = n;
child.setVisited(true);
open.enqueue(child);

This child's parent is set to the current node so that we know how we reached it. We mark the child as visited so that we do not visit it again. Then the child node is added to the queue to be visited later.
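The walkthrough above can be exercised end to end with a small self-contained sketch. The graph representation (lists of neighbor indices) and all of the names below are illustrative, not the course framework's:

```cpp
#include <cassert>
#include <queue>
#include <vector>

// A node in a tiny unweighted graph: neighbor indices plus the
// bookkeeping fields described in the walkthrough.
struct BfsNode {
    std::vector<int> neighbors;
    int  parent  = -1;
    bool visited = false;
};

// Returns the path from start to goal (inclusive) found by a breadth
// first search, or an empty vector if the goal is unreachable.
std::vector<int> breadthFirstSearch(std::vector<BfsNode>& g, int start, int goal)
{
    std::queue<int> open;
    g[start].parent  = -1;     // the start has no parent
    g[start].visited = true;
    open.push(start);

    while (!open.empty()) {
        int n = open.front();
        open.pop();
        if (n == goal) {
            // walk the parent pointers back to the start (the makePath step)
            std::vector<int> path;
            for (int v = goal; v != -1; v = g[v].parent)
                path.insert(path.begin(), v);
            return path;
        }
        for (std::size_t i = 0; i < g[n].neighbors.size(); ++i) {
            int child = g[n].neighbors[i];
            if (g[child].visited) continue;   // skip nodes we have seen
            g[child].parent  = n;             // remember how we got here
            g[child].visited = true;
            open.push(child);
        }
    }
    return std::vector<int>();   // queue emptied: no path exists
}
```

On a simple line graph 0-1-2-3, searching from 0 to 3 returns the whole chain, and an isolated goal produces the empty "no path" result.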

To summarize: first a queue is built and our starting position is placed onto it. During each step of the iteration, a node is removed from our queue, marked as visited, and tested to see if it is our goal. Then each of its children is added to the queue if it has not been visited before. As each child is added to the queue, we also ensure that we set its parent to be the node we grabbed from the queue. This allows a path to be built, and also shows how we arrived at our current location. Iteration occurs until our queue is empty or until a path is found.

This last step is of utmost importance. If the children are not tested for prior visitation, we will never leave the first node, as each neighbor of the first node also has the first node as its neighbor; the two would go about adding each other to the queue ad infinitum! Also, by using a queue, we are guaranteeing that each node we add will be checked in the order it was discovered, thereby enforcing the breadth first traversal. Using a stack would make it depth first (with some other modifications, as we will see later).

MapGridWalker::WALKSTATETYPE BreadthFirstSearchMapGridWalker::iterate()
{
    if(m_open.size() > 0)
    {
        m_n = (MapGridNode*)m_open.front();
        m_open.pop();
        if(m_n->equals(*m_end))
            return REACHEDGOAL;    // we found our path
        m_n->setVisited(true);

        int x, y;
        // add all adjacent nodes to this node
        // add the east node
        x = m_n->m_x+1; y = m_n->m_y;
        if(m_n->m_x < (m_grid->getGridSize() - 1))
            visitGridNode(x, y);
        // add the north-east node
        x = m_n->m_x+1; y = m_n->m_y-1;
        if(m_n->m_y > 0 && m_n->m_x < (m_grid->getGridSize() - 1))
            visitGridNode(x, y);
        // The other directional checks go here, but listing them all
        // would take a tremendous amount of space...

        return STILLLOOKING;
    }
    return UNABLETOREACHGOAL;    // no path could be found
}

Listing 1.5

void BreadthFirstSearchMapGridWalker::visitGridNode(int x, int y)
{
    // if the node is blocked or has been visited, early out
    if(m_grid->getCost(x, y) == MapGridNode::BLOCKED ||
       m_nodegrid[x][y].getVisited())
        return;

    // we are visitable
    m_nodegrid[x][y].m_parent = m_n;
    m_open.push(&m_nodegrid[x][y]);
}

Listing 1.6

Listing 1.5 and Listing 1.6 contain the important parts of the implementation of the breadth first search as found in our demo. In the demo code, a grid represents our graph. Let us go over this implementation.

MapGridWalker::WALKSTATETYPE BreadthFirstSearchMapGridWalker::iterate()

The iterate() method corresponds to the inside of the while loop from our algorithm snippet, so that we can inspect each iteration. It returns the state of the current iteration in order to inform the application whether the algorithm is still looking for a path, has found a path, or has failed to find a path.

if(m_open.size() > 0)

First we check to see if the queue is empty. If it is, there is no valid path from the start node to the goal node; we cannot reach the goal from our starting location, and UNABLETOREACHGOAL is returned.

m_n = (MapGridNode*)m_open.front();
m_open.pop();
if(m_n->equals(*m_end))
    return REACHEDGOAL;
m_n->setVisited(true);

Otherwise, the next node is removed from the front of the queue. If that node is the goal node, a path to the goal node has been found, and successful status is returned. If not, the node is marked as visited; the setVisited() method on the node simply sets a Boolean flag.

Next we check each of our neighbors, provided that they exist; we live on a 2D grid, so we do some border checking on the grid to ensure we have not overstepped the edge. After all of the neighbor nodes are visited, we return STILLLOOKING so that iteration will continue in the next time-slice.

void BreadthFirstSearchMapGridWalker::visitGridNode(int x, int y)

The visitGridNode() method wraps up the parts of the algorithm that take care of all the things which need to happen when a node is visited. It checks to see if the node is blocked or visited, and if either is true, it returns without visiting the node. If the node is not blocked or visited, it is added to the queue, and the parent of the visited node is set to be the current node in order to track how we arrived there.

1.7.3 Best First Search

The Best First Search is an optimized Breadth First Search in that it uses a heuristic to choose which node to traverse next instead of just traversing the nodes in sequential order. This is a good method, as it is much faster than the Breadth First Search, but it might not always find the shortest path to the goal; path length will depend primarily on the appropriateness of the heuristic chosen. This method also has the same disregard for weighted graphs, as it only cares about the number of nodes needed to traverse from start to finish.

bool BestFirstSearch(Node start, Node goal)
{
    PriorityQueue open;
    Node n, child;
    start.parent = NULL;
    open.enqueue(start);
    while (!open.isEmpty())
    {
        n = open.dequeue();
        if (n == goal)
        {
            makePath();
            return true;
        }
        while (n.hasMoreChildren())
        {
            child = n.getNextChild();
            if (child.visited()) continue;
            child.parent = n;
            child.cost = findCost(child, goal);
            child.setVisited(true);
            open.enqueue(child);
        }
    }
    return false;
}

Listing 1.7

At first glance, you are probably wondering how this method (Listing 1.7) is any different from the one we just discussed. The magic is in the queue type we use. The best first search uses a priority queue that is keyed on the perceived cost to the goal. This allows the method to start traversing in a direction towards the goal before it investigates nodes that would take us away from the goal. Aside from the use of a priority queue, the only other difference is the cost heuristic. Let us take a moment to discuss a few common heuristics.

1.7.3.1 Max(dx, dy)

The Max(dx, dy) method uses the maximum of the x distance and the y distance to the goal. If the goal is directly above, below, left of, or right of the node (in a grid environment such as ours), the estimate is reasonably accurate. If the node position is diagonal to the goal, the estimate becomes less accurate. Often, this heuristic underestimates the distance to the goal.

1.7.3.2 Euclidean Distance

The Euclidean distance method uses the standard Euclidean formula to determine the length of the vector from the node to the goal:

    d = sqrt( (x_g - x_n)^2 + (y_g - y_n)^2 )

An important point to remember when dealing with square roots is that not only are they expensive, they also require floating point precision. If your costs are integers, you will lose precision and your estimate will be more inaccurate.

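As a concrete reference, the estimates above (together with the Manhattan estimate discussed next) can be written as small standalone functions. This is an illustrative sketch; the function names are ours, not the course framework's:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdlib>

// (xn, yn) is the node being estimated, (xg, yg) is the goal.

// Max(dx, dy): tends to underestimate on diagonals.
double maxEstimate(int xn, int yn, int xg, int yg)
{
    return std::max(std::abs(xg - xn), std::abs(yg - yn));
}

// Euclidean: the true straight-line length, but it needs a
// floating point square root.
double euclideanEstimate(int xn, int yn, int xg, int yg)
{
    double dx = xg - xn, dy = yg - yn;
    return std::sqrt(dx * dx + dy * dy);
}

// Manhattan (dx + dy): tends to overestimate on diagonals.
double manhattanEstimate(int xn, int yn, int xg, int yg)
{
    return std::abs(xg - xn) + std::abs(yg - yn);
}
```

For a node at (0, 0) and a diagonal goal at (3, 4), the three estimates come out 4, 5, and 7 respectively, which illustrates the under/over-estimation the text describes.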
1.7.3.3 Manhattan (dx + dy)

The Manhattan (dx + dy) method uses the x distance added to the y distance to the goal. Like the Max(dx, dy) method, if the goal is directly above, below, left of, or right of the node (in a grid environment such as ours), the estimate is reasonably accurate. If the node position is diagonal to the goal, the estimate becomes less accurate. Often this method overestimates the distance to the goal.

MapGridWalker::WALKSTATETYPE BestFirstSearchMapGridWalker::iterate()
{
    if(!m_open.isEmpty())
    {
        m_n = m_open.dequeue();
        if(m_n->equals(*m_end))
        {
            // we found our path
            return REACHEDGOAL;
        }
        m_n->setVisited(true);

        int x, y;
        // add all adjacent nodes to this node
        // add the east node
        x = m_n->m_x+1; y = m_n->m_y;
        if(m_n->m_x < (m_grid->getGridSize() - 1))
            visitGridNode(x, y);
        // add the north-east node
        x = m_n->m_x+1; y = m_n->m_y-1;
        if(m_n->m_y > 0 && m_n->m_x < (m_grid->getGridSize() - 1))
            visitGridNode(x, y);
        // The other directional checks go here...

        return STILLLOOKING;
    }
    return UNABLETOREACHGOAL;    // no path could be found
}

Listing 1.8

void BestFirstSearchMapGridWalker::visitGridNode(int x, int y)
{
    // if the node is blocked or has been visited, early out
    if(m_grid->getCost(x, y) == MapGridNode::BLOCKED ||
       m_nodegrid[x][y].getVisited())
        return;

    // we are visitable
    m_nodegrid[x][y].m_parent = m_n;
    m_nodegrid[x][y].m_cost = goalEstimate(&m_nodegrid[x][y]);
    m_open.enqueue(&m_nodegrid[x][y]);
}

Listing 1.9

The above implementation (Listings 1.8 and 1.9) is nearly identical to the breadth first version, with the exception of m_open being a priority queue keyed on the heuristic goal estimate. The node's cost is calculated only when it is added to the queue, at which point it is set via the goalEstimate() function. This function implements one of the heuristic methods we discussed above. Let us walk through the code and discuss it in more detail.

MapGridWalker::WALKSTATETYPE BestFirstSearchMapGridWalker::iterate()

Like all of our implementations, the iterate() method does the work, and returns information telling us whether it needs to be called again because it is still searching, whether it found the goal, or whether it cannot find the goal.

if(!m_open.isEmpty())

Just like the Breadth First Search, the first thing to check for is an empty queue. If it is empty and we have not found the goal, we cannot get to the goal from the start position.

m_n = m_open.dequeue();
m_n->setVisited(true);

Next we take the first item off the priority queue and use it as our current node. We also mark it as visited so we do not try to visit it again.

if(m_n->equals(*m_end))
{
    // we found our path
    return REACHEDGOAL;
}

Next we determine if our current node is, in fact, the goal. If it is, we return that we have reached our goal.

x = m_n->m_x+1;
y = m_n->m_y;
if(m_n->m_x < (m_grid->getGridSize() - 1))
    visitGridNode(x, y);

We then visit all of our neighbors, just as we did in the Breadth First Search method. Again, we have a visitGridNode() method that does the work of visiting the node for us.

void BestFirstSearchMapGridWalker::visitGridNode(int x, int y)

As before, it takes an (x, y) coordinate of the node it is to visit on our grid.
// if the node is blocked or has been visited, early out if(m_grid->getCost(x, y) == MapGridNode::BLOCKED || m_nodegrid[x][y].getVisited()) return;

It checks to see if the node is blocked or already visited, and if either condition is true, it returns without visiting the node.
// we are visitable m_nodegrid[x][y].m_parent = m_n; m_nodegrid[x][y].m_cost = goalEstimate(&m_nodegrid[x][y]); m_open.enqueue(&m_nodegrid[x][y]);

If this node can be visited, it sets the parent of the child node as the current node, determines the cost of this node per our heuristic estimate as discussed above, and adds the node to the priority queue. The priority queue automatically sorts the node into its proper place in the queue.
return STILLLOOKING;

After we visit all of our neighbor nodes, we return that we need more iteration to find the goal.
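The demo's priority queue comes from the course framework, but its behavior can be sketched with std::priority_queue from the standard library. Note that std::priority_queue pops the largest element by default, so a std::greater comparator is needed to dequeue the smallest heuristic estimate first. The pairing scheme below is our own illustration:

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// An open list keyed on the heuristic estimate: each entry pairs the
// estimated cost to the goal with an (illustrative) node id.  Pairs
// compare on .first before .second, so ordering by estimate works.
typedef std::pair<double, int> CostedNode;   // (estimated cost, node id)

typedef std::priority_queue<CostedNode,
                            std::vector<CostedNode>,
                            std::greater<CostedNode> > OpenList;

// Pop and return the id of the node with the smallest estimate;
// this is the "dequeue" of the best first search.
int cheapestOf(OpenList& open)
{
    int id = open.top().second;
    open.pop();
    return id;
}
```

Whatever order nodes are enqueued in, the one with the lowest estimate always comes off first, which is exactly what lets best first search head toward the goal before exploring nodes that lead away from it.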

1.8 Edsger W. Dijkstra and his Algorithm
Edsger W. Dijkstra was born in 1930 in The Netherlands. He was one of the first to think of programming as a science in itself, and he actually called himself a programmer by profession in 1957. The Dutch government did not recognize programming as a real profession, however, so he had to re-file his taxes as a "theoretical physicist." He won the Turing Award from the Association for Computing Machinery in 1972, and was appointed to the Schlumberger Centennial Chair in Computer Science at the University of Texas in 1984. He is also responsible for developing the prized "shortest path" algorithm that has been integral to many computer games. Dijkstra's shortest path algorithm is so useful and well known that it has simply been dubbed "the shortest path algorithm." It is so popular that if you were to mention pathfinding to most programmers, they would assume you were speaking of this particular algorithm. Interestingly enough, Dijkstra's algorithm varies a bit depending on where you look it up.


1.8.1 Three Common Versions of Dijkstra’s
Let us analyze three common versions of the Dijkstra’s shortest path algorithm in a little more detail. First the algorithm will be shown, and then an example will be walked through for each of the versions.

1.8.1.1 Version One
procedure dijkstra(w, a, z, L)
    L(a) := 0
    for all vertices x ≠ a do
        L(x) := ∞
    T := set of all vertices
    // T is the set of vertices whose shortest distance
    // from a has not been found
    while z ∈ T do
    begin
        choose v ∈ T with minimum L(v)
        T := T − {v}
        for each x ∈ T adjacent to v do
            L(x) := min{L(x), L(v) + w(v, x)}
    end
end dijkstra
Listing 1.10

Listing 1.10 shows a version of Dijkstra's algorithm which finds the shortest path from a to z. In this algorithm, w denotes the set of weights, where w(i, j) is the weight of the edge between points i and j, and L(v) denotes the current minimum length from a to v. This particular algorithm does not track the actual path from a to z, just the length of the path. The algorithm works by first initializing L for all vertices, except a, to a very large value. It then repeatedly chooses the remaining vertex with the shortest length, removes it from the set of unfinished vertices, and, for each adjacent vertex, calculates the new minimum distance.
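The pseudocode translates almost line for line into code. The sketch below is our transcription, not the course's: it uses an adjacency matrix in which 0 means "no edge," and, exactly like the pseudocode, it returns only the length of the shortest path:

```cpp
#include <cassert>
#include <vector>

const int INF = 1 << 29;   // stands in for the pseudocode's infinity

// Length of the shortest path from a to z over weight matrix w,
// where w[i][j] == 0 means there is no edge between i and j.
int dijkstraLength(const std::vector<std::vector<int> >& w, int a, int z)
{
    int count = (int)w.size();
    std::vector<int>  L(count, INF);
    std::vector<bool> inT(count, true);   // T: vertices not yet finished
    L[a] = 0;

    while (inT[z]) {
        // choose v in T with minimum L(v)
        int v = -1;
        for (int i = 0; i < count; ++i)
            if (inT[i] && (v == -1 || L[i] < L[v])) v = i;
        inT[v] = false;                   // T := T - {v}
        // for each x in T adjacent to v, relax L(x)
        for (int x = 0; x < count; ++x)
            if (inT[x] && w[v][x] != 0 && L[v] + w[v][x] < L[x])
                L[x] = L[v] + w[v][x];
    }
    return L[z];
}
```

On a small graph shaped like the worked example (a chain of cheap edges beside one expensive direct edge), the routine picks the 2 + 2 + 1 = 5 route rather than the direct cost-7 edge.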


1.8.1.2 Version One Example
[Figure 1.12: the example graph]

For the walkthrough of this algorithm, let us use this simple graph (Fig 1.12) as our example. The vertices are marked a through z, and the cost for a given edge is labeled nearest the edge center.

[Figure 1.13: the graph after initialization, with L(a) = 0 and every other length set to ∞]

When the algorithm begins, it initializes the lengths from a to all the other vertices to a very large value. It also places each of the vertices in a list.


[Figure 1.14: the graph after the first iteration, with a removed from the vertex list and the L values of its neighbors b and f calculated]

In the first iteration, the algorithm naturally selects the a vertex, since its length was set to 0 during initialization and is therefore the lowest. It is removed from the vertex list, and all of the vertices adjacent to a (b and f) have their L values calculated.
[Figure 1.15: the graph after the second iteration, with f removed from the vertex list and the L values of d and g calculated]

In the second iteration, the algorithm chooses vertex f as it has the lowest cost, and it is removed from the vertex list. The adjacent vertices (d and g) have their L values calculated, and the algorithm moves on.


[Figure 1.16: the graph after the third iteration]

In the third iteration, the algorithm chooses vertex b, as it has the lowest cost, and it is removed from the vertex list. The adjacent vertices (d, e, and c) then have their L values calculated.

[Figure 1.17: the graph after the fourth iteration]

In the fourth iteration, the algorithm chooses the c vertex, and it is removed from the vertex list. The adjacent vertices (z and e) have their L values calculated. The z vertex was the last vertex without an L value, so we now have all of the vertices in the graph with an L value. The algorithm would next pick d and e, and then finally z. When z is removed from the vertex list, the algorithm stops, and it is seen that the shortest path from a to z is 5 units long. Of course, without going back and looking, we have no way of knowing that the path to take is a-b-c-z, so it would be a good idea to keep track of this. We do that in our version of the algorithm, as you will see later.

1.8.1.3 Version Two

Given the arrays distance, path, edge (the weights), and included, initialize included[source] to true and included[j] to false for all other j.

Initialize the distance array via the rule
    if j = source
        distance[j] = 0
    else if edge[source][j] != 0
        distance[j] = edge[source][j]
    else (j is not connected to source by a direct edge)
        distance[j] = Infinity
for all j

Initialize the path array via the rule
    if edge[source][j] != 0
        path[j] = source
    else
        path[j] = Undefined

Do
    Find the node J that has the minimal distance among those nodes not yet included
    Mark J as now included
    For each R not yet included
        If there is an edge from J to R
            If distance[J] + edge[J][R] < distance[R]
                distance[R] = distance[J] + edge[J][R]
                path[R] = J
While all nodes are not included

Listing 1.11

The algorithm in Listing 1.11 utilizes three arrays to do its work. It is similar to the former version in that the distance array plays the role of the L values, but it differs in that it tracks the actual path to the goal as well. It also uses an array to mark vertices that have been chosen rather than removing them from a list. The algorithm runs to completion when all nodes have been included. We could shorten the algorithm easily by changing the while loop to "while the goal node is not included," since once the goal node is included, we have the shortest path to it.

1.8.1.4 Version Two Example

[Figure 1.18: the example graph, with vertices 1 through 5 and weighted edges]

Let us use the graph in Figure 1.18 for this version of the algorithm. The walkthrough will use our algorithm starting at vertex 1. We will step through the iterations of the algorithm and examine the contents of the various arrays along the way.

[Figure 1.19: the arrays after initialization]

distance[2] = 800     path[2] = 1     included[2] = false
distance[3] = 2985    path[3] = 1     included[3] = false
distance[4] = 310     path[4] = 1     included[4] = false
distance[5] = 200     path[5] = 1     included[5] = false

First we initialize our distance, path, and included arrays. All of the path array entries are set to 1 (where we started), and none of the other vertices are marked as included.

[Figure 1.20: the arrays after the first iteration]

distance[2] = 800     path[2] = 1     included[2] = false
distance[3] = 2985    path[3] = 1     included[3] = false
distance[4] = 310     path[4] = 1     included[4] = false
distance[5] = 200     path[5] = 1     included[5] = true

In the first iteration, we find the not-yet-included vertex with the smallest distance, which is vertex 5, and mark it as included. We then check whether traveling from vertex 1 through vertex 5 gives a smaller distance to any of vertex 5's neighbors than the one already stored, which it does not.

[Figure 1.21: the arrays after the second iteration]

distance[2] = 800     path[2] = 1     included[2] = false
distance[3] = 1731    path[3] = 4     included[3] = false
distance[4] = 310     path[4] = 1     included[4] = true
distance[5] = 200     path[5] = 1     included[5] = true

In the second iteration, we see that vertex 4 has the shortest distance among the vertices not yet included, so we mark it as included and then update our distances. The distance from vertex 1 through vertex 4 to vertex 3 is shorter than the distance from vertex 1 directly to vertex 3, so we update the distance for vertex 3 and set its path entry to indicate that travel through vertex 4 from the source is the shortest route found so far.

[Figure 1.22: the arrays after the third iteration]

distance[2] = 800     path[2] = 1     included[2] = true
distance[3] = 1210    path[3] = 2     included[3] = false
distance[4] = 310     path[4] = 1     included[4] = true
distance[5] = 200     path[5] = 1     included[5] = true

In the third iteration, we see that vertex 2 has the shortest remaining distance, so we mark it as included. We also see that traveling through vertex 2 to vertex 3 is shorter than traveling through vertex 4, so we update the distance and path for vertex 3 once more. In the fourth iteration, all that is left is vertex 3. Nothing changes in terms of path or distance, so we simply mark it as included, and we now have the shortest distance as well as the path to all vertices from vertex 1.
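The array-based version can be rendered as a compact routine. The sketch below is ours, run on a cut-down graph whose edge weights are borrowed from the walkthrough (renumbered from 0, and with only the edges needed to reproduce the final distances):

```cpp
#include <cassert>
#include <vector>

const int NO_PATH = -1;    // stands in for "Undefined"
const int FAR = 1 << 29;   // stands in for Infinity

// Fills distance[] and path[] for every vertex reachable from source.
// edge[i][j] == 0 means there is no edge between i and j.
void dijkstraArrays(const std::vector<std::vector<int> >& edge, int source,
                    std::vector<int>& distance, std::vector<int>& path)
{
    int count = (int)edge.size();
    std::vector<bool> included(count, false);
    distance.assign(count, FAR);
    path.assign(count, NO_PATH);

    // initialization step: direct edges out of the source
    distance[source] = 0;
    included[source] = true;
    for (int j = 0; j < count; ++j)
        if (edge[source][j] != 0) {
            distance[j] = edge[source][j];
            path[j] = source;
        }

    for (int step = 1; step < count; ++step) {
        // find the not-yet-included node J with minimal distance
        int J = -1;
        for (int j = 0; j < count; ++j)
            if (!included[j] && (J == -1 || distance[j] < distance[J])) J = j;
        included[J] = true;
        // relax the edges out of J
        for (int R = 0; R < count; ++R)
            if (!included[R] && edge[J][R] != 0 &&
                distance[J] + edge[J][R] < distance[R]) {
                distance[R] = distance[J] + edge[J][R];
                path[R] = J;   // remember the previous hop
            }
    }
}
```

With vertex 1 of the walkthrough mapped to index 0, the routine ends with distance 1210 to the vertex that plays the role of vertex 3, routed through the vertex playing the role of vertex 2, matching the final figure's arrays.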

1.8.1.5 Version Three

void Dijkstra( Table T )
{
    Vertex V, W;
    while( true )
    {
        V = smallest unknown distance vertex;
        if( V == NotAVertex )
            break;
        T[ V ].Known = true;
        for each W adjacent to V
            if( !T[ W ].Known )
            {
                // update W
                decrease( T[ W ].Dist to T[ V ].Dist + C( V, W ) );
                T[ W ].Path = V;
            }
    }
}

Listing 1.12

This algorithm is very similar to the previous one, and it also finds the shortest path to each vertex in the graph. The biggest difference is more of an architectural change. Rather than keeping data in parallel arrays, a table is used to manage the weights, the distance to each vertex, and the path to each vertex, and Vertex structures are used to store the related path data. The table maintains, for each vertex, whether it is known, and the algorithm runs until no unknown vertices remain.
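The table entries and the decrease operation can be sketched directly. The struct layout below is illustrative, inferred from the pseudocode's T[W].Known, .Dist, and .Path fields:

```cpp
#include <cassert>

// One row of the table: whether the vertex is finished, the best
// distance found so far, and the parent vertex on that route.
struct VertexEntry {
    bool known;
    int  dist;
    int  path;   // index of the parent vertex
};

// "decrease( T[W].Dist to T[V].Dist + C(V, W) )": only ever lower
// the stored distance, and record the new parent when we do.
bool decrease(VertexEntry& w, int candidateDist, int via)
{
    if (candidateDist >= w.dist) return false;   // not an improvement
    w.dist = candidateDist;
    w.path = via;
    return true;
}
```

This mirrors the moment in the upcoming example where a vertex's distance drops from 9 to 8 because a cheaper route through a newly known vertex was found; an equal-cost candidate leaves the entry untouched.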

1.8.1.6 Version Three Example

[Figure 1.23: a directed example graph with vertices v1 through v7 and weighted edges]

For this last example, we will take a look at how things change when using a directed graph rather than a non-directed graph. The graph above is a directed graph, where travel is only allowed in the direction of the arrows. Let us traverse this graph starting at v1.

[Figure 1.24: the graph after initialization, with the distance to v1 set to 0 and all other distances set to ∞]

First we initialize all of the nodes to unknown, set the distances to infinity, and set the parent vertex, for each vertex, to 0. The distance to the starting vertex itself is set to 0.

[Figure 1.25: the graph after the first iteration]

In the first iteration, we mark the starting vertex as known and update all of its neighbors' distances, setting the distance members of v2 and v4. We also set both of their parents to v1.

[Figure 1.26: the graph after the second iteration]

In the second iteration, we mark v4 as known, as it has the shortest distance so far, and update its neighbors' distances. For those neighbor vertices it does set the distance for, v4 makes itself their parent vertex as well.

[Figure 1.27: the graph after the third iteration]

In the third iteration, we mark v2 as known and update all of its neighbors' distances. In this case, there are no neighbors that need to be updated.

[Figure 1.28: the graph after the fourth iteration]

In the fourth iteration, we mark v5 as known and try to update its neighbors. Again, no updates are needed.

[Figure 1.29: the graph after the fifth iteration]

In the fifth iteration, we mark v3 as known and update its neighbors. This time, we actually find a shorter route to v6, so we update its distance and make v3 its parent vertex.

[Figure 1.30: the graph after the sixth iteration]

In the sixth iteration, we mark v7 as known and update its neighbors. Again we find a shorter path to v6, so it is updated and v7 is made its parent vertex. In the last iteration, we mark v6 as known; no updates are needed, and now we are done.

1.8.2 Our Version of the Algorithm

bool DijkstraSearch(Node start, Node goal)
{
    PriorityQueue open;
    Node n, child;
    start.parent = NULL;
    start.cost = 0;
    open.enqueue(start);
    while (!open.isEmpty())
    {
        n = open.dequeue();
        n.setVisited(true);
        if (n == goal)
        {
            makePath();
            return true;
        }
        while (n.hasMoreChildren())
        {
            child = n.getNextChild();
            if (child.visited()) continue;
            COSTVAL newcost = n.cost + cost(n, child);
            if (open.contains(child) && child.cost <= newcost)
                continue;
            child.parent = n;
            child.cost = newcost;
            if (!open.contains(child))
                open.enqueue(child);
            else
                open.reenqueue(child);
        }
    }
    return false;
}

Listing 1.13

Our version of the algorithm is very much like the last two versions we studied. The biggest difference is how we pick which node to traverse through next. We use a priority queue to sort our unvisited nodes in order of their cost; that is, we keep track of the shortest distance we have found thus far to each node, and we also keep track of which nodes we have visited. We then grab the top node off of the queue and do our traversal. Let us discuss this particular version in more detail, since it is the version we will be using in our demo.
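For a compact, testable rendering of the same logic, the sketch below swaps the framework's PriorityQueue for a plain open set scanned for its cheapest entry, which makes the reenqueue() step a simple in-place cost update. The adjacency-matrix graph and all names are ours, not the course's:

```cpp
#include <cassert>
#include <vector>

const int UNSET = -1;

// Dijkstra search over cost[][] (0 means "no edge").  Returns true if
// goal was reached; parent[] then encodes the path back to start.
bool dijkstraSearch(const std::vector<std::vector<int> >& cost, int start,
                    int goal, std::vector<int>& parent)
{
    int count = (int)cost.size();
    std::vector<int>  total(count, 0);       // cost from start, once open
    std::vector<bool> visited(count, false), open(count, false);
    parent.assign(count, UNSET);
    open[start] = true;                      // start.cost = 0

    while (true) {
        // grab the cheapest open node (the priority-queue dequeue)
        int n = -1;
        for (int i = 0; i < count; ++i)
            if (open[i] && (n == -1 || total[i] < total[n])) n = i;
        if (n == -1) return false;           // open set empty: no path
        open[n] = false;
        visited[n] = true;
        if (n == goal) return true;

        for (int child = 0; child < count; ++child) {
            if (cost[n][child] == 0 || visited[child]) continue;
            int newcost = total[n] + cost[n][child];
            // skip if we already hold a route at least this cheap
            if (open[child] && total[child] <= newcost) continue;
            parent[child] = n;               // remember how we reached it
            total[child] = newcost;          // enqueue, or "reenqueue"
            open[child] = true;
        }
    }
}
```

On the four-node graph with a cheap 2 + 2 + 1 chain beside a direct cost-7 edge, the direct route is enqueued first and then replaced when the cheaper route is discovered, which is exactly the reenqueue case.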

bool DijkstraSearch(Node start, Node goal)

Like the other algorithms, this one expects a start node and a goal node, and returns whether it was capable of finding a path.

PriorityQueue open;
Node n, child;

As in Best First Search, a priority queue is used to keep track of the nodes we need to visit, and we will have a current node as well as the current child we are visiting of the current node.

start.parent = NULL;
start.cost = 0;
open.enqueue(start);

We will start out by setting the parent of our starting node to NULL to denote it is indeed the start. We also set the cost to 0. Next we will initialize the queue by adding our start node to it since we will want to visit it first.

while (!open.isEmpty())
{
    n = open.dequeue();
    n.setVisited(true);

While the queue is not empty, we will grab a node off the queue and make it our current node. We also mark this node as visited so we do not visit it again. If the queue empties before we find the goal, there is no path from the start node to the goal.

    if (n == goal)
    {
        makePath();
        return true;
    }

If this node is the goal node, we found the path, so we make it and return success.

    while (n.hasMoreChildren())
    {
        child = n.getNextChild();

We then iterate across each of the current node's children.

        COSTVAL newcost = n.cost + cost(n, child);

For each child, we compute the cost from this node to the child and add it to the cost which this node has stored as the computed cost from the start node to it. This allows us to keep track of the total cost it takes to get from the start node to every other node as we visit it.

if (!open.contains(child) && child.visited())
    continue;

if (open.contains(child) && child.cost <= newcost)
    continue;

Here is where the algorithm starts to differ from the other algorithms we have discussed so far. Like the other algorithms, if we have visited this child node, we do not visit it again. But if the child is in the queue, we check to see if the cost we've computed previously for this child is less than the cost we just computed. If we have a cost for this child computed already and it is shorter than the cost we just found, we ignore this path to the child node since we have a better one already.

child.cost = newcost;
child.parent = n;

If we determine that we want to visit this child, we set its parent to be our current node and set its cost to be our computed cost.

if (!open.contains(child))
    open.enqueue(child);
else
    open.reenqueue(child);

If the queue does not already contain the child, we add it. If the queue does contain the child, but we have already determined that we need to visit it, we inform the queue that it needs to reinsert the child into its proper position now that its cost has changed. This allows us to update the cost to this particular child if we found a shorter path to this child.

1.8.3 The Implementation of Our Version

MapGridWalker::WALKSTATETYPE DijkstrasMapGridWalker::iterate()
{
    if(!m_open.isEmpty())
    {
        m_n = m_open.dequeue();
        m_n->setVisited(true);

        if(m_n->equals(*m_end))
        {
            // we found our path
            return REACHEDGOAL;
        }

        int x, y;

        // add all adjacent nodes to this node
        // add the east node
        x = m_n->m_x + 1;
        y = m_n->m_y;
        if(m_n->m_x < (m_grid->getGridSize() - 1))
            visitGridNode(x, y);

        // All other directions here; omitted because they take up
        // too much space. See the code for details.

        // add the north-east node
        x = m_n->m_x + 1;
        y = m_n->m_y - 1;
        if(m_n->m_y > 0 && m_n->m_x < (m_grid->getGridSize() - 1))
            visitGridNode(x, y);

        return STILLLOOKING;
    }

    return UNABLETOREACHGOAL;
}

Listing 1.14

void DijkstrasMapGridWalker::visitGridNode(int x, int y)
{
    int newcost;
    bool inqueue;

    // if the node is blocked or has been visited, early out
    if(m_grid->getCost(x, y) == MapGridNode::BLOCKED ||
       m_nodegrid[x][y].getVisited())
        return;

    inqueue = m_open.contains(&m_nodegrid[x][y]);

    // we are visitable
    newcost = m_n->m_cost + m_grid->getCost(x, y);

    if(inqueue && m_nodegrid[x][y].m_cost <= newcost)
    {
        // do nothing; we are already in the queue
        // and we have a cheaper way to get there
    }
    else
    {
        m_nodegrid[x][y].m_cost = newcost;
        m_nodegrid[x][y].m_parent = m_n;

        if(!inqueue)
        {
            m_open.enqueue(&m_nodegrid[x][y]);
        }
        else
        {
            m_open.remove(&m_nodegrid[x][y]);
            m_open.enqueue(&m_nodegrid[x][y]);
        }
    }
}

Listing 1.15

Here is the actual implementation from our demo. Similar to the algorithms we discussed already, it makes use of the priority queue to keep our nodes sorted in order of cost. Let us go over our implementation of the algorithm in more detail.

MapGridWalker::WALKSTATETYPE DijkstrasMapGridWalker::iterate()

As in all our implementations, the iterate method starts inside the while loop of our algorithm snippet. It returns a status telling the caller whether it found the goal, cannot find the goal, or is still looking and needs more iterations.

if(!m_open.isEmpty())

We begin by checking to see if the queue is empty. If it is, we cannot find a path from the start to the goal.

m_n = m_open.dequeue();
m_n->setVisited(true);

We grab the next node off the queue and make it our current node. We also mark that node as visited so that we do not visit it again.

if(m_n->equals(*m_end))
{
    // we found our path
    return REACHEDGOAL;
}

If the current node is, in fact, the goal node, we have reached our goal and return success. Otherwise we begin another iteration, update all of the current node's neighbors, and continue on until we find the goal node.

x = m_n->m_x + 1;
y = m_n->m_y;

// add the east node
if(m_n->m_x < (m_grid->getGridSize() - 1))
{
    visitGridNode(x, y);
}

We then check each of the current node's neighbors. Again we make sure to stay within the bounds of our grid, and we let the visitGridNode method do the work.

void DijkstrasMapGridWalker::visitGridNode(int x, int y)

This method takes an (x, y) coordinate and visits the corresponding grid node.

First it checks to see if the node in question is blocked or already visited. If it is, it returns and does not visit the node.

newcost = m_n->m_cost + m_grid->getCost(x, y);

Next it computes the cost to this child node via the current node.

inqueue = m_open.contains(&m_nodegrid[x][y]);

Then we check to see if the node in question is already in our queue.

if(inqueue && m_nodegrid[x][y].m_cost <= newcost)
{
    // do nothing; we are already in the queue
    // and we have a cheaper way to get there
}

If it is in the queue, and the new cost we computed is greater than the cost the child node already has, we ignore the node since we already have a cheaper way to get there.

m_nodegrid[x][y].m_cost = newcost;
m_nodegrid[x][y].m_parent = m_n;

If we determine we have a cheaper way to get to the child node, we set its parent to the current node, and its cost to the cost we computed for it.

if(!inqueue)
{
    m_open.enqueue(&m_nodegrid[x][y]);
}
else
{
    m_open.remove(&m_nodegrid[x][y]);
    m_open.enqueue(&m_nodegrid[x][y]);
}

If the child node is not in the queue, we simply add it. If it is in the queue, but we have already determined that we need to visit it, we remove it and add it again so it can be put in its proper place.

After we visit all of the neighbor nodes of the current node, we return STILLLOOKING to indicate that we need more iterations to find the goal.

1.9 Look-Ahead Recursive Methods

As discussed earlier, there are some look-ahead pathfinding methods that are most easily implemented with recursion. The prime example we discussed is the Depth First Search.

1.9.1 Depth First Search

The Depth First Search algorithm is a simple traversal for weighted or non-weighted graphs, in which a node's children are explored before its siblings. The Depth First Search method is recursive in nature. The method has a few caveats: unless the depth to which it searches is constrained, it will search to an infinite depth in an attempt to find its goal, and it also has a tendency to wrap around unless we constrain it to moving towards the goal. Let us take a look at this algorithm and discuss it in detail.

bool DepthFirstSearch(Node node, Node goal, int depth, int length)
{
    int d;
    Node child;

    if (node == goal)
    {
        makePath();
        return true;
    }

    if (depth < MAXDEPTH)
    {
        while (node.hasMoreChildren())
        {
            child = node.getNextChild();

            if (!isTowardsGoal(child, goal))
                continue;

            d = node.dist + node.getCost(child);

            if (child.visited() || d > child.cost)
                continue;

            child.parent = node;
            child.visited = true;
            child.cost = d;

            if (DepthFirstSearch(child, goal, depth+1, child.cost))
                return true;

            child.visited = false;
        }
    }
    return false;
}

Listing 1.16

After looking over the algorithm in Listing 1.16, you should notice it is recursive in nature rather than iterative. We might have implemented this method using iteration, and it would have looked much like the others except for its use of a stack rather than a queue. However, using recursion for this method is much more elegant, as it allows us to leverage the call stack rather than maintaining our own stack. Let us go over this algorithm in a little more detail.

bool DepthFirstSearch(Node node, Node goal, int depth, int length)

The recursive method DepthFirstSearch takes a node to search from, a goal to get to, the depth to search to, and the current cost.

if (node == goal)
{
    makePath();
    return true;
}

If the node passed in is the goal, we make the path and return our success. The true return value will trigger a full recursive unroll to get us out and back to the initial caller of the method. Otherwise we will return false to say we did not find the goal.

if (depth < MAXDEPTH)

If we have not exceeded our depth, we search further.

while (node.hasMoreChildren())

If we have not exceeded our depth, we will iterate across all the passed in node's children.

child = node.getNextChild();

if (!isTowardsGoal(child, goal))
    continue;

Here we do a little trickery to keep the algorithm from doing loops. For each child, we check to see if the child helps us to get towards the goal. If the child does not take us closer to the goal, we do not traverse it since it might take us on a crazy, winding path. The implementation of isTowardsGoal is graph specific, but it will return true if the passed-in node is closer to the goal and false if it is not.

d = node.dist + node.getCost(child);

For each child we wish to consider, we will compute the distance to this child by adding our passed in node's pre-computed cost to the cost of getting to the child node.

if (child.visited() || d > child.cost)
    continue;
Each call to this method will change the node, depth, and length parameters, while the goal will remain the same.

cost = d. we also skip this child. we start by calling DepthFirstSearch() and pass the start node. winding paths that lead nowhere. the algorithm will create curly. we iterate through each child. This will unroll the recursive stack back to the initial caller.If the child has been visited already or the child’s cost is cheaper than the computed cost. we recursively call DepthFirstSearch on that child. incrementing depth and passing the cost to the child.visited = true. but we are sure to try them again if a search to a given depth fails. child. if our depth is less than the max depth we wish to search to.parent = node. we mark the child as unvisited again. Next we set the child’s parent to be the node passed in. and the cost to the child is less than the child’s current remembered cost. we found the goal and return immediately. This is so that we do not visit the same node more than once on the way into the graph. child. and pass in the child’s cost as the length. mark the child as visited. since we might need to go through it via another depth traversal. and unmark them on our way back out. we make the path and return all the way out of the recursive stack. return true. If so. child. One might also attempt to calculate a beginning MAX_DEPTH using a heuristic goal estimate and implement the iterative deepening from that starting point so as to reduce the number of deepening iterations. goal. increment the depth. It is important to be sure that the cost to the child is better than the last cost. it is important to mark nodes as visited as we traverse into the graph. child. Also. 52 . An added improvement that could be made is to iteratively increase the MAX_DEPTH value to enable searching deeper into the graph until we find a goal. To summarize. a depth of 1. and a length of 0. If the child is in the direction of the goal. and set its cost to be the cost we computed. The algorithm would first check the node passed in to see if it is the goal. 
Here is another tricky bit.

if (DepthFirstSearch(child, goal, depth+1, child.cost))
    return true;

We then recursively call the method again, using the child node as the node to pass in, incrementing the depth, and passing in the child's cost as the length. If this returns true, we found the goal and return immediately. This will unroll the recursive stack back to the initial caller.

child.visited = false;

After the recursive call, we mark the child as unvisited again, since we might need to go through it via another depth traversal. It is important to mark nodes as visited as we traverse into the graph, and unmark them on our way back out: we do not visit the same node more than once on the way in, but we are sure to try them again if a search to a given depth fails.

To summarize: we start by calling DepthFirstSearch() and pass the start node, the end node, a depth of 1, and a length of 0. The algorithm first checks the node passed in to see if it is the goal. If so, we make the path and return all the way out of the recursive stack. Otherwise, if our depth is less than the max depth we wish to search to, we iterate through each child. If the child is in the direction of the goal, the child has not been visited, and the cost to the child is less than the child's current remembered cost, we recursively call DepthFirstSearch on that child, incrementing the depth and passing the cost to the child.

An added improvement that could be made is to iteratively increase the MAX_DEPTH value to enable searching deeper into the graph until we find a goal. One might also attempt to calculate a beginning MAX_DEPTH using a heuristic goal estimate and implement the iterative deepening from that starting point so as to reduce the number of deepening iterations.

Conclusion

In this chapter we have discussed pathfinding at its most basic. We talked about graphs, what they are, and why they are important in pathfinding. We also examined single step path traversals, as well as iterative and recursive methods of pathfinding. As we progressed, we saw that the means for improving the efficiency of the search and the ability to circumnavigate obstacles involved more complex decision making criteria (such as the various heuristics we mentioned). These latter methods determine optimal paths through the graph to the goal. Finally, we looked at some specific implementations of some of the common algorithms used in pathfinding.

One thing you hopefully recognized is that even the simplest pathfinder requires decision making, even when that decision was as simple as "we hit a barrier, so try moving in some random direction to get around it". According to our original definition of artificial intelligence, the job of the pathfinder is to make an entity move from point A to point B in a manner such that, on screen, it looks like the entity "figured out" how to get there in the shortest or quickest way possible. To the player, the entity certainly looks like it knows what it is doing and is therefore exhibiting some manner of intelligence. So while this may not be the pure Decision Making AI that we will learn about later on, this certainly fits the bill. In a sense, you can probably understand why some programmers tend to lump everything together into a single catch-all AI category (which we made efforts to define at the outset) while others might consider this just a branch on a larger tree. Again, they are both right. Keep these thoughts in mind as you work your way through the rest of the course.

In the next chapter we will expand our understanding of pathfinding by looking at more complex pathfinding methods such as A* and hierarchical pathfinding.


Chapter 2 Pathfinding II

Overview

In the last chapter we learned about some of the important AI subcategories that game developers will encounter in their projects. We mentioned that the two subcategories that we were going to focus on in this course are decision making and pathfinding, since they are the two most common and critical AI components in the majority of games. Along the way we learned that even these two seemingly very different concepts are somewhat related to one another. After all, we know that the goal in both cases is to use algorithms to produce behaviors that appear intelligent to the end user; this was how we defined artificial intelligence. In the case of pathfinding, we know that the idea is to use our knowledge of the environment to create algorithms that determine the best way to travel from place to place.

So far, we have examined some fundamental types of pathfinding and how pathfinding relates to games. In this chapter, we will discuss more advanced pathfinding techniques used in games and see how to apply them. Primarily we will discuss A* and its common advantages and disadvantages, as well as how heuristics can be used to produce better results. Additionally, we will discuss ways of simplifying the pathfinding problem with hierarchical pathfinding. We will also discuss methodologies for pathfinding in non-gridded environments such as we find in many 3D games (although our chosen implementation will not come until later in the course, after we have discussed decision making in detail). Finally, we will discuss the chapter demo in detail and the design stratagems employed in its development.

At the very least, the end result will be entities that maneuver around obstacles in the game world as they attempt to reach a given destination. Since those destination points will be updated quite frequently in real-time (based on decision making techniques we will learn about later in the course), the illusion of autonomous intelligent entities is fostered and maintained from the player's perspective.

In this chapter we will answer the following questions:

What is A*?
What are some of the advantages and disadvantages of A*?
What are heuristics and how can they be used to assist A*?
How can A*’s behavior be modified with different heuristics? What is hierarchical pathfinding and how can it be used? What are some of the methodologies for extending pathfinding systems for use in non-gridded environments? What is the Algorithm Design Strategy? What is the Grid Design Strategy? 2. with the ability to deal with weighted graphs from Dijkstra’s.1 A*: The New Star in Pathfinding A* is a more recent development in the arena of pathfinding algorithms. It is an extremely versatile algorithm which many of today’s games use for their core pathfinding needs. the illusion of autonomous intelligent entities is fostered and maintained. It combines the power of heuristics from Best First Search to limit its search time. we will discuss ways of simplifying the pathfinding problem with hierarchical pathfinding. So far. after we have discussed decision making in detail). In this chapter.

isEmpty()) { n = open.add(n).cost.contains(child)) && child.parent = n. and is the value by which A* sorts the nodes it will search. } while (n. Above we have a fairly generic implementation of A*. Note that it searches like Dijkstra’s. you should see that it is very similar to Dijkstra’s and Best First Search combined into one algorithm.requeue(child).f = child. child. } closed. if(closed. but uses the heuristic estimate to limit the search as in Best First Search. COSTVAL newg = n.enqueue(child). with lowest f values being searched first. } } return false. 57 . g.1 How A* Works bool AStarSearch(Node start. child.g <= newg) continue.2. start.getNextChild().h. child. child. and h. if ((open. child. It makes some assumptions about the type of container classes to use. but otherwise it is fundamentally how A* works.contains(child) || closed. while(!open. The f value is the sum of h and g. else open.contains(child)) open.parent = NULL.hasMoreChildren()) { child = n. return true. List closed.contains(child)) closed.g = newg. Let us go over this algorithm in a little more detail. which is typically the estimated cost from the node to the goal.remove(child) if(!open.g + child.enqueue(start). if (n == goal) { makePath(). The g value is the current true cost to get to the node.g + child. Node n. After reviewing it.dequeue(). A* calculates three values for each node: f.h = GoalEstimate(child). open. Node goal) { PriorityQueue open. The h value is the heuristic estimate value.1.

and a current child node. Like the other algorithms. and we prime the open priority queue by adding the start node to it since that is where we begin.isEmpty()) Like the other algorithms. we compute a new actual cost based on the current node’s actual cost and the cost to the child. Node goal) Like the algorithms we discussed in the previous chapter. we will grab a node off the queue. we also iterate through the queue until we find the goal or the queue empties. List closed.parent = NULL. start.g + child. we make the path and return our success.cost. there is no path from the start node to the end node. n = open.bool AStarSearch(Node start. and return whether we found a path or not. A* has two lists to keep track of.dequeue(). while (n. child = n. We will set the parent of the start node to NULL since we know it is the start. PriorityQueue open. while(!open. Node n. Unlike our previous algorithms. The closed list is a list of nodes we have already searched. if (n == goal) { makePath(). 58 . open. This will allow us to go through our candidate nodes in the order that makes the most sense. we will want a current node. The open list is a priority queue just like in Djikstra’s and Best First Search. but might need to examine again at a later time. return true.getNextChild().enqueue(start). child. For each child. } For each iteration. We then check to see if it is the goal node.hasMoreChildren()) We then iterate across all the children of our current node. and if it is. If we do not find the goal before the queue empties. we get a start node and a goal node. COSTVAL newg = n.

we requeue it so that it gets placed in the right spot in the queue with its new total cost. we skip this child since we already have the shortest path to this child in the queue. child. set the child’s estimated distance to the goal using our heuristic (just like in Best First Search). if the closed list contains the child.parent = n.if ((open. If we determine that we need to visit this child. and the child’s actual cost is less than the newly computed cost. For each child. If there is a node in the closed list to which a shorter path is found. the node with the lowest f value is checked to see if it is the goal. closed.g + child. Then. we start off with two lists. If it is in the open list.requeue(child). we ignore this child.h = GoalEstimate(child). if(closed. 59 . If it is. we place the current node in the closed list so we do not visit it again. Otherwise. we make the path and return. and check to see if the child is already in a list. If it is not. child. we remove it from the list. To summarize. and the f value is calculated. while the open list is not empty. if the child is not in the open queue already. The open list is the list of nodes which needs to be searched and the closed list is the list of nodes which have already been searched.contains(child)) && child. After iteration through all of the node’s children. Next. as well as whether its cost is less expensive. set the computed actual cost to the child. Then. it is added.f = child.contains(child)) closed. child. the node to the closed list is added. its position in that list is updated.g = newg. the g value is set to the new g value we calculated. If it is. Also. we iterate through all of its children.contains(child)) open. if the child is in the closed list. the h value is set to the heuristic estimate.remove(child) if(!open. we determine a new g value. If the child is not in the open list. The algorithm starts by putting the start node in the open list. the parent node is set. 
if ((open.contains(child) || closed.contains(child)) && child.g <= newg)
    continue;

If either the queue or the closed list contains the child, and the child's actual cost is less than the newly computed cost, we skip this child since we already have the shortest path to this child in the queue.

child.parent = n;
child.g = newg;
child.h = GoalEstimate(child);
child.f = child.g + child.h;

If we determine that we need to visit this child, we set the child's parent to the current node, set the computed actual cost to the child, set the child's estimated distance to the goal using our heuristic (just like in Best First Search), and set our total cost value to be the sum of the actual cost to this node plus the estimated cost to the goal.

if (closed.contains(child))
    closed.remove(child);

Also, if the closed list contains the child, we need to remove it since we want to visit it again.

if (!open.contains(child))
    open.enqueue(child);
else
    open.requeue(child);

Then, if the child is not in the open queue already, we add it. If it is in the open list, we requeue it so that it gets placed in the right spot in the queue with its new total cost.

closed.add(n);

After we visit all of the children of the current node, we place the current node in the closed list so we do not visit it again, unless we find a cheaper way to get there.

To summarize, we start off with two lists: open and closed. The open list is the list of nodes which need to be searched, and the closed list is the list of nodes which have already been searched. The algorithm starts by putting the start node in the open list. Then, while the open list is not empty, the node with the lowest f value is checked to see if it is the goal. If it is, we make the path and return. If it is not, we iterate through all of its children. For each child, we determine a new g value and check to see if the child is already in a list, as well as whether its cost is less expensive. If we determine we want to visit the child, the parent node is set, the g value is set to the new g value we calculated, the h value is set to the heuristic estimate, and the f value is calculated. If there is a node in the closed list to which a shorter path is found, it is updated and moved to the open list again. If the child is not in the open list, it is added; if it is, its position in that list is updated. After iterating through all of the node's children, the node is added to the closed list.

2.1.2 Limitations of A*

A* is a wonderful pathfinding algorithm, but it is not without limitations. A* will always find the shortest path to the goal provided the heuristic estimate from any given child to the goal is never greater than the real distance to the goal. If the estimate is greater than the true distance to the goal, the algorithm will produce results which are not optimal; it is not guaranteed to find the best path. In addition, the open and closed lists in A* can be inefficient when dealing with very large graphs. If the methods used to locate nodes in the lists, insert nodes, remove nodes from the lists, and, in the case where the list is not ordered, sort the list are inefficient, the algorithm will be very slow. Moreover, the memory requirements to store the nodes in these lists can increase dramatically.

2.1.3 Making A* More Efficient

A* can be made more efficient so that it becomes a solid choice for our game needs. The first approach to optimizing A* is to improve the storage method for the open and closed lists. Some versions use queues, others use stacks, some use heaps, while still others use hash maps. They all have their benefits, but we cannot have the best of all worlds. Stacks provide easy insertion and removal. Priority queues are a good choice because they keep the selection of the next best node simple: it is always at the front of the list. Hash maps provide quick node location. The important point is to select a container class which can quickly locate nodes, add nodes to the lists, and remove nodes from the lists. So you should definitely experiment with different containers to find out which one works best in your particular implementation. It is always better to use the simplest method when the simplest method works well. Remember our KISS principle from the first chapter!

Another optimization is to consider the search from a higher level and perform smaller searches for each step along the way. In a dungeon you might go room by room, or on a large outdoor map you might divide it into larger squares and go from region to region. In this case you will begin with a search using the larger regions, and then determine how to get across each region in a subsequent step. This is very much akin to the spatial partitioning techniques you learn about in the Graphics Programming training series here at the Game Institute. Indeed, you should be able to reuse many of the data structures and algorithms (quad-trees,
kd-trees. but we cannot have the best of all worlds. Remember our KISS principle from the first chapter! 2.1. etc. By doing this. the algorithm will be very slow. Priority queues are a good choice because they keep the selection of the next best node simple because it is always at the front of the list. A* will always find the shortest path to the goal.1. We will still encounter problems as we did with the Depth First Search. and in the case where the list is not ordered. others use stacks. and remove nodes from the lists are inefficient. Some final optimizations to consider relate to the open and closed lists in particular. some use heaps. This is very much akin to the spatial partitioning techniques you learn about in the Graphics Programming training series here at the Game Institute. you should be able to reuse many of the data structures and algorithms (quad-trees. They all have their benefits.3 Making A* More Efficient A* can be made more efficient so that it becomes a solid choice for our game needs. the algorithm will produce results which are not optimal.) that you learn about in Graphics Programming Module II in order to accomplish this objective. and hash maps provide quick node location. add nodes to the lists. while still others use hash maps.

if(m_n->m_y > 0 && m_n->m_x < (m_grid->getGridSize() .1)) { visitGridNode(x. } // Check the rest of the directions here // see the code for details // add the north-east node. } m_closed. early out if(m_grid->getCost(x.dequeue().1)) { visitGridNode(x. if(m_n->equals(*m_end)) { return REACHEDGOAL. // we are visitable newg = m_n->m_g + m_grid->getCost(x.. // add all adjacent nodes to this node // add the east node. return UNABLETOREACHGOAL. y. } int x. 61 .. } } void AStarMapGridWalker::visitGridNode(int x. x = m_n->m_x+1.2 Our Version of the Algorithm MapGridWalker::WALKSTATETYPE AStarMapGridWalker::iterate() { if(!m_open. return STILLLOOKING. y). y) == MapGridNode::BLOCKED) return..enqueue(m_n).isEmpty()) { m_n = (AStarMapGridNode*)m_open. y). y = m_n->m_y-1.2. int y) { int newg. if(m_n->m_x < (m_grid->getGridSize() . // if the node is blocked or has been visited. x = m_n->m_x+1.. y). y = m_n->m_y.

isEmpty()) We begin by checking to see if our open queue is empty.if( (m_open.m_parent = m_n. if(m_closed. } 62 . // remove it if(!m_open. else { // update this item's position in the // queue as its cost has changed // and the queue needs to know about it m_open. or requires more iterations.contains(&m_nodegrid[x][y])) && m_nodegrid[x][y]. we are already in the queue // and we have a cheaper way to get there.contains(&m_nodegrid[x][y])) m_closed.. m_n = (AStarMapGridNode*)m_open. If the queue is empty. It also returns the state of the graph traversal. We grab the top node off our queue.remove(&m_nodegrid[x][y]).m_g = newg.m_g + m_nodegrid[x][y]. m_open.. } else { m_nodegrid[x][y]. if(m_n->equals(*m_end)) { return REACHEDGOAL. and continue on until we find the goal node. see if it is traversable. we cannot find a path from the given start node to the goal node.dequeue(). it makes use of the priority queue to keep our nodes sorted in order of cost. our iterate interface begins inside the outer while loop of our algorithm snippet.remove(&m_nodegrid[x][y]). m_nodegrid[x][y].enqueue(&m_nodegrid[x][y]).. Let us take a closer look at this implementation. m_nodegrid[x][y]. } } } Here is the actual implementation from our demo. informing the caller if it has found the goal. if(!m_open.m_g <= newg) { // do nothing.contains(&m_nodegrid[x][y]) || m_closed.contains(&m_nodegrid[x][y])) m_open.m_h = goalEstimate( &m_nodegrid[x][y] ). Similar to Dijkstra’s Method. m_nodegrid[x][y]. MapGridWalker::WALKSTATETYPE AStarMapGridWalker::iterate() As with our other implementations.. is unable to find the goal.m_f = m_nodegrid[x][y].m_h.enqueue(&m_nodegrid[x][y]). update all its neighbors.

and the new cost is higher than the cost already computed for this child node. x = m_n->m_x+1.contains(&m_nodegrid[x][y])) && m_nodegrid[x][y]. If the grid node is not blocked. // if the node is blocked or has been visited. y). we call upon visitGridNode to do the work of visiting the neighbor node. First. y) coordinate. int y) As in previous implementations. we compute the new actual cost to the node via the current node. Otherwise. newg = m_n->m_g + m_grid->getCost(x. // add all adjacent nodes to this node // add the east node. we grab the node with the lowest perceived cost to the goal and make it our current node. Again we check to ensure that the node we want to visit is within the bounds of our grid. y) == MapGridNode::BLOCKED) return. m_nodegrid[x][y].m_f = m_nodegrid[x][y]..m_h.m_parent = m_n. we will want to visit this node. Also. we estimate the distance to the goal using our goal estimate (exactly like we did in Best First Search). If this node is the goal. m_nodegrid[x][y]. we return success.m_g <= newg) If the open queue or the closed list contains the node already. if( (m_open. 63 . We also set its actual cost to be the newly computed actual cost. early out if(m_grid->getCost(x. m_nodegrid[x][y]. visitGridNode does the work of visiting the grid node at the given (x.m_g + m_nodegrid[x][y].contains(&m_nodegrid[x][y]) || m_closed. its parent is set to be the current parent so that we know how we got here. it determines if the grid node to be visited is blocked.m_g = newg. y). Once it is determined that the child node needs visiting.m_h = goalEstimate( &m_nodegrid[x][y] ). y = m_n->m_y. it returns without visiting the node. if(m_n->m_x < (m_grid->getGridSize() .1)) { visitGridNode(x. we skip this node since we already have a path to this child node which is shorter. and if it is.For each iteration. Next.. void AStarMapGridWalker::visitGridNode(int x. } We then visit each neighbor of the current node. m_nodegrid[x][y].

enqueue(&m_nodegrid[x][y]). we check to see if the closed list contains this node. This will keep us from visiting it again unless we find a shorter path to it. we compute the total cost for this node by summing the actual cost with the estimate to the goal cost. If the queue does not contain the node. m_open. Now that we have obtained a deeper understanding of how A* and our implementation of A* works. return UNABLETOREACHGOAL.enqueue(&m_nodegrid[x][y]).Lastly. return STILLLOOKING. we return UNABLETOREACHGOAL to ensure we cease iterating since we will never find the goal.remove(&m_nodegrid[x][y]). we remove it since we plan to visit it again. m_closed. we requeue it into its proper position by removing it and adding it back. we check to see if the open queue already contains this node. Now that we have obtained a deeper understanding of how A* and our implementation of A* works. if(!m_open. we return STILLLOOKING to ensure we get more iterations so we can find the goal node. If it does. let us take a moment to walk through our implementation of the algorithm with an example graph from our demo. we add the current node to the closed list.contains(&m_nodegrid[x][y])) m_closed. After we have visited all of the current node’s neighbors. Now that we have obtained a deeper understanding of how A* and our implementation of A* works. If our open queue was empty. else { // update this item's position in the // queue as its cost has changed // and the queue needs to know about it m_open. 64 . Finally.contains(&m_nodegrid[x][y])) m_open. } Last. let us take a moment to walk through our implementation of the algorithm with an example graph from our demo. we add it. let us take a moment to walk through our implementation of the algorithm with an example graph from our demo. // remove it After we have computed our costs.enqueue(m_n).remove(&m_nodegrid[x][y]). If it does contain the node. if(m_closed.

Here we have an example map where we want to get from corner to corner. The path is clear-cut, but pathfinding algorithms do not know that. By making the path free, you should see how the heuristic will provide for the solution sooner than Dijkstra's would. Note that A* will arrive at the goal in 15 iterations rather than spend lots of time searching the expensive areas. (The accompanying figure tracks the graph, the open list, and the closed list at each iteration.)

In the first iteration, the choice is simple. The heuristic estimates were the same for all three possible nodes using Max(dx, dy), so regardless of which node we choose, the estimate is the same; but the cost is zero for (2, 2), so (2, 2) was on the top of the list because it is free. In the second iteration, the heuristic shows that going to (1, 3) is cheapest (it is also free); (1, 3)'s neighbors are put on the open list, and it is placed in the closed list. We then see that either (1, 4) or (1, 5) is our best bet (as they are free). Since (2, 4) was added last, it goes on top in this implementation, and as we search (2, 4)'s neighbors, (2, 5) becomes the clear winner as it is added after (2, 4). Each step has still taken us closer to the goal, so our heuristic has not done much for us yet.

As we search (3, 5)'s neighbors, we are now potentially headed upward, "away" from our goal. (4, 4) gets added on top because it was searched last, so we check it next. Here we see that our heuristic is going to make us backtrack a bit: we have some nodes in our open list with better heuristic values, ones which do not go away from our goal, so we try them first. We backtrack a bit and see that (4, 5) did not have any better ways to go. We still find no better paths here; notice our open list is getting very long now. Again we see that we have a tie, this time between (4, 4) and (4, 5). There is one more node to check in our open list before we can get back to where we were before.

Our heuristic helps by declaring that (6, 5) is a better choice than (6, 4). (5, 4) was added last, so we try it next. Here we are headed back along our path again: we go back to (5, 3), and we have (6, 3) which we can try, as it is closer to the goal, so we search there next. We check (6, 3) in the next iteration in order to see if we can get moving towards the goal again. There is still nothing better here. We then see that our next best step is (6, 6), so we move in that direction. Now we are moving nicely towards the goal.

We are nearing the end. We now have a choice between (7, 7) and (6, 8); (7, 7) is chosen as it was added last via the search. We then see that (8, 8) is closest to the goal (and is the goal), so we go there next. The next iteration ends the search. Our implementation does not remove the last node from the open list after we arrive at the goal; we simply quit and make the path.

Notice how many nodes are still in the open list. Even with a small grid, the list becomes very large very quickly; an efficient container class for this list is highly recommended for optimal use on larger graphs. Remember that Max(dx, dy) will not pick a node that is both closer in x and y, only the one that is closer of the two. This can result in a lot of ties, as you can see from this example.

2.3 Heuristics: A Few Examples

Let us take a moment to discuss heuristic estimates with A* and the multitude of ways they can be used. The greatest advantage of A* is its heuristic estimate, which is used to modify and optimize its search. At its heart, A* without the heuristic is simply a Breadth First Search. But with the heuristic, you can control how A* performs its search and which nodes it chooses to search first.

For example, the heuristic can be as simple as a cost estimate to the goal, or it can be the cost estimate to the goal coupled with performance penalties or bonuses for traveling on a specific type of terrain. You can use the state of the object (velocity, momentum, acceleration) to determine the quality or suitability of a node. With the appropriate heuristic, A* may even be used to solve the little puzzle game where you push the numbers around until they are in sequence. A* has even been used with performance heuristics to determine the shortest yet safest way to do load balancing on multi-server networked systems.
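The estimates mentioned here can be written directly. These helpers are a sketch using our own names; Max(dx, dy) is the Chebyshev distance used in the walkthrough above, and as noted, it produces frequent ties.

```cpp
#include <algorithm>
#include <cmath>

// Max(dx, dy): never overestimates on a grid with diagonal movement,
// but any two nodes that differ only in the smaller axis score the same.
inline int maxEstimate(int x, int y, int gx, int gy) {
    return std::max(std::abs(gx - x), std::abs(gy - y));
}

// Manhattan distance: suited to 4-way movement.
inline int manhattanEstimate(int x, int y, int gx, int gy) {
    return std::abs(gx - x) + std::abs(gy - y);
}

// Euclidean (straight-line) distance.
inline double euclideanEstimate(int x, int y, int gx, int gy) {
    double dx = gx - x, dy = gy - y;
    return std::sqrt(dx * dx + dy * dy);
}
```

For a goal at (8, 8), the nodes (3, 4) and (4, 3) both score Max(dx, dy) = 5, which is exactly the kind of tie the walkthrough keeps running into.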

The power of the heuristic is not to be underestimated.

2.4 A Simple Real Time Strategy Game Design

In order to examine how A*'s behavior can be modified by the heuristic, let us consider a very simple real time strategy game design. To start, some units and some basic terrain types will be defined. We will then define a heuristic which will determine a cost multiplier for the standard heuristic estimate (such as Max(dx, dy)) and define how each unit is affected by a given terrain type.

2.4.1 Terrain Types

Jungle: Jungle terrain consists of varied elevations mixed with dense tropical vegetation, as well as various types of underbrush. It is very demanding and nearly impossible to traverse other than by foot.

Forest: Forest terrain consists of regions of land lightly-to-densely covered with coniferous and deciduous trees. While not as demanding as jungle terrain, larger vehicles are unable to traverse this terrain due to tree spacing and underbrush.

Plains: Plains terrain consists of flat land to gently rolling hills covered with rowed crops and grasses. This terrain is fairly forgiving and is easily traversed by most modes of transportation.

Desert: Desert terrain can be anything from steppes to rolling dunes. This terrain is typically not restrictive to vehicles but may be a bit more costly for those on foot due to heat.

Foothills: Foothill terrain consists of rolling hills through small canyons. The terrain is typically very rocky, with uneven ground making travel difficult for most. Travel through this terrain is slow but possible.

Mountains: Mountain terrain consists of very steep slopes and rocky, uneven ground. This terrain is considered impassable to all but those on foot and, even so, is still very difficult to traverse.

Swamp: Swamp terrain consists of wetlands and fairly dense undergrowth. The ground itself is soft, which mires vehicles, especially heavy ones, and it is nearly impossible for large vehicles.

Water: Water terrain is any type of body of water, whether a river, lake, or ocean. Streams and small bodies of water contained within another terrain are not included, as they are typically easily fordable or avoidable without much added cost.

Roadway: Roadways are dirt, gravel, or paved surfaces which are wide enough for large vehicles to travel along, though possibly only one lane at a time. They provide for very easy travel, though they may not go in the direction desired.

Trail: Trails are dirt paths which are sufficiently cleared for those on foot or small vehicles to use for traveling more quickly. They are typically winding in nature, which may make them unsuitable for exclusive use.

2.4.2 Units

For now, let us define four separate types of units, defined by their primary mode of motion: Infantry, Wheeled Vehicles, Tracked Vehicles, and Hovercraft. We then have two more specific unit types for each generic type of unit, with the exception of the Hovercraft type.

Infantry: Infantry are soldiers traveling on foot. Being on foot gives them excellent maneuverability; they are capable of traversing terrains such as lightly forested areas, jungles, plains, deserts, foothills, mountains, and swampy terrain, as well as general footpaths and trails, although trails, roadways, and other non-varied terrain are preferred. Light infantry carries light backpacks and arms, thereby making them capable of moving around easily. Heavy infantry carries large backpacks, shoulder-mounted rocket launchers, or other similar encumbrances. Due to such heavy equipment, they are incapable of traversing jungles, mountains, and swamps, and their ability to function in the varied terrain of foothills is limited.

Wheeled Vehicles: Wheeled vehicles are vehicles that use wheels on axles. Being mostly lighter and smaller vehicles, they are capable of traversing terrains such as lightly forested areas, plains, deserts, and foothills, as well as using roadways and trails. Our two types of wheeled units are Jeeps and Armored Personnel Carriers. Jeeps are the smaller and lighter of the two and are more capable of traversing dense terrain. APCs, on the other hand, are heavier and larger vehicles, rendering them incapable of using trails.

Tracked Vehicles: Tracked vehicles are vehicles that maneuver by use of linked treads running over many sets of wheels. This gives them tremendous amounts of traction and surface area, allowing heavier chassis. The two tracked vehicles we have defined are tanks and mobile base units. Tanks are capable of traversing forested areas, deserts, plains, and foothills, as well as using normal roadways. Mobile base units are very large in size, rendering them incapable of traversing forested areas or foothills and restricting them from swamps and the use of trails.

Hovercraft: Hovercraft are vehicles that move around by riding a cushion of air produced by a large fan or turbine blowing down toward the ground. They can traverse all types of terrain aside from water with little impediment. The air is trapped by a skirt around the vehicle which makes close contact with the ground.

The skirt's close contact with the ground forms a seal. This seal keeps the air trapped under the vehicle, thereby allowing it to move about, though it limits its movement to areas without dense vegetation or uneven terrain. If the seal is broken, the vehicle is immobilized until the seal can be reformed.

2.4.3 Terrain Type vs. Unit Type Weighting Heuristic

In the example real time strategy game we are designing, we defined seven different types of units and ten different types of terrain. We will now define a Terrain Type versus Unit Type weighting heuristic. For each unit, we will store the cost weight of traversing each type of terrain. This cost weight will be used as a multiplier of the heuristic estimate.

Below we have a table showing which units are capable of traversing which terrain types. A check means the unit is capable of traversing the terrain (although possibly at great cost); an x means the vehicle is incapable of traversing the terrain at any cost. We also have a table of proposed unit type terrain modifiers. The values are completely arbitrary but should give you an idea of how it would work; ideally, you would adjust these numbers to make the units move in the fashion desired.

[Table: terrain cost weights for Light Infantry, Heavy Infantry, Jeep, APC, Tank, Mobile Base, and Hovercraft across Jungle, Forest, Plains, Desert, Foothills, Mountains, Roadway, Trail, Swamp, and Water; multipliers range from 1.0 (no penalty) through values such as 1.2, 2.5, and 3.0, up to 100.0 for effectively impassable combinations.]
If we wanted to be even more advanced, we could specify a unit task and have the task choose the type of terrain best suited for it. For instance, if a unit were in reconnaissance mode, it would prefer terrain types that are more suited to stealth. Any single node cost over a limit will be considered blocked, impassable terrain.
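A sketch of how such a weight table might look in code. The enum names and the numbers below are illustrative placeholders, not the actual demo values.

```cpp
#include <algorithm>
#include <cmath>

// Illustrative subset of the unit and terrain types defined above.
enum Unit    { LIGHT_INFANTRY, JEEP, UNIT_COUNT };
enum Terrain { PLAINS, FOREST, SWAMP, TERRAIN_COUNT };

// Cost-weight matrix W: multiplier per unit per terrain (100.0 ~ impassable).
static const double W[UNIT_COUNT][TERRAIN_COUNT] = {
    /* light infantry */ { 1.0, 1.2,   2.0 },
    /* jeep           */ { 1.0, 2.5, 100.0 },
};

// h'(n): the plain Max(dx, dy) estimate from earlier in the chapter.
inline double baseEstimate(int x, int y, int gx, int gy) {
    return (double)std::max(std::abs(gx - x), std::abs(gy - y));
}

// h(n) = h'(n) * W[u][t], where t is the terrain at the node being scored.
inline double weightedEstimate(Unit u, Terrain t, int x, int y, int gx, int gy) {
    return baseEstimate(x, y, gx, gy) * W[u][t];
}
```

With these numbers, a jeep scoring a swamp node three cells from the goal sees an estimate of 300, which effectively steers the search elsewhere.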

Using our newly defined set of weights, we can define our heuristic estimate function. We will use whichever heuristic estimate function we choose (Max, Manhattan, or Euclidean) and multiply it by our weight for this unit on this terrain. This gives us the function

    h(n) = h'(n) * W(u,t)

where h'(n) is our duly appointed heuristic estimate and W is the matrix of weights for each unit on each terrain type.

Bear in mind that modifying only the heuristic will not make the system work as if by magic. We will need to apply similar weights to the actual costs of the nodes for the terrain and for the given units; you will have to apply the weights to the nodes themselves as we traverse them, because the actual cost to the node is what drives A* to search for a short path. Otherwise, the heuristic estimates will be significantly overestimated.

2.4.4 Defining the Map

At this point, you have a set of units, a set of terrains, weights for each unit for each terrain type, and a heuristic to use those weights. Now, if you were going to start assembling your game, you would need to define your map. The best choice in this case is probably going to be a grid-oriented set of points that allow travel in all of the cardinal directions as well as diagonally. A 2D bitmap texture might serve you nicely here, where each texel equals a node. You will then assign a terrain type to each node, perhaps using a paint program if a texture is your map method of choice. Start by setting the weights for travel between each node to be 1 for all nodes, and let the heuristic decide the nodes on which to travel. Storing the unit weights can be done using a text file (e.g., an .ini file), or you can hard-code the values directly into your application. Finally, you need to create several units and instruct them to wander from place to place. Once done, you will have the beginnings of a real time strategy game of your very own, resembling the features discussed here! The artificial intelligence required to make our units perform intelligently is a topic to be addressed later in this course.

2.5 Simplifying the Search: Hierarchical Pathfinding

Hierarchical pathfinding is an approach to pathfinding which attempts to reduce the number of nodes the pathfinding algorithm has to consider when building a path. The concept is simple in theory, and not much more difficult in practice. Start by breaking down the gaming area into sub-areas. For each sub-area, break it up into further sub-areas. Repeat this process until it does not make sense to break up the sub-areas any further. The algorithm is then modified to find its way from the source to the goal via the lower resolution sets, then via the higher resolution sets included in the path found across the lower resolution sets. This does not always provide the absolute best path, but it helps reduce a larger problem into several smaller and more manageable ones. Let us look at a few examples to see how this works.

2.5.1 A Map of the US

Let us say we have a map of the United States, and we want to find a path from one city to another. Suppose we wanted to take a trip from Chicago to Las Vegas. It would be terribly inefficient to calculate a path using all of the roads of all of the states. Instead, we will break up the United States first into states and then into counties. We first find the states our starting and ending cities are in, and find our way from state to state. Figure 2.1 shows the need to go from Illinois, through Iowa, through Nebraska, through Colorado, through Utah, and into Nevada. We then determine our path across each of the states we know need to be crossed, one at a time, going from county to county. We begin by determining the best way to get to Iowa first, and from there to Nebraska, and from there to Colorado, and from there to Utah, and finally to Nevada. You can see how this limits the scope of our search, as we do not need to worry about getting across the next state until we have already traversed the prior state; it also allows us to spread the processing of the search across many game cycles.

Figure 2.1: A Map of the US
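The coarse level of such a search can be sketched as a plain breadth-first search over a tiny hand-built state graph; the state names and adjacency below are illustrative, and the fine (road-level) search would then run one state at a time.

```cpp
#include <map>
#include <queue>
#include <string>
#include <vector>

// Plan the sequence of states to cross before any road-level search is run.
// Returns the state path from `from` to `to`, or empty if no route exists.
std::vector<std::string> statePath(
        const std::map<std::string, std::vector<std::string>>& adj,
        const std::string& from, const std::string& to) {
    std::map<std::string, std::string> parent;
    std::queue<std::string> open;
    open.push(from);
    parent[from] = from;              // marks `from` as visited, parent of itself
    while (!open.empty()) {
        std::string s = open.front(); open.pop();
        if (s == to) {
            // walk the parent links back and build the path front-to-back
            std::vector<std::string> path;
            for (std::string n = to; ; n = parent[n]) {
                path.insert(path.begin(), n);
                if (n == from) return path;
            }
        }
        auto it = adj.find(s);
        if (it == adj.end()) continue;
        for (const std::string& n : it->second)
            if (!parent.count(n)) { parent[n] = s; open.push(n); }
    }
    return {};                        // no route at the coarse level
}
```

Each consecutive pair of states in the returned list then bounds a much smaller fine-grained search, which is where the game-cycle spreading mentioned above pays off.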

2.5.2 A Dungeon

Another example is a dungeon with multiple rooms and levels. If we wanted to get from one room on one level to another room on another level, we would first find out which levels we would need to get through. Then we would determine which rooms we needed to pass through on each of those levels. Finally, we would find out how to get across each of those rooms.

Take, for example, the dungeon shown in Figure 2.2. This dungeon has many rooms, and it could be broken up into levels first and then broken down further into rooms on each level. If each room were to have various obstacles and pillars around which we needed to navigate, you might think that it would make sense to figure out how to exit the room before worrying about how to get all the way across the dungeon. But before you can concentrate on finding a path from your current location to the door, you first have to figure out which door is the most appropriate one to start from. In some implementations, you might decide to simply let the pathfinder work its way back to the correct door and then use only the game engine's collision system to navigate from the current position to the selected door. In most cases, however, a pathfinder is used even for this task (albeit in conjunction with the collision engine). Regardless of how you implement the close-distance navigation, the larger point here is the breakdown of navigation tasks from low resolution to high resolution.

Figure 2.2: A Dungeon

2.5.3 A Real Time Strategy Map

For a real time strategy game map, we could divide the map into sixteen squares, then divide each of those sixteen squares into sixteen squares again, and repeat this breakdown until we had squares of small enough size that we could quickly traverse them. We would then traverse the largest squares to determine which ones needed to be crossed, then determine which smaller squares would need to be crossed from that point, and so on. This method is very much like the quad-tree method you will study in the Graphics Programming course series here at the Game Institute, so you might wish to apply some of that knowledge to tackling this problem.

2.6 Pathfinding on Non-Gridded Maps

In the world of gaming, it cannot always be assumed that we are dealing with maps that are based on grid systems. Yet we still must find our way around the maps. Let us cover some of the most common methodologies for traveling from one place to another in worlds where there are no pre-defined natural grids.

2.6.1 Superimposed Grids

One solution is to make the non-gridded world a gridded world. While there are many ways to accomplish this, usually a grid is superimposed over the gaming area, and this is where our pathfinding will be done. For multi-level systems, we can create grids for each level and define entry and exit points to move from one level to another.

In non-gridded environments, it is typical to "acquire" the larger pathfinding graph by getting to the closest node, and then pathfinding to the closest node to your destination. Next-step-closer techniques are utilized to move around locally while dodging around and fighting, and the higher level pathfinding is only used when trying to go longer distances. Once you acquire the closest point to the goal on your larger pathfinding graph, you fall back to next-step-closer techniques to access the actual goal position.

78 .6. Coarse networks result in jagged paths and zig-zagging.2. These systems are also referred to as waypoint networks. We will look at waypoint networks in more detail a little later in the course as they will be our method of choice for 3D world navigation. the finer the movements your entities will have. These points are then used by the pathfinding algorithm to determine where you can walk. The finer the network of points. and draw lines from each point to every other point such that the lines do not cross through any obstacles. The idea is to place points around the obstacles.2 Visibility Points / Waypoint Networks Visibility points are a common way of determining where obstacles are in order to avoid them.

2.6.3 Radial Basis

Radial basis functions are functions that look similar to a normal distribution curve. Centering one of these functions on each of the obstacles allows the pathfinding algorithm to determine distances to obstacles and incur higher cost as it gets closer to an obstacle. The algorithm can then travel in the direction that induces the least amount of cost. These types of systems tend to be more expensive since the radial basis function contains an exponential (e^x) expression.

2.6.4 Cost Fields

Similar to the radial basis method, this method surrounds obstacles with cost fields. Cost fields are typically implemented using continuous functions, and the pathfinding method simply uses gradient descent or the Newton-Raphson method to find the lowest cost and travel in that direction. The problem with this method is that it can get caught in local minima, requiring some sort of agitation method to get back out. Moreover, like the radial basis method, it can be computationally expensive.
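A sketch of a cost-field step, assuming Gaussian repulsion terms around each obstacle plus a simple distance attraction toward the goal; the field shape and constants are illustrative, and a real system would also need the agitation mechanism mentioned above to escape local minima.

```cpp
#include <cmath>
#include <vector>

struct Obstacle { double x, y; };

// Total field cost at (x, y): distance to the goal plus a Gaussian "bump"
// per obstacle, so cost rises sharply as we approach an obstacle.
double fieldCost(double x, double y, double gx, double gy,
                 const std::vector<Obstacle>& obs) {
    double c = std::hypot(gx - x, gy - y);       // attraction toward the goal
    for (const Obstacle& o : obs) {
        double d2 = (x - o.x) * (x - o.x) + (y - o.y) * (y - o.y);
        c += 10.0 * std::exp(-d2);               // repulsion near each obstacle
    }
    return c;
}

// One gradient-descent step of length `step`, with the gradient sampled by
// central differences; the entity simply moves downhill in the field.
void descend(double& x, double& y, double gx, double gy,
             const std::vector<Obstacle>& obs, double step = 0.1) {
    const double e = 1e-3;
    double dx = (fieldCost(x + e, y, gx, gy, obs) -
                 fieldCost(x - e, y, gx, gy, obs)) / (2 * e);
    double dy = (fieldCost(x, y + e, gx, gy, obs) -
                 fieldCost(x, y - e, gx, gy, obs)) / (2 * e);
    double len = std::hypot(dx, dy);
    if (len > 0.0) { x -= step * dx / len; y -= step * dy / len; }
}
```

Because the exponential terms fall off smoothly, an unlucky arrangement of obstacles can still form a bowl in the field where descend() stalls, which is exactly the local-minimum failure described in the text.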

2.6.5 Quad-Trees

This method is a combination of hierarchical pathfinding and the grid method. The area is cut into quads, and then each of those quads is cut into quads. The largest quad that can be formed without crossing the boundary of an obstacle is then created and stored. Recursion occurs to some depth, which limits the number of nodes. The centers and corners of the squares are used as route points. This is a tricky method, but it also provides fine pathfinding near obstacles. For more information about creating quad-trees, please consult the Graphics Programming Module II course available here at the Game Institute.

2.6.6 Mesh-Based Navigation

For 3D worlds consisting of polygonal data, graphs can be built from the mesh data itself. The idea is to define the polygons in your world that are 'floor-walkable' and build an adjacency list for each one. Walls and other obstacle polygons will not be included in the adjacency list. In this way, it can be determined which floor polygon can be traversed to reach another traversable floor polygon. Additionally, as other moving entities cross the polygons, the larger polygons can be dynamically tessellated and the ones the entities are standing on removed so that they can be circumnavigated. Once an entity moves off a polygon, it can be re-added into the adjacency lists and re-merged if all of its pieces are available again. This is a fairly complex system, and it requires a fair amount of work on the part of both the artist/level designer and the programmer, but it allows the use of world geometry as your pathfinding graph. It will not be demonstrated in this course; our preference for 3D world navigation will be waypoint networks, and we will discuss those techniques a little later in our studies.
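The quad subdivision described in the Quad-Trees section above can be sketched as follows, assuming a caller-supplied blocked() test; the names are ours and the storage is deliberately flat rather than a linked tree.

```cpp
#include <functional>
#include <vector>

struct Quad { double x, y, size; };   // top-left corner and edge length

// Recursively split a square region: store it whole if it is obstacle-free,
// otherwise cut it into four and recurse until the depth limit is reached.
// `blocked` reports whether any obstacle overlaps the given square.
void subdivide(double x, double y, double size, int depth,
               const std::function<bool(double, double, double)>& blocked,
               std::vector<Quad>& out) {
    if (!blocked(x, y, size)) {       // obstacle-free: keep the whole quad
        out.push_back({x, y, size});
        return;
    }
    if (depth == 0) return;           // still blocked at max depth: discard
    double h = size / 2.0;
    subdivide(x,     y,     h, depth - 1, blocked, out);
    subdivide(x + h, y,     h, depth - 1, blocked, out);
    subdivide(x,     y + h, h, depth - 1, blocked, out);
    subdivide(x + h, y + h, h, depth - 1, blocked, out);
}
```

The stored quads' centers and corners then become the route points fed to the graph search, exactly as the section describes; the depth limit is what keeps the node count bounded.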

2.7 Algorithm Design Strategy

In order to allow the application to show each step of the pathfinding algorithm's decision making process, the algorithms need to be designed so that an iteration function is repeatedly called which updates the display between steps, rather than finding the entire path in a loop. This method is useful because it also allows the update rate of the algorithm to be changed dynamically, thereby allowing quicker or slower playback of the graph search. Of course, your actual game implementation may not have this one-step-at-a-time design requirement, but it is easy enough to modify the approach to generate either the complete path in one pass or to do n iterations before returning.

2.7.1 Class Hierarchy

MapGridWalker is the base class; BreadthFirstSearchMapGridWalker, BestFirstSearchMapGridWalker, DijkstrasMapGridWalker, and AStarMapGridWalker all derive from it.

2.7.2 MapGridWalker Interface

The class hierarchy is designed around the base class MapGridWalker. This class provides the basic interface common to all MapGridWalker types. It tells us if weighted graphs are supported, whether or not heuristics are supported, what types of heuristics are supported, and so on. This architecture provides for the ability to add heuristics, as well as to add new types of MapGridWalkers, simply and easily. In our demo, there are two classes which support weighted graphs, and two classes which support heuristics.

The MapGridWalker interface is fairly straightforward. Let us go over it a bit at a time.

    class MapGridWalker
    {
    public:
        typedef enum WALKSTATE { STILLLOOKING, REACHEDGOAL,
                                 UNABLETOREACHGOAL } WALKSTATETYPE;
        typedef std::vector<std::string> stringvec;

        MapGridWalker();
        MapGridWalker(MapGrid* grid) { m_grid = grid; }
        virtual ~MapGridWalker();

        virtual void drawState(CDC* dc, CRect gridBounds) = 0;
        virtual WALKSTATETYPE iterate() = 0;
        virtual void reset() = 0;
        virtual bool weightedGraphSupported() { return false; }
        virtual bool heuristicsSupported() { return false; }
        virtual stringvec heuristicTypesSupported()
        {
            stringvec empty;
            return empty;
        }
        virtual std::string getClassDescription() = 0;

        void setMapGrid(MapGrid *grid) { m_grid = grid; }
        MapGrid *getMapGrid() { return m_grid; }

    protected:
        virtual void visitGridNode(int x, int y) = 0;

        MapGrid *m_grid;
    };

MapGridWalker supports a default constructor as well as a constructor which supplies a MapGrid upon which to walk. Accessors to set the MapGrid are also provided so that the default constructor can be used.

    MapGridWalker();
    MapGridWalker(MapGrid* grid) { m_grid = grid; }

MapGridWalker defines the WALKSTATE enumeration, which provides the application with an understanding of the walker's progress. STILLLOOKING informs the application that it must call iterate again to keep looking for the goal. REACHEDGOAL informs the application that the goal has been reached and iterate need not be called again. UNABLETOREACHGOAL informs the application that the walker has failed to find a path to the goal and further calls to iterate will not make any progress.

    typedef enum WALKSTATE { STILLLOOKING, REACHEDGOAL,
                             UNABLETOREACHGOAL } WALKSTATETYPE;

The virtual drawState() function allows the specific implementation of the MapGridWalker to draw its current state into a Windows Device Context (CDC) using the bounds of the window provided in the rect gridBounds. This allows walker-specific drawing to be done in an object-oriented fashion.

    virtual void drawState(CDC* dc, CRect gridBounds) = 0;

The virtual iterate() method makes one iteration of the walker's algorithm before returning a value corresponding to its current state (using the WALKSTATE enum). The virtual reset() method resets the walker's state to the start node and reinitializes its map grid to mark all nodes as not visited.

    virtual WALKSTATETYPE iterate() = 0;
    virtual void reset() = 0;

The virtual weightedGraphSupported() method defaults to false, but the specific implementation may return true if the walker supports navigation of weighted graphs.

    virtual bool weightedGraphSupported() { return false; }

The virtual heuristicsSupported() method also defaults to false, but may be overloaded in the specific class to return true if the walker supports the use of heuristics.

    virtual bool heuristicsSupported() { return false; }

The heuristicTypesSupported() method returns an empty vector of strings in the default implementation, but may be overloaded by a walker that supports heuristics to provide a vector of strings containing the descriptions of the heuristics supported. This vector is used by the application to populate the heuristics dropdown.

    virtual stringvec heuristicTypesSupported()

The virtual method visitGridNode() allows derived MapGridWalkers to visit the grid node at the given coordinate in their own specific fashion.

    virtual void visitGridNode(int x, int y) = 0;

There are also accessor methods to obtain the specific walker class's description (used to populate the pathfinding method dropdown), and to set or get the MapGrid object which the walker will navigate.

    void setMapGrid(MapGrid *grid) { m_grid = grid; }
    MapGrid *getMapGrid() { return m_grid; }
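To see how an application might drive this interface, here is a minimal sketch of the iterate() contract with a stub walker standing in for a real one; CountdownWalker and runToCompletion are purely illustrative names, not part of the demo.

```cpp
#include <string>

// The three-way progress report the text describes.
enum WALKSTATETYPE { STILLLOOKING, REACHEDGOAL, UNABLETOREACHGOAL };

// Pared-down version of the walker interface.
class Walker {
public:
    virtual ~Walker() {}
    virtual WALKSTATETYPE iterate() = 0;
    virtual void reset() = 0;
};

// Stub that "finds the goal" after a fixed number of iterations.
class CountdownWalker : public Walker {
public:
    explicit CountdownWalker(int steps) : m_steps(steps) {}
    WALKSTATETYPE iterate() override {
        return --m_steps > 0 ? STILLLOOKING : REACHEDGOAL;
    }
    void reset() override {}
private:
    int m_steps;
};

// The application's driver: call iterate() repeatedly, up to a per-frame
// budget, until the walker reports success or failure. Returns the number
// of iterations used, or -1 if the budget ran out (resume next frame).
int runToCompletion(Walker& w, int maxIterations) {
    int used = 0;
    while (used < maxIterations) {
        ++used;
        WALKSTATETYPE s = w.iterate();
        if (s != STILLLOOKING) return used;   // goal found or unreachable
    }
    return -1;                                // budget exhausted
}
```

Raising or lowering maxIterations per display update is exactly the "quicker or slower playback" knob described at the start of this section.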

2.8 Grid Design Strategy

(Class diagram: MapGrid contains MapGridCell objects; the walkers build a MapGridPriorityQueue or std::queue<MapGridNode*> of MapGridNode objects; AStarMapGridNode derives from MapGridNode.)

The MapGrid is simply a two-dimensional array of MapGridCell objects. Each of these cells has a cost associated with it for moving into it. The MapGrid class also keeps track of the start and end indices into the grid for the walkers to use in their search.

The walkers themselves build a PriorityQueue or an STL queue which contains MapGridNode objects, which are used to walk MapGrids. MapGridNode objects keep track of which MapGridCell they represent by storing the row and column index into the MapGrid. MapGridNode objects also keep track of state information for the walker class, such as current traversal cost and visited state. The MapGridNode is segregated from the MapGrid itself so that the grid can be discarded and replaced with another type of map environment. Note, however, that the walkers themselves must then be rewritten to use the different map types. Walker-specific MapGridNodes can easily be subclassed from MapGridNode, as in the instance of the AStarMapGridNode, since A* requires more cost state variables to be stored than the other methods discussed. These special-case MapGridNodes may also be used for other special-case walkers to allow for extensibility.

2.8.1 MapGrid Interface

    class MapGrid
    {
    public:
        class GridCell
        {
        public:
            GridCell();
            GridCell(int cost);
            GridCell(const GridCell& copy);
            GridCell &operator=(const GridCell& rhs);

            inline int getCost() const { return m_cost; }
            void setCost(const int cost) { m_cost = cost; }

        private:
            int m_cost;
        };

        MapGrid(int gridsize);
        virtual ~MapGrid();

        void setCost(int x, int y, const int cost);
        int  getCost(int x, int y) const;
        int  getGridSize() const { return m_gridsize; }

        void setStart(int x, int y) { m_startx = x; m_starty = y; }
        void setEnd  (int x, int y) { m_endx = x;   m_endy = y;   }
        void getStart(int &x, int &y) const { x = m_startx; y = m_starty; }
        void getEnd  (int &x, int &y) const { x = m_endx;   y = m_endy;   }

    private:
        GridCell **m_grid;
        int m_gridsize;
        int m_startx, m_starty, m_endx, m_endy;
    };

The MapGrid class is fairly simple. Let us talk about it in a little more depth. It contains a nested class, GridCell, which holds only the cost needed to enter that cell during a traversal. The contained class GridCell has a default constructor which initializes the m_cost variable to 1, as well as a constructor which assigns the cost passed in. There is also a copy constructor for deep copies, and the assignment operator helps prevent problems with copy construction while also being useful for general assignment. Accessors provide access to the m_cost variable of the GridCell; the cost value is encapsulated and requires the accessors to gain access.

The MapGrid itself consists of a two-dimensional array of GridCell objects. The MapGrid class has various accessors for setting and getting the cost of a given GridCell, the start node, and the end node.

The MapGrid provides only one constructor, which expects a grid size and constructs a two-dimensional array of GridCell objects. The class owns this array, which it allocates during construction, and it is aware of the array's size as well as the start and end positions for the pathfinding system. The destructor is virtual in order to allow potential inheritance and proper polymorphic destruction. The size of the grid, the cost of a given node, the start position, and the goal position can all be obtained and modified via the accessors.

2.8.2 MapGridNode Class

The MapGridNode class is the interface via which the walkers navigate the MapGrid object. These nodes are created and placed into queue objects which the walkers use to decide which nodes are to be traversed next. Let us take a closer look at this class.

    class MapGridNode
    {
    public:
        // constructors
        MapGridNode()
        {
            m_cost = m_x = m_y = 0;
            m_visited = false;
            m_parent = NULL;
        }
        MapGridNode(int x, int y, MapGridNode *parent,
                    bool visited, int cost)
        {
            m_x = x;
            m_y = y;
            m_parent = parent;
            m_visited = visited;
            m_cost = cost;
        }
        MapGridNode(const MapGridNode &copy);

        // destructor
        virtual ~MapGridNode() { m_parent = NULL; }

        // accessors
        void setParent(MapGridNode* parent) { m_parent = parent; }

        bool getVisited() const { return m_visited; }
        void setVisited(bool visited) { m_visited = visited; }

        virtual int  getCost() const;
        virtual void setCost(int cost);

        virtual MapGridNode &operator=(const MapGridNode &rhs);
        virtual bool operator==(const MapGridNode &rhs);
        virtual bool operator<(const MapGridNode &rhs);
        virtual bool operator>(const MapGridNode &rhs);

        // helpers
        bool equals(const MapGridNode &rhs) const
        { return ((m_x == rhs.m_x) && (m_y == rhs.m_y)); }

        // members
        int m_x, m_y;            // the coord of the grid cell
        int m_cost;
        bool m_visited;
        MapGridNode *m_parent;

        const static int BLOCKED;
    };

The MapGridNode class provides a default constructor which initializes all of the values to null defaults. There is also a constructor which takes all the parameters necessary to build the node in place, and a copy constructor for deep copies. An assignment operator is provided to help prevent overuse of copy construction, as well as for general assignment usage. The destructor is virtual to allow for correct polymorphic destruction of derived classes.

Notice that getCost() and setCost() are virtual, allowing subclasses (such as AStarMapGridNode) to provide their own cost metrics. The pointer to the parent node is extremely important, as it is the only way we can know how the walker class got to this node. By traversing these pointers in a linked-list fashion, we are able to find our way back from the ending node to the starting node to build our path.
These classes contain the intermediate state information needed by the walker classes to properly navigate the MapGrid. The MapGridNode base class provides indices into the MapGrid for the node's location, a visited flag, a cost value (which may be interpreted by the specific walker class however it needs), and a pointer to the parent node for completed path traversal. There is also a constant, BLOCKED, to represent blocked cost.

Various accessors are provided to obtain and manipulate the class data, such as the current parent, visited state, and cost metrics. The cost metrics are virtual to allow derived classes to define their own. Various comparators are defined to allow comparisons to be made; STL requires the < operator in order to use this object in sorted containers, and the other operators are for convenience. The equals method helps derived classes perform default comparisons while adding their own checks in derived comparator operators.
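Since the parent pointers are the only record of how the walker reached each node, reconstructing the final path is just a walk back along that chain. The sketch below illustrates the idea with a minimal stand-in node type (not the book's actual MapGridNode class — only the coordinate and parent pointer are kept):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Minimal stand-in for MapGridNode: just the grid coordinate and a
// parent pointer. The real class also carries cost and visited state,
// but only the parent chain matters for rebuilding the path.
struct Node {
    int x, y;
    Node *parent;       // NULL for the start node
};

// Walk the parent chain from the goal node back to the start node,
// then reverse it, yielding the path in start-to-goal order.
std::vector<std::pair<int, int>> buildPath(Node *goal) {
    std::vector<std::pair<int, int>> path;
    for (Node *n = goal; n != nullptr; n = n->parent)
        path.push_back({n->x, n->y});
    std::reverse(path.begin(), path.end());
    return path;
}
```

Given a chain (0,0) → (1,0) → (1,1) linked by parent pointers, buildPath on the (1,1) node returns the three coordinates in start-to-goal order.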

2.8.3 MapGridPriorityQueue Class

    class MapGridPriorityQueue
    {
    public:
        MapGridPriorityQueue();
        ~MapGridPriorityQueue() { makeEmpty(); }

        bool isEmpty() { return m_size == 0; }
        void makeEmpty();

        void enqueue( MapGridNode *node );
        MapGridNode* dequeue();
        void remove(MapGridNode *node);
        bool contains(MapGridNode *node) const;

    private:
        class QueueNode
        {
        public:
            QueueNode()
            { m_node = NULL; m_next = m_back = NULL; }
            QueueNode(MapGridNode *node)
            { m_node = node; m_next = m_back = NULL; }

            MapGridNode *m_node;
            QueueNode *m_next;
            QueueNode *m_back;
        };

        QueueNode *m_head;
        QueueNode *m_tail;
        unsigned int m_size;
    };

The MapGridPriorityQueue class is a linked list of QueueNode objects, each of which contains a pointer to a MapGridNode. It keeps the list sorted in order of the MapGridNodes' cost, via the getCost() method of MapGridNode, with the cheapest nodes at the top of the list. Let us take a look at this class in more depth.

The MapGridPriorityQueue provides only the default constructor, which initializes the list as empty, and a destructor which empties the queue.

The class provides a few methods which are important to note. The first is the enqueue() method, which inserts a MapGridNode into the list ordered by its cost. The second is the contains() method, which searches the list for a MapGridNode and returns true if the node is in the list. The next is the remove() method, which removes a MapGridNode from the list if it is present. It is critically important to note that this class does not re-sort the list if a node pointed to by the list has its cost changed. In these instances, the node must be removed from the list and reinserted. Finally, there are the isEmpty() and makeEmpty() methods, which determine whether the list is empty and empty the list, respectively.

The makeEmpty method empties the queue and properly frees memory as necessary, releasing both the MapGridNode objects the queue nodes point to and the QueueNode links themselves, including the head and tail nodes. The destructor simply calls makeEmpty().

The contained class QueueNode is the actual data which the MapGridPriorityQueue holds. It has a default constructor which assigns its data to null, and a constructor which takes the MapGridNode the QueueNode is to hold.

The isEmpty method reports whether the queue is empty. The dequeue method removes and returns the node with the lowest cost in the queue. The enqueue method adds a node to the queue, placing it in sorted order based on its cost. The remove method simply removes a node from the queue. The contains method searches the queue for a node and reports whether it is contained in the queue.

The QueueNode holds a pointer to the MapGridNode it carries, as well as pointers to the next queue node and the previous queue node. The MapGridPriorityQueue itself stores a size, a head pointer to the front of the queue, and a tail pointer to the back of the queue. You will see all of these members in action when you explore the source code for the pathfinding lab project.
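The re-sort caveat mentioned above is worth illustrating. The sketch below uses a simplified sorted list of node pointers (stand-in names, not the book's classes) to show the remove-and-reinsert pattern a walker must follow when it lowers a node's cost:

```cpp
#include <list>

// A minimal stand-in for MapGridPriorityQueue: a list of node pointers
// kept sorted by cost, cheapest at the front. Like the class in the
// text, it does NOT re-sort itself if a node's cost changes after
// insertion -- the caller must remove() and re-enqueue() the node.
struct PQNode { int cost; };

struct SortedQueue {
    std::list<PQNode*> items;

    void enqueue(PQNode *n) {
        // insert before the first node with a strictly greater cost
        auto it = items.begin();
        while (it != items.end() && (*it)->cost <= n->cost) ++it;
        items.insert(it, n);
    }
    PQNode *dequeue() {                // remove and return the cheapest
        PQNode *n = items.front();
        items.pop_front();
        return n;
    }
    void remove(PQNode *n) { items.remove(n); }
    bool isEmpty() const { return items.empty(); }
};
```

When a walker relaxes a node's cost, the correct sequence is `q.remove(n); n->cost = newCost; q.enqueue(n);` — changing `n->cost` alone would silently leave the list out of order.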

2.9 MFC Document/View Architecture and Our Demo

[Diagram: CPathfindingApp owns CMainFrame; CMainFrame hosts the C3DView and CPathfindingFormView, both of which reference the CMapGridDoc, which holds the MapGrid and MapGridWalker objects.]

MFC has an interesting architecture called Document/View. The core idea is that you have a single Document object containing all information relating to a single instance of the application's data (such as a Word document, an Excel spreadsheet, or in our case, a MapGrid) and any number of View classes meant to display that data in some fashion. Every View has a pointer to its Document so that it can get data from the document to display itself. MFC associates the view and the document via a DocumentTemplate object.

In our application we have a CPathfindingApp, which is the controlling entity. It contains a CMainFrame, which is the window frame itself and which contains the CPathfindingFormView, the C3DView, and the CMapGridDoc. In our case the document contains a MapGrid and one of each of the MapGridWalker objects. We let the view update the MapGrid and instruct the MapGridWalker to do its iteration via the CMapGridDoc. The C3DView and CPathfindingFormView can then be updated to display the correct results based on what information is in the CMapGridDoc.

Note: We will not discuss in too much detail how MFC does its work in this fashion, as it is outside the scope of this course. It is a large API, but not difficult to learn, and it is a powerful system that allows you to quickly assemble all sorts of useful Windows applications. Those of you who have taken the C++ Programming courses here at Game Institute are in a good position to begin your investigations into MFC, given your current level of Windows programming experience. Perhaps exposure to MFC here in this course will inspire you to look further.

2.9.1 The Form View Panel

Our panel is designed so that we can click on the grid to make our map, select a pathfinding method and any heuristics that might be appropriate, set the start, the finish, and the update rate, and make it go. We capture mouse clicks via the CPathfindingFormView (which can be done using MSVC's class wizard) and do a hit test against the grid. Whichever grid square we click is the one we modify (provided the user clicked one).

The buttons for the grid costs are enabled and disabled based upon which pathfinding method is selected in the Pathfinding Method dropdown. When the selection changes, we check to see if the method supports weighted graphs and enable or disable the grid costs buttons as appropriate. The Heuristic Method dropdown is likewise enabled or disabled and populated based on the selected method: we check whether the selected method supports heuristics and, if so, we enable the dropdown and populate the list from the specific walker. The Weight textbox and spinner are only enabled if heuristics are enabled.

The set start and end buttons enable setting the start and end points of the grid when selected. The Find Path button starts the app searching for a path using the currently selected method and heuristic (if any). The update rate scroller adjusts the frequency of the timer which drives the iterate calls when the app is in Find Path mode.

Lastly, the Generate Terrain button takes the current grid and generates a small terrain for the 3DView. It uses A* to find the shortest path to the goal and makes a sandy road along the path. We will not discuss in detail how it does all of that, as it is outside the scope of this course. If you would like to know more about the 3D aspects of this course, it is suggested that you explore the Graphics Programming courses offered here at the Game Institute (Module I, Chapter Seven in this case).

2.10 Conclusion

This brings us to the end of our core pathfinding discussions. In this chapter we were able to introduce one of the most powerful algorithms available to us: A*. We talked about how it works and how we can improve it if we find performance becoming a concern. We also talked about a number of hierarchical pathfinding methods that can prove to be useful when dealing with large maps or non-gridded worlds.

In the next chapter we will begin to transition between pathfinding methods and decision making. You will see that during the development of a behavioral system called flocking, we will begin to blur the line between these disciplines and bring elements of ideas from both camps to bear. While our flocking system will not directly implement graph-oriented pathfinding as we have done in these last two chapters (although it will take advantage of it later in the course), it will still provide a form of environment navigation, mostly within a more localized area. More on this in the next lesson.

It is well worth your time to try implementing some of the hierarchical methods we discussed using the source code you obtained during your studies in the Graphics Programming series. There is much there that can be applied, both with respect to spatial partitioning and to rendering on the whole. You are encouraged to start bringing some of those tools to bear as you work your way through this course. For example, try to get your animated characters (Graphics Programming Module II) to traverse various gridded scenes (which you can create easily in GILES™). Or perhaps try your hand at implementing something using the RTS design we presented earlier in the lesson. Of course, before you try any of these projects, make sure that you understand the demonstration that accompanies this chapter.


Chapter 3 Decision Making I: Flocking

Overview

In the previous chapters, we looked at how to implement some of the more common pathfinding methods used in today's games. We also touched on the different types of artificial intelligence, which we mentioned included pathfinding, decision making, classification, and life systems. The next type of artificial intelligence we will discuss in the course is decision making. In transitioning from pathfinding systems to decision making systems, we have decided to begin with a method known as flocking. Flocking is actually an interesting mix of the two concepts, whereby decisions are made about how to perform pathfinding based on what the rest of the flock is doing. This discussion will set the stage for other forms of decision making that we will look at in the next chapter.

In this chapter we answer the following questions:

What is flocking?
What are the common components of flocking?
How are the common components of flocking implemented?

3.1 Introduction to Flocking

Flocking is a very popular and commonly used AI grouping concept that has existed in computing for a long time. Flocking systems are often used to simulate the behaviors of schools of fish or flocks of birds. They are also helpful when simulating movement for other entity groupings. Group/squad behavior is based on a similar concept, where groups of entities cooperate with one another to make decisions and navigate the environment. We can also refer to a flock as a group, as we will see later. While the individual entities participating in a flock are also known as boids in some algorithms, they will be referred to as entities in this text. This is done in order to make it easier to extend the terminology to group/squad based behavior later in the course.

3.1.1 Behavior Based Movement

Flocking is classified as "behavior based movement." That is, there is a set of behaviors which each entity in the group applies to determine its movement. Each behavior may or may not be influenced by entities that are nearby, and each will determine the movement for an entity which is appropriate for the particular behavior. For instance, a "move up" behavior would simply apply a movement vector that takes the entity straight up. There are four key movement behaviors typically applied in flocking systems, and we will examine each of them in detail shortly. Using these four behaviors will result in fairly realistic representations of flocks of birds or schools of fish. First, let us set forth a few standards which we will be using in our examples.

Figure 3.1 shows a group of entities represented as colored triangles. The yellow filled triangle in the center represents the entity of interest; its movement choices will be examined and it will be referred to as "our entity" in this section. The empty blue triangles are other entities which our entity can perceive. The green entities are outside our entity's perception range, even though they belong to its group, and will be ignored by our entity.

[Figure 3.1]

In Figure 3.2, the red section of the polar grid indicates the area in which our entity can perceive the other members of its group. The radial lines on the underlying polar grid show the relative positions of all other entities with respect to our entity. Note that if desired we can limit the perception of our entity to a cone rather than a full 360 degree area. For instance, if we wanted to reduce our entity's field of view to 144 degrees (the top two arcs), then the four blue entities outside the cone range would become green (Figure 3.2).

[Figure 3.2]

Note also the concentric rings in the polar grid. These can be used to easily visualize the distance from our entity to the other entities. The separation behavior has a particular interest in this information, since it is influenced by distance. With these standards set, let us now take a more detailed look at the behaviors.

3.1.2 Separation

The separation behavior strives to keep the entities in a group from clumping too closely together without letting them drift too far apart. That is, the separation behavior strives to keep a certain fixed distance between entities. Let us say that in our separation example we want a distance of two grid spaces between each entity. It follows that any entity within the bounds of the red circle is too close. As before, the blue entities are within our bounds of knowledge, so we pay attention to them, while the green entities are too far away to concern us, so they are ignored.

In the figure on the left, the entity above and to the left (connected by the red line) is too close according to our specifications, so we calculate a vector that takes us away from this entity, as shown by the red arrow. In the figure on the right, all of the entities are further away than the second circle; no entity falls within the distance specification. In this case, we choose the entity closest to our entity and move towards it (as indicated by the red arrow). In both of these cases, the desired move vector is summed together with the rest of the desired move vectors from the other behaviors.
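As a rough illustration of the separation rule just described — steer away from a neighbour that is too close, steer toward the nearest neighbour when everyone is too far — here is a minimal sketch. The vector type, function name, and thresholds are illustrative stand-ins, not the demo's actual classes:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Separation sketch: given our position and the position of the
// nearest visible group mate, return a desired-move vector.
//  - inside minDist  -> move directly away from the neighbour
//  - beyond maxDist  -> move toward the neighbour
//  - in between      -> no contribution from this behavior
Vec3 separation(const Vec3 &self, const Vec3 &nearest,
                float minDist, float maxDist) {
    Vec3 toNearest = { nearest.x - self.x,
                       nearest.y - self.y,
                       nearest.z - self.z };
    float dist = std::sqrt(toNearest.x * toNearest.x +
                           toNearest.y * toNearest.y +
                           toNearest.z * toNearest.z);
    if (dist < minDist)     // too close: flee
        return { -toNearest.x, -toNearest.y, -toNearest.z };
    if (dist > maxDist)     // too far: close the gap
        return toNearest;
    return { 0.0f, 0.0f, 0.0f };
}
```

As in the text, the vector returned here would be summed with the desired moves produced by the other behaviors.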

3.1.3 Cohesion

The cohesion behavior tries to emulate the tendency of flocks of birds and schools of fish to "stick together". The idea is that while we do not want the entities to run into each other, we want them to be in close proximity because this provides a degree of safety. In real schools of fish, for example, they stay together because it presents a larger combined front to predators, and conveys the impression that the group as a whole is a larger entity than the predator. If that concept fails, there is a secondary safety measure based on lowering the probability of being eaten if the entity is not alone: if a shark is chasing after two fish, the fish that will likely survive need only swim faster than his brethren.

Take another look at our group of entities. Again, we are concerned with the yellow filled entity in the center of the polar grid; the blue entities are those in the group that are in our range of interest, and the green entities are those outside that range. The cohesion algorithm collects all of the entities within our interest range and computes their average position. We then have our entity move towards this position, so the behavior determines a desired movement vector in the direction of the red arrow. As in the case of the separation behavior, this desired move vector is summed up with the other behaviors' desired moves.

3.1.4 Avoidance

The avoidance behavior directs the group to move away from things they do not want to be in proximity to or come into contact with. This behavior is quite interesting because it differs from the others in that it includes concepts like observation and environmental awareness. That is, the entity is paying attention to objects and/or events beyond just itself and its group. The idea is that the entities sense things other than their group mates, and if the thing they sense is not something they want to get too close to, they move away from it. Examples for schools of fish include predator fish that will eat them, boats and their whirling propeller blades, and other obstacles which they cannot swim through. In the ocean, size mostly determines who will be eaten by whom.

In the figure on the right, the large red triangle might represent a predator fish. Our entity finds that it is too close to this fish (in this case, sensing a predator fish at all would be considered too close), so the behavior determines a movement vector that takes our entity away from it. This vector will be added in with the rest of the desired movement vectors from all the other behaviors.
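The cohesion rule — average the positions of the visible group mates and steer toward that average — can be sketched as follows. The vector type and function name are illustrative placeholders, not the demo's classes:

```cpp
#include <vector>

struct V3 { float x, y, z; };

// Cohesion sketch: compute the average position of all visible group
// mates and return a desired-move vector from our position toward
// that group centre. Returns a zero vector when no mates are visible.
V3 cohesion(const V3 &self, const std::vector<V3> &visibleMates) {
    if (visibleMates.empty())
        return { 0.0f, 0.0f, 0.0f };
    V3 avg = { 0.0f, 0.0f, 0.0f };
    for (const V3 &m : visibleMates) {
        avg.x += m.x;
        avg.y += m.y;
        avg.z += m.z;
    }
    float n = static_cast<float>(visibleMates.size());
    avg.x /= n; avg.y /= n; avg.z /= n;
    // desired move: from our position toward the group's centre
    return { avg.x - self.x, avg.y - self.y, avg.z - self.z };
}
```

As with the other behaviors, this vector would be summed with the remaining behaviors' desired moves rather than applied on its own.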

3.1.5 Alignment

The alignment behavior attempts to keep all of the entities in the group aligned in approximately the same direction. As in the real world, all of the fish in a school tend to swim in the same direction. Consider the figure on the right: most of the entities are headed in the same general direction, but not exactly. The alignment behavior gathers up the entities in our range, averages their headings, and sets our entity's desired heading to be more like the average heading. This should not be done instantly, since it is not realistic for such a change to occur immediately. Over time, this method will gradually adjust our entity's direction so that it falls in line with the alignment of the rest of the group.

Another way the alignment behavior could work would be to take the nearest entity's heading rather than an average heading of all the nearby group mates (see figure on left). This is computationally less expensive, since we do not have to search through all of our nearby group mates and average up their headings. But it can also result in parts of the group veering off in much different directions, since we only pay attention to the entity closest to us.

3.1.6 Other Possible Behaviors

While the four behaviors discussed previously are the mainstay of flocking implementations, there are other behaviors that can be helpful. For instance, a cruising behavior is useful because sometimes (mostly due to the avoidance behavior making them run away from something) an entity may be separated from its group and forced to fend for itself. A cruising behavior decides what direction the entity would go if that entity were alone. In most cases these behaviors simply maintain the last heading the entity had and add some random variation. There are times where this behavior would actually get called upon even when there are group mates.
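The "gradual, not instant" adjustment described above amounts to blending only a fraction of the group's average heading into our own heading each update. A minimal sketch (the heading type, function name, and turn-rate parameter are illustrative assumptions, not the demo's API):

```cpp
// Alignment sketch: nudge the current heading toward the group's
// average heading. turnRate is a tuning value in [0, 1]; a value of
// 1 would snap instantly, which the text warns against, so small
// values are used to turn gradually over several updates.
struct Heading { float x, y; };

Heading align(const Heading &current, const Heading &groupAvg,
              float turnRate) {
    return { current.x + (groupAvg.x - current.x) * turnRate,
             current.y + (groupAvg.y - current.y) * turnRate };
}
```

Calling align repeatedly with a small turnRate converges on the group heading over time, which produces the smooth turning the text describes.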

Another example of an extra behavior is the "keep in sphere" behavior used in our demo. The basic premise is that, as long as the entity is within the bounds of the sphere, the behavior has no input. However, as soon as the entity leaves the sphere, the behavior puts in a request for a movement towards the center of the sphere. This behavior will be covered in more detail shortly.

3.2 The Flocking Demo

In keeping with the spirit of traditional flocking, the demo provided for this chapter consists of several schools of fish. There are schools of blue gill and large mouth bass swimming about, and the player takes on the role of a northern pike, hungry and ready to eat. The fish will react to the player if he gets too close; however, if the fish are touched, they will not disappear as if they have been "eaten" (that is left to you to add!). The main goal is to provide a means by which a flocking implementation can be seen, and to provide an interface through which the various parameters of each of the behaviors can be adjusted to see the results.

3.2.1 Design Strategies

Figure 3.9 shows the collaboration diagram for the cEntity class. The first thing to understand is how everything is laid out in the Flocking Demo. The cEntity class is basically a single thing in a group; other implementations might call it the 'boid' in a flock. Every cEntity is in a cGroup, even if it is the only one in the group. All of the cGroups are managed by the cWorld object, of which there is only one. Each cEntity also maintains a connection to its rendered representation as well, but we do not need to go into details about that.

Every cEntity has a list of cBehavior objects which it uses to determine how it wants to move around. It does not own these behaviors but shares them with all of the other entities. The behavior itself gets a reference to the current entity when it is applied; the algorithm does not need to maintain state, since the entity can do that, which is what makes the behavior shareable. As you will notice in Figure 3.10, not all of the behaviors are group related. Only the Alignment behavior, the Cohesion behavior, and the Separation behavior make use of the group information. The rest of the behaviors depend solely upon the entity being acted upon. We will go over each of these classes in detail shortly.

[Figure 3.10]

3.2.2 MFC and our Demo

[Figure 3.11]

As in the Pathfinding Demo, the Flocking Demo makes use of MFC. As you can see in Figure 3.11, there is a CFlockingDemoApp which has a current document (CFlockingDemoDoc) and some associated views (CFlockingDemoView and C3DView). In this particular application, the document does not hold anything. The C3DView holds the rendering engine, which keeps track of the cWorld object and its associated entities. The CFlockingDemoView, however, does have some things of interest. This view contains a tab control which holds a collection of property panels, one for each of the behaviors. Later we will see that the cEntity objects do not own the behaviors; they all share the same set because the behavior itself does not change. This allows the property panels to modify the behaviors globally for all cEntity objects. If you desired to create your own behaviors, this is where you would want to link them up to the interface to have your own property panel.

3.2.3 Our Implementation

Now let us examine the actual implementation of our demo.

    class cWorld
    {
    public:
        cWorld(void);
        virtual ~cWorld(void);

        void Add(cGroup &group);
        void Remove(cGroup &group);
        tGroupList &Groups() { return(mGroups); }

        virtual void Iterate(float timeDelta);
        virtual void Render(LPDIRECT3DDEVICE9 pDevice);

    protected:
        tGroupList mGroups;
    };

Above we have the declaration for the cWorld object. There is only one of these in existence at any given time. It holds all of the groups and is responsible for iterating them during each time step. Additionally, the world takes on the responsibility for ensuring that the groups render themselves. A default constructor is provided which initializes the group list properly; the group list at construction time is empty. The destructor is virtual to allow more specific derived classes to be polymorphically destructed properly, and it ensures that all of the groups are properly freed.

The cWorld object provides the ability to add and remove groups from its list. The world owns the list of groups that it holds, so once a group is added to the world, the world takes over responsibility for its lifetime. It will delete any of the groups in this list, so the group added need not be destroyed externally. If the group is removed, responsibility is relinquished and the group must be freed manually.

    tGroupList &Groups() { return(mGroups); }
    typedef vector<cGroup*> tGroupList;

The cWorld also provides an accessor to the list of groups it contains. This allows external objects to iterate across all of the items that exist in the world if necessary. The avoidance behavior makes use of this method.

    virtual void Iterate(float timeDelta);

The iterate method advances time by the time delta for each of the groups. Since this is done in an iterative fashion, the items that are last in the list gain the most benefit, as they have watched others go before them and know more about the true state of the world at the end of a time slice. However, this comes with a potential penalty, as they could be eaten before they get their turn.

    virtual void Render(LPDIRECT3DDEVICE9 pDevice);

The cWorld is responsible for ensuring that each of the groups is rendered when the time comes. It passes responsibility to each of the individual groups to make sure their contents are rendered correctly.

    class cGroup
    {
    public:
        cGroup(cWorld &world);
        virtual ~cGroup(void);

        void Add(cEntity &entity);
        void Remove(cEntity &entity);
        tEntityList &Entities() { return(mEntities); }

        virtual void Iterate(float timeDelta);
        virtual void Render(LPDIRECT3DDEVICE9 pDevice);

    protected:
        tEntityList mEntities;
        cWorld &mWorld;
    };

Here we have the definition of the cGroup class. This class holds one collection of cEntities that will be acting together. It also holds a reference to the cWorld which owns it so that it can gain access to the list of all of the other cGroup objects contained in the cWorld object. The cGroup provides no default constructor, as it needs a reference to the cWorld object that owns it in order to build its held reference. The destructor is virtual to allow for correct polymorphic destruction in the event that a new type of cGroup is derived.

Similar to the cWorld, the cGroup takes ownership of the cEntity objects added to its list. The cEntity objects contained in the mEntities vector are all owned by the cGroup object and will be destroyed upon destruction of the cGroup object. If a cEntity is removed from the list, the responsibility of releasing it is relinquished and must be handled externally.

The group provides access to its list of entities so that the other entities may know of their group mates.

The iterate method passes time for all of the entities in the group's list. This method will be called by the cWorld object that holds the cGroup. Likewise, the cGroup is responsible for ensuring that all of the entities it owns are rendered when the time comes; this method too will be called by the cWorld that owns the cGroup.
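The ownership rule used by both cWorld and cGroup — the container deletes whatever is still in its list, while Remove() hands ownership back to the caller — can be sketched with stand-in types (these are simplified placeholders, not the demo's actual classes):

```cpp
#include <algorithm>
#include <vector>

struct Group { /* stand-in for cGroup */ };

// Ownership sketch: Add() transfers ownership of the pointer to the
// world; Remove() transfers it back to the caller; the destructor
// frees whatever the world still owns.
struct World {
    std::vector<Group*> groups;

    void Add(Group *g) { groups.push_back(g); }

    void Remove(Group *g) {     // caller is responsible for g again
        groups.erase(std::remove(groups.begin(), groups.end(), g),
                     groups.end());
    }

    ~World() {                  // free everything we still own
        for (Group *g : groups)
            delete g;
    }
};
```

This mirrors the contract described above: a group added to the world need not be destroyed externally, but a removed group must be freed manually.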

Here we have the cEntity class. This is ultimately the 'thing' that the behaviors will actually be moving around. If the cBehaviors are the work horses of the flocking system, the cEntity is the coach driver. In our demo, each cEntity is one fish. All of the inline implementations were stripped from this listing because they are trivial and take up unnecessary space. Refer to the code for the full version.

class cEntity
{
public:
    cEntity ( cWorld &world,
              unsigned type,
              float senseRange,
              float maxSpeed,
              float desiredSpeed,
              float maxVelocityChange,
              float moveXScalar,
              float moveYScalar,
              float moveZScalar );
    virtual ~cEntity(void);

    virtual void        Iterate(float timeDelta);
    virtual void        Render(LPDIRECT3DDEVICE9 pDevice);
    void                Set3DRepresentation(RenderLib::CObject *object);

    unsigned            EntityType(void);
    unsigned            FriendMask(void);
    unsigned            EnemyMask(void);
    void                SetFriendMask(unsigned mask);

    tEntityDistList     &VisibleGroupMembers(void);
    tEntityDistList     &VisibleEnemies(void);

    void                AddBehavior(cBehavior &beh);
    void                RemoveBehavior(cBehavior &beh);
    void                SetCurrentGroup(cGroup *group);

    D3DXVECTOR3         Position(void);
    void                SetPosition(const D3DXVECTOR3 &pos);
    D3DXVECTOR3         Velocity(void);
    void                SetVelocity(const D3DXVECTOR3 &vel);
    D3DXQUATERNION      Orientation(void);
    void                SetOrientation(const D3DXQUATERNION &orient);
    D3DXVECTOR3         DesiredMove(void);
    void                SetDesiredMove(const D3DXVECTOR3 &move);
    float               MaxSpeed(void);
    float               DesiredSpeed(void);

protected:
    void                UpdateGroupVisibility(void);
    void                UpdateEnemyVisibility(void);
    bool                VisibilityTest(cEntity &otherEntity, float &dist);

    cGroup              *mCurrentGroup;
    tBehaviorList       mBehaviors;
    cWorld              &mWorld;
    unsigned            mEntityType;
    unsigned            mFriendMask;
    unsigned            mEnemyMask;
    RenderLib::CObject  *mObject;
    D3DXVECTOR3         mPosition;
    D3DXVECTOR3         mVelocity;
    D3DXQUATERNION      mOrientation;
    D3DXVECTOR3         mDesiredMoveVector;
    float               mSenseRange;
    float               mMaxSpeed;
    float               mDesiredSpeed;
    float               mMaxVelocityChange;
    float               mMoveXScalar;
    float               mMoveYScalar;
    float               mMoveZScalar;
    tEntityDistList     mVisibleGroupMembers;
    tEntityDistList     mVisibleEnemies;
};

The cEntity constructor takes quite a few parameters. First, it takes a reference to the world so that it can gain access to the list of all of the other groups in the world. Next, it takes a bitmask type to identify what it is. In our demo, there is a player type and a non-player type. This is so that the non-player fish know to run away from the player fish.

Next there is a sense range. This value represents how far the entity can see other entities. Any entity farther away than this will be completely ignored. Next are max speed and desired speed. The max speed is the absolute maximum speed the entity can travel, while the desired speed is the speed the entity prefers to travel. Next there is a max velocity change. This value clamps how quickly the entity can change speeds. It helps prevent instant directional changes and jumps, and keeps the entity moving smoothly.

The last three scalar values adjust the amount of movement in each of the cardinal directions. This is useful for clamping specific types of movement. Fish, for instance, tend to swim left and right more than up and down, so the y scalar gets decreased to clamp the amount of vertical movement.

virtual ~cEntity(void);

The destructor for cEntity is virtual so that derived types can be correctly destructed. This is mentioned for every destructor to emphasize its importance. If you do not understand the reason for this, it is highly recommended that you take the Game Institute course C++ Programming for Game Developers Module I. Note that the cEntity does not own the behaviors it uses, so it does not free them. It also does not own the render object it uses, since the scene graph owns that. Thus, because it does not own anything, the base class cEntity does not free anything.

virtual void Iterate(float timeDelta);

The Iterate method applies each of the behaviors which the cEntity has and is the core of the implementation. There are a few other notable methods which the cEntity class exposes, but they are actually called during the course of iteration, so we will talk about them as we come across them. Let us take a look at the actual implementation of this method.

void cEntity::Iterate(float timeDelta)
{
    mPosition += mVelocity * timeDelta;

    mVisibleGroupMembers.clear();
    mVisibleEnemies.clear();
    UpdateGroupVisibility();
    UpdateEnemyVisibility();

    tBehaviorList::iterator it;
    for (it = mBehaviors.begin(); it != mBehaviors.end(); it++)
    {
        cBehavior *beh = *it;
        beh->Iterate(timeDelta, *this);
    }

    float velChange = D3DXVec3Length(&mDesiredMoveVector);
    if (velChange > mMaxVelocityChange)
    {
        D3DXVec3Normalize(&mDesiredMoveVector, &mDesiredMoveVector);
        mDesiredMoveVector *= mMaxVelocityChange;
    }

    mVelocity += mDesiredMoveVector;
    mVelocity.x *= mMoveXScalar;
    mVelocity.y *= mMoveYScalar;
    mVelocity.z *= mMoveZScalar;

    float speed = D3DXVec3Length(&mVelocity);
    if (speed > mMaxSpeed)
    {
        D3DXVec3Normalize(&mVelocity, &mVelocity);
        mVelocity *= mMaxSpeed;
    }

    D3DXVECTOR3 vec;
    D3DXVec3Normalize(&vec, &mVelocity);
    float yaw = atan2f(vec.x, vec.z);
    float pitch = atan2f(-vec.y, sqrtf(vec.z*vec.z + vec.x*vec.x));
    D3DXQuaternionRotationYawPitchRoll(&mOrientation, yaw, pitch, 0.0f);
}

Again, comments were stripped from this listing to compact it as much as possible; refer to the code for the unedited version. As we examine each line of code, a general picture will appear.

mPosition += mVelocity * timeDelta;

The first action of this method is to update our position using the current velocity. What we are doing here is numerically integrating velocity to get our position using standard Euler integration. We use the time delta passed in to avoid frame dependence. This would have to be either the first or absolute last action; in our case, it is first.

mVisibleGroupMembers.clear();
mVisibleEnemies.clear();

Next, we clear our visibility lists. The cEntity class keeps track of which other entities it can see in its group, as well as any enemies it can see. We clear these lists every frame and build them anew.

UpdateGroupVisibility();

The next thing we do is update our group visibility list. Let us look at how that is done.

void cEntity::UpdateGroupVisibility()
{
    if (mCurrentGroup)
    {
        tEntityList &entities = mCurrentGroup->Entities();
        tEntityList::iterator it;
        for (it = entities.begin(); it != entities.end(); ++it)
        {
            cEntity *e = *it;

            // skip ourselves
            if (e == this) continue;

            float dist;
            if (VisibilityTest(*e, dist))
            {
                // keep this list sorted
                pair<float, cEntity*> thepair(dist, e);
                tEntityDistList::iterator pos =
                    upper_bound(mVisibleGroupMembers.begin(),
                                mVisibleGroupMembers.end(), thepair);
                mVisibleGroupMembers.insert(pos, thepair);
            }
        }
    }
}
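The Euler integration step can be checked in isolation. The sketch below is not the book's code: it uses a hypothetical plain-float Vec3 in place of D3DXVECTOR3, but performs the same p' = p + v * dt update as the first line of cEntity::Iterate.

```cpp
#include <cassert>

// Minimal stand-in for D3DXVECTOR3; illustration only.
struct Vec3 { float x, y, z; };

// One explicit Euler step: p' = p + v * dt, exactly what the first line
// of cEntity::Iterate does with the frame's time delta.
Vec3 eulerStep(const Vec3 &pos, const Vec3 &vel, float dt)
{
    Vec3 out = { pos.x + vel.x * dt,
                 pos.y + vel.y * dt,
                 pos.z + vel.z * dt };
    return out;
}
```

Because the measured time delta scales the velocity, an entity covers the same distance per second whether the game runs at 30 or 300 frames per second.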

Here we have the implementation of UpdateGroupVisibility. This method iterates across each of the members of this entity's group, and determines whether they can be seen by this entity.

if (mCurrentGroup)

First we confirm that we have a group. While all entities should be in a group, checking pointers is always a good idea.

tEntityList &entities = mCurrentGroup->Entities();
tEntityList::iterator it;
for (it = entities.begin(); it != entities.end(); ++it)

Next, we obtain the list of entities in our group, and begin iterating through them.

cEntity *e = *it;
// skip ourselves
if (e == this) continue;

If the entity we are currently iterating across is this entity, we skip it, since we do not consider ourselves for visibility.

float dist;
if (VisibilityTest(*e, dist))

If the current entity is not this entity, we perform a visibility test.

// keep this list sorted
pair<float, cEntity*> thepair(dist, e);
tEntityDistList::iterator pos = upper_bound(mVisibleGroupMembers.begin(),
                                            mVisibleGroupMembers.end(),
                                            thepair);
mVisibleGroupMembers.insert(pos, thepair);

The visibility test itself is very straightforward. We simply determine whether the entity in question's position is within our sense range. If it is, we can see it; otherwise, we cannot. This method could easily be modified to provide a cone of vision rather than a simple distance check.

bool cEntity::VisibilityTest(cEntity &otherEntity, float &dist)
{
    // simple test for now; are they close enough that we can "sense" them
    D3DXVECTOR3 distVec = otherEntity.Position() - Position();
    dist = D3DXVec3Length(&distVec);

    if (dist < mSenseRange) return(true);
    return(false);
}
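The text mentions that the visibility test could be extended to a cone of vision. A hedged sketch of that extension (not part of the demo's code; plain floats stand in for D3DXVECTOR3, and coneVisibilityTest is a hypothetical name) combines the range check with an angle check via the dot product:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float len(const Vec3 &a) { return sqrtf(dot(a, a)); }

// Hypothetical cone test: 'other' is visible if it is within senseRange AND
// within halfAngle radians of the viewer's facing direction.
bool coneVisibilityTest(const Vec3 &viewerPos, const Vec3 &viewerDir,
                        const Vec3 &otherPos, float senseRange, float halfAngle)
{
    Vec3 toOther = { otherPos.x - viewerPos.x,
                     otherPos.y - viewerPos.y,
                     otherPos.z - viewerPos.z };
    float dist = len(toOther);
    if (dist >= senseRange) return false;
    if (dist == 0.0f) return true;                // same spot: trivially visible

    // cosine of the angle between the facing direction and direction-to-other
    float cosAngle = dot(viewerDir, toOther) / (len(viewerDir) * dist);
    return cosAngle >= cosf(halfAngle);
}
```

An entity directly behind the viewer fails the angle check even when it is well inside the sense range, which is closer to how real fish perceive their surroundings.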

After we determine an entity to be visible, we keep track of its actual distance to us in a pair, and add it to the visible members vector. Notice that the vector is kept sorted. This allows us to easily acquire the closest or farthest entity in the group. We can also perform O(log n) searches on the list to find a specific entity if we desire.

UpdateEnemyVisibility();

After we have updated our group's visibility list, we update our enemy visibility list. Let us take a look at how that is done, since it has a notable difference.

void cEntity::UpdateEnemyVisibility()
{
    tGroupList &groups = mWorld.Groups();
    tGroupList::iterator git;
    for (git = groups.begin(); git != groups.end(); ++git)
    {
        cGroup *group = *git;
        tEntityList &entities = group->Entities();
        tEntityList::iterator it;
        for (it = entities.begin(); it != entities.end(); ++it)
        {
            cEntity *e = *it;

            // skip friendly groups
            if ((e->EntityType() & EnemyMask()) == 0) break;

            float dist;
            if (VisibilityTest(*e, dist))
            {
                // keep this list sorted
                pair<float, cEntity*> thepair(dist, e);
                tEntityDistList::iterator pos =
                    upper_bound(mVisibleEnemies.begin(),
                                mVisibleEnemies.end(), thepair);
                mVisibleEnemies.insert(pos, thepair);
            }
        }
    }
}

The UpdateEnemyVisibility method obtains the list of groups from the world, and iterates over all of the entities in all of the groups in search of enemy entities.

tGroupList &groups = mWorld.Groups();
tGroupList::iterator git;
for (git = groups.begin(); git != groups.end(); ++git)

First, the list of groups is obtained from the world, and iteration begins.
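The sorted-insert idiom used for both visibility lists relies on std::pair comparing by its first element (the distance) before its second. A self-contained sketch, with plain entity IDs standing in for cEntity pointers:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

using namespace std;

// (distance, entity id) pairs; pair<> compares by first (the distance),
// which is what keeps the visibility lists sorted by range.
typedef vector< pair<float, int> > tDistList;

// Mirror of the book's upper_bound insert: find the first position whose
// distance is greater, and insert there, preserving sorted order.
void sortedInsert(tDistList &list, float dist, int id)
{
    pair<float, int> thepair(dist, id);
    tDistList::iterator pos = upper_bound(list.begin(), list.end(), thepair);
    list.insert(pos, thepair);
}
```

After inserting in any order, front() always holds the closest entity and back() the farthest, and binary searches over the list run in O(log n).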

cGroup *group = *git;
tEntityList &entities = group->Entities();
tEntityList::iterator it;
for (it = entities.begin(); it != entities.end(); ++it)

For each group, the list of entities within the group is obtained, and that list is iterated over.

cEntity *e = *it;
// skip friendly groups
if ((e->EntityType() & EnemyMask()) == 0) break;

Here is the important part: the mask of the entity in question is compared with this entity's enemy mask to determine whether it should be considered an enemy. On the assumption that all of the entities within the same group are of the same type (which is safe for this demo, but is not necessarily always the case), if the first entity in the group is not an enemy, the entire group is not an enemy and can be ignored. This is purely an optimization.

float dist;
if (VisibilityTest(*e, dist))
{
    pair<float, cEntity*> thepair(dist, e);
    tEntityDistList::iterator pos = upper_bound(mVisibleEnemies.begin(),
                                                mVisibleEnemies.end(),
                                                thepair);
    mVisibleEnemies.insert(pos, thepair);
}

For all those entities determined to be enemies, we perform the same visibility test we did for the group mates, and keep track of the distances to the enemies in a sorted list (just as we did with the visible group members list).

Once we have updated our visible friends and enemies lists, we iterate across our list of behaviors and apply each one. We will go over the implementations of the individual behaviors shortly.

tBehaviorList::iterator it;
for (it = mBehaviors.begin(); it != mBehaviors.end(); it++)
{
    cBehavior *beh = *it;
    beh->Iterate(timeDelta, *this);
}

After we have applied all of our behaviors, we begin some post-processing on our desired move vector to bring it in line. First, we determine the length of the desired move vector and ensure it is not bigger than our max velocity change. If it is, we normalize the desired move vector and set its length (by scaling it) to be the maximum velocity change. In effect, we clamp the vector's length to that of our max velocity change.

float velChange = D3DXVec3Length(&mDesiredMoveVector);
if (velChange > mMaxVelocityChange)
{
    D3DXVec3Normalize(&mDesiredMoveVector, &mDesiredMoveVector);
    mDesiredMoveVector *= mMaxVelocityChange;
}
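The friend/enemy decision is a single bitwise AND between the other entity's type and this entity's enemy mask. A small sketch; the type constants here are hypothetical names for illustration (the demo simply has a player and a non-player type):

```cpp
// Hypothetical type bits for illustration; the demo uses one player type
// and one non-player type.
const unsigned TYPE_PLAYER    = 1 << 0;
const unsigned TYPE_NONPLAYER = 1 << 1;

// An entity is an enemy if any of its type bits appear in our enemy mask,
// mirroring the (e->EntityType() & EnemyMask()) test above.
bool isEnemy(unsigned otherType, unsigned myEnemyMask)
{
    return (otherType & myEnemyMask) != 0;
}
```

Because masks are plain bit sets, one mask can mark several types as enemies at once, which is why the test is an AND rather than an equality comparison.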

mVelocity += mDesiredMoveVector;

Next, we add our desired move vector to our current velocity vector. This will nudge our current velocity vector in the direction of the new desired move.

mVelocity.x *= mMoveXScalar;
mVelocity.y *= mMoveYScalar;
mVelocity.z *= mMoveZScalar;

We then apply our Cartesian move scalars. In the case of our demo, we reduce the amount of vertical movement (y axis) so as to produce more lifelike fish movement.

float speed = D3DXVec3Length(&mVelocity);
if (speed > mMaxSpeed)
{
    D3DXVec3Normalize(&mVelocity, &mVelocity);
    mVelocity *= mMaxSpeed;
}

Next, we effectively clamp our velocity to be within the bounds of our maximum speed. We do this by first getting the length of our velocity vector, and if that length is greater than the max speed, we normalize the velocity vector and scale the result by our max speed.

D3DXVECTOR3 vec;
D3DXVec3Normalize(&vec, &mVelocity);
float yaw = atan2f(vec.x, vec.z);
float pitch = atan2f(-vec.y, sqrtf(vec.z*vec.z + vec.x*vec.x));
D3DXQuaternionRotationYawPitchRoll(&mOrientation, yaw, pitch, 0.0f);

Lastly, we perform some trigonometric calculations to convert our velocity vector into a quaternion to hold our orientation. Afterwards, we have a new velocity and orientation to apply on the next iteration to get a new position.

Before we discuss the movement behaviors, there is one last class definition to investigate.

class cBehavior
{
public:
                   cBehavior(void) : mGain(1.0f) {}
    virtual        ~cBehavior(void) {}

    virtual void   Iterate(float timeDelta, cEntity &entity) = 0;

    float          Gain(void) { return(mGain); }
    void           SetGain(float gain) { mGain = gain; }

    virtual string Name(void) { return("Base Behavior"); }

private:
    float          mGain;
};
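The yaw/pitch extraction can be verified on its own. This sketch repeats the atan2f-based formulas with plain floats (no D3DX); for example, a unit vector pointing down +x yields a yaw of pi/2 and a pitch of 0.

```cpp
#include <cmath>

// Derive yaw and pitch from a normalized direction vector, matching the
// atan2f-based formulas used at the end of cEntity::Iterate.
void yawPitchFromDirection(float x, float y, float z, float &yaw, float &pitch)
{
    yaw   = atan2f(x, z);                       // rotation about the y axis
    pitch = atan2f(-y, sqrtf(z * z + x * x));   // tilt above/below the horizon
}
```

Roll is left at zero, which is why the demo passes 0.0f as the third angle to D3DXQuaternionRotationYawPitchRoll: a fish banks only through yaw and pitch here.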

Here we have the base class cBehavior. This is the interface through which all of the behaviors do their work. Let us go over it in detail.

cBehavior(void) : mGain(1.0f) {}
virtual ~cBehavior(void) {}

The default constructor initializes the gain of the behavior to 1.0. The mGain variable is used to modify the degree to which the behavior will modify the desired move of the entity in question. The destructor is virtual to allow for correct polymorphic destruction of derived classes.

virtual void Iterate(float timeDelta, cEntity &entity) = 0;

The pure virtual Iterate method takes a time delta for the amount of time that has passed since the last call to this method, and the entity to which to apply the behavior's movement decisions.

float Gain(void) { return(mGain); }
void  SetGain(float gain) { mGain = gain; }

The class provides accessors to allow the gain to be modified post construction. The gain value determines the weight with which the result of the behavior is applied to the desired move of the entity in question. The default is 1.0, which is full effect; 0.5 would be half effect, and 2.0 would be double effect. The user interface only allows fractional gains.

virtual string Name(void) { return("Base Behavior"); }

The Name method is used by the user interface to properly name the tab of the tab control.

Now that we have a good understanding of the framework of the application, let us take a look at the implementation of the behaviors in the demo.
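Since each behavior adds its adjustment scaled by its own gain into the shared desired-move vector, the final desired move is a weighted sum of all behavior outputs. A minimal sketch of that accumulation (plain floats in place of D3DXVECTOR3; the behavior vectors are made-up values):

```cpp
struct Vec3 { float x, y, z; };

// Accumulate one behavior's adjustment into the desired move, weighted by
// that behavior's gain (1.0 = full effect, 0.5 = half effect).
void applyBehavior(Vec3 &desiredMove, const Vec3 &adjustment, float gain)
{
    desiredMove.x += adjustment.x * gain;
    desiredMove.y += adjustment.y * gain;
    desiredMove.z += adjustment.z * gain;
}
```

Tuning a flock then reduces to adjusting a handful of gain sliders rather than rewriting any behavior code, which is exactly what the demo's tab control exposes.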

3.2.4 Separation

class cSeparationBehavior : public cGroupBehavior
{
public:
                 cSeparationBehavior(float sepDist, float minPercent, float maxPercent);
    virtual      ~cSeparationBehavior(void);

    virtual void Iterate(float timeDelta, cEntity &entity);

protected:
    float        mSeparationDistance;
    float        mMinSeparationPercentage;
    float        mMaxSeparationPercentage;
};

Here we have the declaration for the separation behavior as it exists in our demo. For the sake of brevity, the accessors have been removed from this listing. See the code for the unabridged version.

The separation behavior takes three parameters upon construction. The first is the separation distance that the behavior will try to maintain between entities in the group. The goal of the behavior is to ensure all of the entities in the group always stay exactly this distance away from each other; this applies to entities too close as well as too far away. The min percent and max percent values are the minimum and maximum separation percentages that will be applied, to limit large changes. The algorithm will determine the actual separation percentage and then clamp it to these values. As with the base class, the Iterate method takes the time passed since the last iteration, and the entity to which to apply the movement. Let us take a closer look.

void cSeparationBehavior::Iterate(float timeDelta, cEntity &entity)
{
    tEntityDistList &groupMembers = entity.VisibleGroupMembers();
    if (groupMembers.empty()) return;

    cEntity &nearestGroupMember = *groupMembers.front().second;
    float distanceToClosestGroupMember = groupMembers.front().first;

    D3DXVECTOR3 desiredMoveAdj = nearestGroupMember.Position() - entity.Position();

    float separationPercentage = distanceToClosestGroupMember / mSeparationDistance;
    if (separationPercentage < mMinSeparationPercentage)
        separationPercentage = mMinSeparationPercentage;
    if (separationPercentage > mMaxSeparationPercentage)
        separationPercentage = mMaxSeparationPercentage;

    D3DXVECTOR3 currentDesiredMove = entity.DesiredMove();

    if (distanceToClosestGroupMember < mSeparationDistance)
    {
        D3DXVec3Normalize(&desiredMoveAdj, &desiredMoveAdj);
        desiredMoveAdj *= -separationPercentage;
        currentDesiredMove += desiredMoveAdj * Gain();
    }
    else if (distanceToClosestGroupMember > mSeparationDistance)
    {
        D3DXVec3Normalize(&desiredMoveAdj, &desiredMoveAdj);
        desiredMoveAdj *= separationPercentage;
        currentDesiredMove += desiredMoveAdj * Gain();
    }

    entity.SetDesiredMove(currentDesiredMove);
}

The comments have been stripped out for brevity; see the code for the unabridged version. Let us go over this in detail.

tEntityDistList &groupMembers = entity.VisibleGroupMembers();
if (groupMembers.empty()) return;

Since this is a group-based behavior, we first grab the list of visible group members. If the list is empty, we cannot do anything, so we return.

cEntity &nearestGroupMember = *groupMembers.front().second;
float distanceToClosestGroupMember = groupMembers.front().first;
D3DXVECTOR3 desiredMoveAdj = nearestGroupMember.Position() - entity.Position();

Next, we get the nearest group member which, on account of sorting, should be at the front of the group members list. We then get the pre-computed distance to that group member. We also compute a vector from this entity's position to the closest group member.

float separationPercentage = distanceToClosestGroupMember / mSeparationDistance;
if (separationPercentage < mMinSeparationPercentage)
    separationPercentage = mMinSeparationPercentage;
if (separationPercentage > mMaxSeparationPercentage)
    separationPercentage = mMaxSeparationPercentage;

Next we compute the separation percentage as the ratio between the actual distance to the closest group member and the desired separation distance.

We clamp the computed separation percentage to our minimum and maximum separation percentages.

D3DXVECTOR3 currentDesiredMove = entity.DesiredMove();

We grab the entity's current desired move.

if (distanceToClosestGroupMember < mSeparationDistance)
{
    D3DXVec3Normalize(&desiredMoveAdj, &desiredMoveAdj);
    desiredMoveAdj *= -separationPercentage;
    currentDesiredMove += desiredMoveAdj * Gain();
}

If the distance to the closest member is less than our desired separation distance, we are too close. Thus, we normalize the vector from this entity to the closest member, and scale it by our negated separation percentage. Why negated? The vector we computed was from this entity to the closest member, and we want a vector going the other way, so we negate it. We then scale the desired move adjustment vector by our gain, and add the result to the current desired move.

else if (distanceToClosestGroupMember > mSeparationDistance)
{
    D3DXVec3Normalize(&desiredMoveAdj, &desiredMoveAdj);
    desiredMoveAdj *= separationPercentage;
    currentDesiredMove += desiredMoveAdj * Gain();
}

If the distance to the closest member was greater than the separation distance, then we are too far away. Thus, we normalize the vector from this entity to the closest member, and scale it by our separation percentage. We then scale the desired move adjustment by our gain, and add the result to the current desired move for the entity. If we were exactly at the separation distance, both of the if statements would fail, and we would set the desired move to the local value we had cached, so nothing would change.

Why do we scale our desired movement vector by our separation percentage? The idea is based on the law of diminishing returns. If you are a great distance from the desired separation distance, you move faster to get there. If you are very close, you move very slowly. This helps settle us into the range we want rather than overshooting all the time.

entity.SetDesiredMove(currentDesiredMove);

Lastly, we set our entity's desired move to be the newly computed desired move.

3.2.5 Avoidance

class cAvoidanceBehavior : public cBehavior
{
public:
                 cAvoidanceBehavior(float avoidDist, float speed);
    virtual      ~cAvoidanceBehavior(void);

    virtual void Iterate(float timeDelta, cEntity &entity);

protected:
    float        mAvoidanceDistance;
    float        mAvoidanceSpeed;
};

The avoidance behavior needs only two parameters: the avoid distance and a speed. The avoid distance is the distance at which the threat will actively be avoided, while the speed is the rate at which the entity will flee from the threat. As with all of the behaviors, the Iterate method takes the time since the last iteration, and a reference to the entity to which the movement is to be applied. The comments have been removed from this listing for brevity; see the code for the full listing.

void cAvoidanceBehavior::Iterate(float timeDelta, cEntity &entity)
{
    tEntityDistList &enemies = entity.VisibleEnemies();
    if (enemies.empty()) return;

    cEntity &nearestEnemy = *enemies.front().second;
    float nearestEnemyDist = enemies.front().first;

    // head away from the enemy
    if (nearestEnemyDist < mAvoidanceDistance)
    {
        D3DXVECTOR3 desiredMoveAdj = entity.Position() - nearestEnemy.Position();
        D3DXVec3Normalize(&desiredMoveAdj, &desiredMoveAdj);
        desiredMoveAdj *= mAvoidanceSpeed;

        D3DXVECTOR3 currentDesiredMove = entity.DesiredMove();

        // move away
        currentDesiredMove += desiredMoveAdj * Gain();
        entity.SetDesiredMove(currentDesiredMove);
    }
}

The implementation is fairly straightforward: we find the closest visible enemy, and run the other way. Let us go over this implementation in detail.

tEntityDistList &enemies = entity.VisibleEnemies();
if (enemies.empty()) return;

First and foremost, we get the list of visible enemies. If there are no visible enemies, we do nothing.

cEntity &nearestEnemy = *enemies.front().second;
float nearestEnemyDist = enemies.front().first;

If there are visible enemies, we get the first one in the list, which has been sorted, and we fetch the distance to that enemy.

if (nearestEnemyDist < mAvoidanceDistance)

If that distance is less than our avoidance distance, we run away.

D3DXVECTOR3 desiredMoveAdj = entity.Position() - nearestEnemy.Position();
D3DXVec3Normalize(&desiredMoveAdj, &desiredMoveAdj);
desiredMoveAdj *= mAvoidanceSpeed;

First we compute a vector from the enemy to us. We then normalize our enemy-to-us vector, and scale it by our movement speed.

D3DXVECTOR3 currentDesiredMove = entity.DesiredMove();
currentDesiredMove += desiredMoveAdj * Gain();
entity.SetDesiredMove(currentDesiredMove);

Then we grab our current desired move vector, apply our gain, and add the result to the current desired move. Lastly, we set the current desired move to our newly computed desired move.

3.2.6 Cohesion

class cCohesionBehavior : public cGroupBehavior
{
public:
                 cCohesionBehavior(float turnRate);
    virtual      ~cCohesionBehavior(void);

    virtual void Iterate(float timeDelta, cEntity &entity);

protected:
    float        mTurnRate;
};

The cohesion behavior takes a single parameter: the turn rate. The turn rate is the maximum rate at which the entity will change direction in order to head towards the average center of the group. The Iterate method takes the time passed since the last iteration, and a reference to the entity for which to generate a movement. As usual, the comments have been removed from this listing for brevity; see the code for the full listing.

void cCohesionBehavior::Iterate(float timeDelta, cEntity &entity)
{
    tEntityDistList &groupMembers = entity.VisibleGroupMembers();
    if (groupMembers.empty()) return;

    // compute center of mass of the group
    D3DXVECTOR3 groupCenterOfMass(0.0f, 0.0f, 0.0f);
    tEntityDistList::iterator it;
    for (it = groupMembers.begin(); it != groupMembers.end(); ++it)
    {
        cEntity *e = (*it).second;
        groupCenterOfMass += e->Position();
    }
    groupCenterOfMass /= (float)groupMembers.size();

    // move towards the center of the group
    D3DXVECTOR3 desiredMoveAdj = groupCenterOfMass - entity.Position();
    D3DXVec3Normalize(&desiredMoveAdj, &desiredMoveAdj);
    desiredMoveAdj *= mTurnRate;

    D3DXVECTOR3 currentDesiredMove = entity.DesiredMove();
    currentDesiredMove += desiredMoveAdj * Gain();
    entity.SetDesiredMove(currentDesiredMove);
}

Let us go over this in detail.

The cohesion behavior iterates across the list of visible group members, and computes the group's average center of mass, or center of perceived mass. It then generates a vector towards this center.

tEntityDistList &groupMembers = entity.VisibleGroupMembers();
if (groupMembers.empty()) return;

First, the list of visible group members is obtained. If there are no visible group members, we return, as there is no group center of mass.

D3DXVECTOR3 groupCenterOfMass(0.0f, 0.0f, 0.0f);
tEntityDistList::iterator it;
for (it = groupMembers.begin(); it != groupMembers.end(); ++it)
{
    cEntity *e = (*it).second;
    groupCenterOfMass += e->Position();
}
groupCenterOfMass /= (float)groupMembers.size();

Next, we iterate across the visible group members list and sum up the positions of each member. We then divide by the number of group members that were visible to obtain the average group position.

D3DXVECTOR3 desiredMoveAdj = groupCenterOfMass - entity.Position();
D3DXVec3Normalize(&desiredMoveAdj, &desiredMoveAdj);
desiredMoveAdj *= mTurnRate;

We then compute a vector towards that center, normalize the us-to-center vector, and scale it by our maximum turn rate.

D3DXVECTOR3 currentDesiredMove = entity.DesiredMove();
currentDesiredMove += desiredMoveAdj * Gain();
entity.SetDesiredMove(currentDesiredMove);

We obtain our current desired move, apply our gain, and add the result to our current desired move. Finally, we set our entity's desired move to our newly computed desired move.

3.2.7 Alignment

class cAlignmentBehavior : public cGroupBehavior
{
public:
                 cAlignmentBehavior(float turnRate);
    virtual      ~cAlignmentBehavior(void);

    virtual void Iterate(float timeDelta, cEntity &entity);

protected:
    float        mTurnRate;
};

Like the cohesion behavior, the alignment behavior takes only a single parameter: the turn rate. The turn rate parameter limits the rate at which the entity will change direction in an effort to match heading with its visible group mates. As in all the other behaviors, the Iterate method takes the time passed since the last iteration, and the entity upon which to apply the desired move. The comments have been removed from this listing for brevity; look to the code for the full listing.

void cAlignmentBehavior::Iterate(float timeDelta, cEntity &entity)
{
    tEntityDistList &groupMembers = entity.VisibleGroupMembers();
    if (groupMembers.empty()) return;

    cEntity &nearestGroupMember = *groupMembers.front().second;

    // match the heading of our closest group member
    D3DXVECTOR3 desiredMoveAdj = nearestGroupMember.Velocity();
    D3DXVec3Normalize(&desiredMoveAdj, &desiredMoveAdj);
    desiredMoveAdj *= mTurnRate;

    D3DXVECTOR3 currentDesiredMove = entity.DesiredMove();
    currentDesiredMove += desiredMoveAdj * Gain();
    entity.SetDesiredMove(currentDesiredMove);
}

In this implementation, we elected to use the heading of the nearest group member to align an entity, rather than an average of all of the visible group members. It would be simple enough to modify it to average the headings of all of the visible group members and use that rather than only the closest member. You can try this as an exercise if you wish. Let us examine this implementation in detail.

tEntityDistList &groupMembers = entity.VisibleGroupMembers();
if (groupMembers.empty()) return;

First, we get the list of visible group members. If that list is empty, we return; we cannot align without other members to align with.

cEntity &nearestGroupMember = *groupMembers.front().second;
D3DXVECTOR3 desiredMoveAdj = nearestGroupMember.Velocity();

Next we obtain the closest group member by virtue of our sorted list, and obtain its velocity. The velocity embodies the direction the member is facing. This is now our desired move adjustment.

D3DXVec3Normalize(&desiredMoveAdj, &desiredMoveAdj);
desiredMoveAdj *= mTurnRate;

Next we normalize the velocity obtained from our closest group member, and scale it by our turn rate.

D3DXVECTOR3 currentDesiredMove = entity.DesiredMove();
currentDesiredMove += desiredMoveAdj * Gain();
entity.SetDesiredMove(currentDesiredMove);

We then obtain our current desired move, apply our gain, and add the result to the current desired move. Finally, we set this entity's desired move to be our newly computed desired move.
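The averaged-heading variant suggested as an exercise can be sketched as follows: sum the velocities of all visible group members and renormalize, instead of copying only the nearest member's heading. This is a hypothetical, framework-free version (plain Vec3 instead of D3DXVECTOR3), not the demo's code:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Average the headings of all visible group members (the exercise suggested
// in the text), rather than using only the nearest member's heading.
Vec3 averageHeading(const std::vector<Vec3> &velocities)
{
    Vec3 sum = { 0.0f, 0.0f, 0.0f };
    for (size_t i = 0; i < velocities.size(); ++i)
    {
        sum.x += velocities[i].x;
        sum.y += velocities[i].y;
        sum.z += velocities[i].z;
    }
    float length = sqrtf(sum.x*sum.x + sum.y*sum.y + sum.z*sum.z);
    if (length > 0.0f) { sum.x /= length; sum.y /= length; sum.z /= length; }
    return sum;
}
```

Averaging smooths out a single erratic neighbor: two members heading (1, 0, 1) and (-1, 0, 1) cancel in x, and the averaged heading points straight down +z.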

3.2.8 Cruising

We have now covered all of the normal behaviors used in flocking applications, but there are more behaviors yet to investigate in our demo. The first is the cruising behavior which, as mentioned earlier, is the behavior that decides where the entity would go if the decision were based solely on that entity. This is useful when it cannot see any of its group mates and needs to decide where to go.

class cCruisingBehavior : public cBehavior
{
public:
                 cCruisingBehavior ( float randMoveXChance,
                                     float randMoveYChance,
                                     float randMoveZChance,
                                     float minRandomMove,
                                     float maxRateChange,
                                     float minRateChange );
    virtual      ~cCruisingBehavior(void);

    virtual void Iterate(float timeDelta, cEntity &entity);

protected:
    float        mRandMoveXChance;
    float        mRandMoveYChance;
    float        mRandMoveZChance;
    float        mMinRandomMove;
    float        mMaxRateChange;
    float        mMinRateChange;
};

The cruising behavior takes quite a few parameters. The first three are the percent chances that the entity will decide to move in each of the cardinal directions. Next, we have the min random move, which is the minimum amount the entity will move in the direction chosen. Last, we have the max and min rate changes, which limit the amount the entity is allowed to change direction. The comments have been removed from this listing for brevity; look to the code for the full listing. Let us go over this behavior in detail.

The Iterate method takes the time passed since the last iteration, and a reference to the entity upon which the cruising shall occur. The algorithm performs a straightforward random pick of which direction to go next, and applies it. Let us have a look at the implementation.

void cCruisingBehavior::Iterate(float timeDelta, cEntity &entity)
{
    // determine how fast we are going vs how fast
    // we would like to be going
    float currentSpeed = D3DXVec3Length(&entity.Velocity());
    float percentDesiredSpeed = fabs((currentSpeed - entity.DesiredSpeed()) /
                                     entity.MaxSpeed());
    float signum = (currentSpeed - entity.DesiredSpeed()) > 0 ? -1.0f : 1.0f;

    // clamp rate changes
    if (percentDesiredSpeed < mMinRateChange)
        percentDesiredSpeed = mMinRateChange;
    if (percentDesiredSpeed > mMaxRateChange)
        percentDesiredSpeed = mMaxRateChange;

    // add some random movement
    D3DXVECTOR3 desiredMoveAdj(0.0f, 0.0f, 0.0f);
    float randmove = (float)rand() / (float)RAND_MAX;
    if (randmove < mRandMoveXChance)
        desiredMoveAdj.x += mMinRandomMove * signum;
    else if (randmove < mRandMoveYChance)
        desiredMoveAdj.y += mMinRandomMove * signum;
    else if (randmove < mRandMoveZChance)
        desiredMoveAdj.z += mMinRandomMove * signum;

    D3DXVec3Normalize(&desiredMoveAdj, &desiredMoveAdj);
    desiredMoveAdj *= mMinRateChange * signum;

    D3DXVECTOR3 currentDesiredMove = entity.DesiredMove();
    currentDesiredMove += desiredMoveAdj * Gain();
    entity.SetDesiredMove(currentDesiredMove);
}

First, we determine how fast we are going at the moment. We then determine what percentage of the entity's desired speed we have achieved, as a ratio of its max speed.

Next we establish whether we are going faster or slower than our desired speed, and store off a signum multiplier. By doing this, we can easily flip the direction of any directional vectors we create.

    float signum =
        (currentSpeed - entity.DesiredSpeed()) > 0 ? -1.0f : 1.0f;

Next, we clamp the percent desired speed to our rate limits. This will prevent us from stopping instantly or jumping to full tilt from a standstill.

    if (percentDesiredSpeed < mMinRateChange)
        percentDesiredSpeed = mMinRateChange;
    if (percentDesiredSpeed > mMaxRateChange)
        percentDesiredSpeed = mMaxRateChange;

Now we decide which direction we want to move by generating a random number and comparing it against our percent chances per cardinal direction. In the case of the demo, the default is to limit the chance to start going up or down in the y dimension to keep the motion more realistic.

    float randmove = (float)rand() / (float)RAND_MAX;

    if (randmove < mRandMoveXChance)
        desiredMoveAdj.x += mMinRandomMove * signum;
    else if (randmove < mRandMoveYChance)
        desiredMoveAdj.y += mMinRandomMove * signum;
    else if (randmove < mRandMoveZChance)
        desiredMoveAdj.z += mMinRandomMove * signum;

We then normalize our randomly generated move vector, and scale it by our minimum rate change to get us going in the direction desired. We multiply by our signum for some extra randomness.

    D3DXVec3Normalize(&desiredMoveAdj, &desiredMoveAdj);
    desiredMoveAdj *= mMinRateChange * signum;

Finally, we grab our current desired move, apply our gain, and add the result to the current desired move. We then set our entity's desired move using our newly calculated desired move.

    D3DXVECTOR3 currentDesiredMove = entity.DesiredMove();
    currentDesiredMove += desiredMoveAdj * Gain();
    entity.SetDesiredMove(currentDesiredMove);
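It is worth noting that the cascading else-if comparisons above test one uniform sample against increasing thresholds, so the three chances are cumulative rather than independent. The direction-picking logic can be exercised in isolation with a small sketch (the function name and integer axis codes are our own, not from the demo):

```cpp
// Mirrors the demo's cascading pick: one uniform sample in [0,1) is
// tested against each chance threshold in turn, so each later chance
// is effectively a cumulative bound, not an independent probability.
// Returns 0 = x axis, 1 = y axis, 2 = z axis, -1 = no move this pass.
int PickCruiseAxis(float randmove, float xChance, float yChance, float zChance)
{
    if (randmove < xChance)       return 0;   // move along x
    else if (randmove < yChance)  return 1;   // move along y
    else if (randmove < zChance)  return 2;   // move along z
    return -1;                                // keep current heading
}
```

For example, with thresholds 0.3, 0.4, and 0.7, a sample of 0.35 selects the y axis even though the "y chance" reads as 0.4; keeping the y threshold close to the x threshold is one way the demo can keep vertical wandering rare.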

3.2.9 Stay Within Sphere

The last behavior implemented in this demo is the Stay Within Sphere behavior. This behavior was written to prevent the necessity of having to teleport the fish to keep them around the player's location. Thus, they will just turn around when they get too far away. Let us go over this in detail. The comments have been removed from this listing for brevity. Look to the code for the full listing.

class cStayWithinSphereBehavior : public cBehavior
{
public:
    cStayWithinSphereBehavior(const D3DXVECTOR3 &center, float radius);
    virtual ~cStayWithinSphereBehavior(void);

    virtual void    Iterate(float timeDelta, cEntity &entity);

protected:
    D3DXVECTOR3     mCenter;
    float           mRadius;
};

The stay within sphere behavior takes a center for the sphere and a radius for the sphere's radius. This behavior uses the Iterate method, which takes the time from the last iteration and the entity to keep within the sphere. Let us take a look at the implementation.

void cStayWithinSphereBehavior::Iterate(float timeDelta, cEntity &entity)
{
    D3DXVECTOR3 toCenter = mCenter - entity.Position();
    float dist = D3DXVec3Length(&toCenter);

    if (dist > mRadius)
    {
        D3DXVECTOR3 desiredMoveAdj = toCenter / dist;
        D3DXVec3Normalize(&desiredMoveAdj, &desiredMoveAdj);
        desiredMoveAdj *= entity.MaxSpeed();

        D3DXVECTOR3 currentDesiredMove = entity.DesiredMove();
        currentDesiredMove += desiredMoveAdj * Gain();
        entity.SetDesiredMove(currentDesiredMove);
    }
}

The behavior is pretty simple. It determines the distance from the center of the sphere to the entity. If that distance puts the entity outside the sphere, it moves the entity towards the center of the sphere. Let us take a closer look.

    D3DXVECTOR3 toCenter = mCenter - entity.Position();
    float dist = D3DXVec3Length(&toCenter);

First, the distance from the entity to the center of the sphere is computed.

    if (dist > mRadius)

If that distance is greater than the radius of the sphere, we are outside the bounds of the sphere. We want to be within the sphere, so if we are outside it, we first compute a desired move adjustment by taking the vector from the entity to the center of the sphere and dividing out the distance to the center. Then we normalize the desired movement adjustment, and scale it by the maximum speed of the entity.

    D3DXVECTOR3 desiredMoveAdj = toCenter / dist;
    D3DXVec3Normalize(&desiredMoveAdj, &desiredMoveAdj);
    desiredMoveAdj *= entity.MaxSpeed();

We then get the current desired movement vector, apply our gain, and add the result to the current desired move. Finally, we set our entity's desired move to be the newly computed desired move.

    D3DXVECTOR3 currentDesiredMove = entity.DesiredMove();
    currentDesiredMove += desiredMoveAdj * Gain();
    entity.SetDesiredMove(currentDesiredMove);

At this stage we now have a fairly complete flocking demonstration. There is plenty to learn here, so it would be best if you took the time needed to really examine the source code for the demo in detail.
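The core math of this behavior is easy to verify outside the demo. Below is a hedged sketch using a minimal three-component vector of our own in place of D3DXVECTOR3: the adjustment is zero while the entity is inside the radius, and a max-speed pull back toward the center once it strays outside.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };   // stand-in for D3DXVECTOR3 in this sketch

// Returns the desired-move adjustment for a stay-within-sphere check:
// zero while the entity is inside the radius, otherwise a vector of
// length maxSpeed pointing from the entity back toward the center.
Vec3 StayWithinSphereAdj(const Vec3 &center, const Vec3 &pos,
                         float radius, float maxSpeed)
{
    Vec3 toCenter = { center.x - pos.x, center.y - pos.y, center.z - pos.z };
    float dist = std::sqrt(toCenter.x * toCenter.x +
                           toCenter.y * toCenter.y +
                           toCenter.z * toCenter.z);

    Vec3 adj = { 0.0f, 0.0f, 0.0f };
    if (dist > radius)   // outside the sphere: steer back in
    {
        adj.x = toCenter.x / dist * maxSpeed;
        adj.y = toCenter.y / dist * maxSpeed;
        adj.z = toCenter.z / dist * maxSpeed;
    }
    return adj;
}
```

Dividing `toCenter` by `dist` already normalizes it, which is why the extra normalize call in the demo listing is harmless but redundant.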

Conclusion

In this chapter we made the transition from pathfinding to decision making. Decision making itself is the core of artificial intelligence, as it is the means for making something look or feel intelligent. It binds the other types of artificial intelligence together, as it makes use of the classification systems to make its decisions, and also makes decisions to move to desired destinations, which of course requires pathfinding. Flocking sets the stage for further exploration of decision making in that it is ultimately all about deciding where an entity wants to go.

We have found that flocking can be a useful means to simulate real life behavior of groups which move together in a fashion where they are directly influenced by other nearby entities. This type of behavior is found in real life in flocks of birds, schools of fish, herds of cows and other livestock, as well as crowds of people. To summarize, flocking makes use of Behavior Based Movement, which effectively sums up the desired movements from several behavioral algorithms to produce a final desired movement. The methods we discussed were:

Separation – Keeping the entity at a given distance from all of its neighboring group mates.

Alignment – Keeping the entity aligned in its orientation with the rest of the group (or at least its closest group mate) so as to keep it traveling in the same direction as the group as a whole.

Cohesion – Keeping the entity in line with the center of mass of the overall group, thereby giving the appearance that the entity has a preference to be near the group.

Avoidance – Keeping the entity out of harm from dangerous entities.

Cruising – Giving the entity a will of its own when it has no nearby group mates to guide it.

You can probably imagine lots of scenarios where the behaviors we studied could be applied in game situations. As our fish demo demonstrates, flocking behaviors are useful in any environment. You can use it on land to gather your armies, at sea to manage your naval forces, or even in outer space to manage your armada of various spacecraft. Your flock can be loose and somewhat randomly distributed over a given area or you can have it maintain tight unit formation. And of course, you can apply these same ideas to groups of enemies as well (e.g., a group of Orcs in an RPG game). Lots of interesting ideas abound here and the results on screen can be very compelling.

Even at a simple level you could imagine a flock consisting of only two members – the player and his buddy. That buddy could use any number of these behaviors to stay within a certain distance and travel in roughly the same direction as the player. If attacked, your buddy might have to break away to defend himself, but if he has to stay within a given sphere around the player, he will ultimately flee if the player decides to flee. Obviously you could continue to extend even this simple simulation just by adding other buddies to your flock. Pretty soon you find yourself with some crude but convincing squad-like behavior. The squad-like behavior can be made much more effective with the addition of a robust decision making architecture for each entity (and even for the group itself) and some proper environmental awareness and navigation.

But even just for getting groups of entities to navigate around in the environment and maintain some semblance of cohesion (e.g. in a real-time strategy game) this system can be very helpful. Just create a new behavior (e.g. a grid-based concept for more rigid formations) and you are ready to go.

In the next chapter, we will discuss one of the most flexible decision making systems available to the game AI developer: the finite state machine. Moreover, we will look at our finite state machine application, which allows us to create our own custom state machines, simulate them, and save them for use in our games. Finally, we will examine scripting and how its use can extend our finite state machines.

Chapter 4

Decision Making II: State Machines

Overview

Having covered the topic of flocking in the last chapter, it will be easy for us to extend the concepts of making decisions about which way an entity wants to move into the more general concept of Decision Making. As mentioned early in the course, generalized decision making is the aspect of artificial intelligence systems which provides entities with the appearance of intelligence. The decision making system is the part of the artificial intelligence that takes all of the inputs and produces some action.

In a real time strategy game, the decision making system determines which buildings to build, which units to construct, how many units should collect resources, and what types of resources to collect. It also determines which military units with which to attack you, and how to mount that attack. In a game like chess, the decision making system determines which move to play next, how to get itself out of check, or keep you in check. In the case of a game like Psi Ops: The Mindgate Conspiracy©, the decision making system allowed the enemies and bosses to determine which attacks to make and when. It also determined where the characters wanted to move, when to duck, and when to roll out of the way.

There are quite a few types of decision making systems. The most common are:

Decision Trees
State Machines
Rule Base
Squad Behaviors

We will begin our current chapter by briefly discussing each of these types of decision making systems. Once done, we will move on to examine in detail the system that we will choose for our decision making needs: state machines. State machines are very powerful forms of decision making systems with many different uses. We will talk about the most common uses of state machines and then delve into the state machine demo and its implementation. In this chapter we answer the following questions:

What is a finite state machine?
What are finite state machines used for?
What is a transition diagram?
How is the finite state machine system implemented in our demo?
What is scripting and how is it used in games?
What is Python?
How do I embed Python?
What can I do with my newly embedded Python?

Before we start answering these questions, let us start with some brief discussion of general decision making systems.

Decision Trees

Decision Trees are one of the oldest forms of decision making techniques in games. A decision tree is basically a large set of nested if-then-else statements. Conditions are continually evaluated as you move down the tree. Eventually you end up in a leaf node of the tree where you have arrived at the decision concerning what you plan to do.

[Figure: decision tree with condition nodes Attacking?, See Player?, On Patrol Path?, and Tired of Chasing?, and leaves Patrol Path, Go To Patrol Path, Attack Player, Chase Player, Stop Attacking, and Stop Chasing]

Figure 4.1

In Figure 4.1 we have an enemy that first checks to see if he is currently attacking. If so, he checks if he can see the player. If he is able to see the player, he attacks. If he cannot, he stops attacking. If he was not attacking the player, he checks to see if he is on his patrol path. If so, he patrols on his path, and if he is not, he checks to see if he is tired of chasing the player. If so, he returns to his patrol path. Otherwise, he chases the player.

An important thing to remember is that decision trees are completely stateless. Thus, during every iteration, the entire tree is evaluated again to come up with what the entity needs to do. This can cause state flapping, which is, essentially, entities appearing to exhibit indecision.
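Because a decision tree is just nested if-then-else statements walked from the root, Figure 4.1 can be sketched directly in code. The enum values and flag names below are our own invention for illustration, and the branching follows one plausible reading of the figure (the real diagram has a few more nodes than this sketch):

```cpp
enum eDecision { PATROL_PATH, GO_TO_PATROL_PATH, ATTACK_PLAYER,
                 CHASE_PLAYER, STOP_ATTACKING };

// Stateless evaluation of a Figure 4.1-style tree: the whole tree is
// re-walked from the root on every call, so the result depends only on
// the inputs -- the tree has no memory of what it decided last frame.
eDecision EvaluateTree(bool attacking, bool onPatrolPath,
                       bool seePlayer, bool tiredOfChasing)
{
    if (attacking)
        return seePlayer ? ATTACK_PLAYER : STOP_ATTACKING;

    if (onPatrolPath)
        return PATROL_PATH;

    // off the path and not attacking: we must have been chasing
    return tiredOfChasing ? GO_TO_PATROL_PATH : CHASE_PLAYER;
}
```

The statelessness is visible in the signature: every input must be passed in fresh each iteration, which is exactly what allows a borderline input (say, the player flickering in and out of view) to flap the decision back and forth.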

State Machines

State machines, also known as finite state machines (FSM), take a different approach from decision trees. Rather than evaluating the entire decision process every time, the state machine remembers the last thing it was doing, and only evaluates decisions to see if it should leave the current state.

[Figure: transition diagram with states Patrolling, Attacking, Chasing, and Going to Patrol Path, connected by transitions See Player?, Unable to See Player?, Able to See Player?, Tired of Chasing?, and Reached Patrol Path?]

Figure 4.2

Figure 4.2 illustrates the state machine system under the equivalent circumstances as the decision tree example we just discussed in Figure 4.1. The red circle represents the initial condition. The character begins by walking his patrol. When he sees the player, he immediately attacks him and continues to do so until he cannot see the player anymore. At this point, he starts chasing the player. He will chase the player until he either sees the player again, in which case he goes back to attacking, or until he gets tired of chasing, in which case he returns to his path. When he reaches his patrol path, he will start patrolling again, unless he sees the player first, in which case he will start attacking him.

Rule Base

A rule base is similar to a decision tree, except it can contain some additional branching or conditions. In that sense, it is somewhat like a jump table. Each rule is evaluated, and the rule that receives the highest score (or the first one that evaluates to true, however it may be implemented) determines the decision.

Rule 1:
  Conditions: Player in View
  Decision: Attack

Rule 2:
  Conditions: Player not in view, not chasing, on patrol path
  Decision: Patrol Path

Rule 3:

  Conditions: Player not in view, not chasing, not on patrol path
  Decision: Go to Patrol Path

Rule 4:
  Conditions: Player not in view, chasing, chase time limit has not expired
  Decision: Chase Player

Rule 5:
  Conditions: Player not in view, chasing, chase time limit has expired
  Decision: Go to Patrol Path

Above we have a simple rule base that describes the conditions for each rule and the decision of the rule when it gets the highest score from the scoring system. The scoring system awards the most points to the most specific rule which matches, doles out fewer points for the lesser rules that match, and doles out even fewer points for the rules that partially match.

Squad Behaviors

Squad behavior is a hot topic in games, although it tends to be blown a bit out of proportion. Most people think of squad behavior and associate it with the extensive amount of communication and cooperation required of people acting in squads in the military. This is really only somewhat true with squad behaviors in games. A squad behavior, at its heart, is really just a more complex flocking system. One of the entities is picked as the squad leader, and that leader makes decisions that control the rest of the group. Each member of the group then acts on the orders of the squad leader (or not, depending on circumstances in the game of course).

[Figure: decision tree comparing the player's squad strength (25%, 50%, 75% thresholds) against My Squad > 50%? to choose among Supported Attack, Open Attack, Covered Attack, Flanking Attack, and Flee]

Figure 4.3

Figure 4.3 presents an example of how a squad leader might decide to command his squad based on the relative strength between his squad and the player's squad. If both his squad and the player's squad are near full strength, he would decide to attempt a flanking attack, whereas if his squad is weaker than the player's, he would decide to flee. Alternatively he might decide to try a supported attack, using his
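A minimal sketch of such a scoring rule base follows; the struct layout, the specificity-based scoring, and the sample rules are all our own illustration of the idea rather than code from the course. Each rule marks which condition flags it tests, matching rules score one point per tested condition, and the highest scorer supplies the decision:

```cpp
#include <string>
#include <vector>

struct Rule
{
    bool playerInView, testInView;    // expected value / is it tested at all?
    bool chasing,      testChasing;
    bool onPath,       testOnPath;
    std::string decision;
};

// Score = number of tested conditions that match (i.e. specificity).
// A rule whose tested condition fails scores -1, meaning "no match".
int ScoreRule(const Rule &r, bool inView, bool chasing, bool onPath)
{
    int score = 0;
    if (r.testInView)  { if (r.playerInView != inView) return -1; ++score; }
    if (r.testChasing) { if (r.chasing != chasing)     return -1; ++score; }
    if (r.testOnPath)  { if (r.onPath != onPath)       return -1; ++score; }
    return score;
}

// The highest-scoring matching rule determines the decision.
std::string Evaluate(const std::vector<Rule> &rules,
                     bool inView, bool chasing, bool onPath)
{
    int best = -1;
    std::string decision = "Idle";
    for (size_t i = 0; i < rules.size(); ++i)
    {
        int s = ScoreRule(rules[i], inView, chasing, onPath);
        if (s > best) { best = s; decision = rules[i].decision; }
    }
    return decision;
}
```

Note how a broad rule ("Player in View -> Attack") tests one condition and scores low, while a specific rule ("not in view, not chasing, on path -> Patrol Path") tests three and outranks it whenever both match, which is the specificity ordering described above.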

snipers to target entities of interest while his machine gunners lay down a blanket of suppressing fire. Under other circumstances, he might find conditions to be such that his men should find cover and attack from behind. Or perhaps he would command his men to attack out in the open, while the machine gunners attack from the flanks.

[Figure: an individual squad member's decision tree — Open Attack?, Supported Attack?, Cover Attack?, and Sniper? nodes leading to Attack, Attack from Flank, Snipe from Cover, Suppressive Fire from knee or prone, and Attack from Cover]

Figure 4.4

Consider Figure 4.4. This is an example of the individual squad member's decision tree. If the squad leader commands an open attack, every squad member obeys. If the squad leader commands a supported attack and the squad member is a sniper, he snipes from cover; otherwise he will lay down suppressive fire. If the squad leader commands a cover attack and the squad member is a sniper, he again snipes from cover; in all other cases, everyone attacks from a covered position.

This is a pretty general overview of the concept, but you can probably already imagine various scenarios that involve squad behaviors. We still have a bit of ground to cover in this chapter before we can really begin to implement very interesting squad behavior, but the general groundwork is visible here. We will revisit this topic a bit later in the course when we assemble our final demonstrations and bring all of our concepts together.

4.1 Introduction to Finite State Machines

At their core, finite state machines are a type of graph. Each node in the graph is a state, while the connections between them represent transitions. The machine starts in some state, and while in that state, performs some pre-determined actions. After the actions are performed, all of the possible transitions from the current state into other states are evaluated. When one of the transitions evaluates to true, the state machine changes state into the new state specified by the transition.

Unlike the decision tree we discussed earlier, it is not possible to go from one decision to any other decision. It is only possible to go from a given decision to the subset of all decisions for which this decision is applicable. For instance, in a decision tree, we might have the final decisions of walking on hands, walking, and climbing a ladder. It does not make much sense to be able to transition from climbing a ladder to walking on hands, but the decision tree would not prevent it. A state machine, however, is able to restrict the possible states into which the climbing ladder state could enter to only the walk state.

4.1.1 Transition Diagrams

While state machines can be described by tables or a simple text description, the most effective means to display the possible state transitions is via a transition diagram. A transition diagram displays all of the states that belong to the state machine, and all of the transitions between the states. It can also specify the conditions under which a transition would evaluate to true, and thereby cause a state change. Throughout the rest of this chapter, we will be using transition diagrams to explain our examples.

Some Examples

Let us take a quick look at some example transition diagrams in order to acquire a firm understanding of them before moving on to some example uses of state machines.

Figure 4.5

The transition diagram in Figure 4.5 is very simple. The ovals are the states and the lines with arrows are the transitions, with the arrow pointing towards the destination state if the transition should evaluate to true. In the case of Figure 4.5, the machine is started. It starts in the Init state, which sets up the variables for tracking the desired start value and goal value. From there, we move to the Adder state, which adds 5 to the x value of the system. If the x value evaluates to greater than 5, then the system transitions to the subtractor state. The subtractor state then subtracts 1 from the value each time it is executed until the value is less than 0, in which case the system transitions back to the Adder state.

Figure 4.6

The next example (shown in Figure 4.6) represents a more complicated state machine. This state machine implements a system where, given a value to start at x and a value to get to g, it ultimately resolves x to g. If x is equal to g, we move to the Done state. If x is less than g, we start out in the Adder state where we add 5 at a time; if x is greater than g, we move to the subtractor state which subtracts 5 at a time. If we passed g with x, we start backing up one at a time. In either of the large accumulator states, if x becomes equal to g we move to the Done state.

Now that we have seen how transition diagrams can help us visualize our state machines, let us move on to some actual uses of state machines as they pertain to games.
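The Figure 4.6 machine is small enough to hand-code, which makes the transition diagram concrete. The state names in this sketch are our own labels for the diagram's nodes, with the two backing-up modes split into explicit states:

```cpp
enum eState { ADDER, SUBTRACTOR, BACKUP_DOWN, BACKUP_UP };

// Resolves x to g using the Figure 4.6 scheme: add 5 while below g,
// subtract 5 while above g, and back up 1 at a time after overshooting.
// The loop exit (x == g) corresponds to the Done state.
int Resolve(int x, int g)
{
    eState state = (x < g) ? ADDER : SUBTRACTOR;
    while (x != g)
    {
        switch (state)
        {
        case ADDER:       x += 5; if (x > g) state = BACKUP_DOWN; break;
        case SUBTRACTOR:  x -= 5; if (x < g) state = BACKUP_UP;   break;
        case BACKUP_DOWN: x -= 1; break;   // overshot while adding
        case BACKUP_UP:   x += 1; break;   // overshot while subtracting
        }
    }
    return x;   // x now equals g: the Done state
}
```

Tracing Resolve(0, 12): the Adder state fires at 0, 5, and 10, overshoots to 15, and the backup state walks 14, 13, 12 before the machine reaches Done.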

4.1.2 Uses of Finite State Machines

In games, state machines are used all the time. State machines can be used for all sorts of things including artificial intelligence, animation control, game state, save file systems, screen layout systems, networking engines, and much more. In fact, just about any place where you have a switch statement in C++, there is probably some semblance of a state machine in action. Indeed, state machines are such useful systems specifically because they can be configured to handle a multitude of tasks. Let us take a look at some of the more common uses of finite state machines in game development and talk about some examples of each application.

Animation

One of the most common uses of state machines in games is animation control. State machines provide a convenient means to define the appropriate transitions between animations as well as to keep track of the animation currently being played. Typically, animation-controlling state machines are complicated by virtue of the number of animations and blends that games typically have for characters. Animation sets in games typically include all types of sub-animations, such as blends between running and walking, walking forward to strafing left or right, walking jumps, running jumps, and so on.

Let us take a look at an example of how a state machine could be used to control the transitions between the movements of a character that can stand still, walk, turn, and strafe, using the WSAD style of control. Consider the transition diagram on the next page. The default state of this state machine is the idle state. That is the state where the player is giving no input to the system, and the character is simply standing around. The transition lines specify the keyboard command which the player would give to move the character around. The number of states in this example is pretty high because every animation has a blend in and out state which may or may not have an animation associated with it. Spend some time reviewing the machine. Also note that while it might appear that merely pushing A might drive you right through the transition state, the blending states might also have the condition that their blend be finished before the transition goes through. However, there is no need to overcomplicate the already complex machine with this information.
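A heavily cut-down sketch of the idea follows. The states and the allowed-transition table are invented for illustration (the real diagram has blend-in/blend-out states between each pair); the point is that a request to change animation is honored only if the diagram contains that edge:

```cpp
#include <set>
#include <utility>

enum eAnim { IDLE, WALK, STRAFE, TURN };

class cAnimStateMachine
{
public:
    cAnimStateMachine() : mCurrent(IDLE)
    {
        // Only these transitions are legal; anything else is ignored,
        // which is exactly the restriction a decision tree cannot express.
        Allow(IDLE, WALK);   Allow(WALK, IDLE);
        Allow(IDLE, TURN);   Allow(TURN, IDLE);
        Allow(WALK, STRAFE); Allow(STRAFE, WALK);
    }

    // Attempt a transition; returns false if there is no such edge.
    bool Request(eAnim next)
    {
        if (mAllowed.count(std::make_pair(mCurrent, next)) == 0)
            return false;
        mCurrent = next;
        return true;
    }

    eAnim Current() const { return mCurrent; }

private:
    void Allow(eAnim from, eAnim to)
    {
        mAllowed.insert(std::make_pair(from, to));
    }

    eAnim mCurrent;
    std::set< std::pair<eAnim, eAnim> > mAllowed;
};
```

In a fuller version, each edge would also carry a condition (such as "blend finished"), matching the note above about blending states gating their transitions.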

[Transition diagram for the WSAD movement animation state machine, including the blend-in and blend-out states between idle, walk, turn, and strafe]

Game State

Another very common use of state machines is for preserving game state information. In fact, the reason why it is called game state has a background in state machine terminology. Consider a mission-based scenario.

[Figure: mission flow with three waypoints, enemies to defeat at each, and a mission-failure state if the player or the critical teammate dies]

Figure 4.7

For example, note Figure 4.7. In this mission, there is a critical teammate who must survive the mission, and there are three separate waypoints which need to be visited while defeating all enemies. At each waypoint, you must kill all the enemies before you can continue on. Some goals need to be set, typically in a specific order but not necessarily, and the mission progress is advanced as the goals are satisfied. If at any point along the way you or your critical teammate perishes, you fail the mission. This is a very common way for games to use state machines.

Save File System

Save file systems are another common use of state machines. In many cases, saving or restoring a file from disk can involve many steps which may or may not occur depending upon the situation of the save file.

[Figure: save flow for a memory card based system — card check, space check, slot selection, and overwrite confirmation]

Figure 4.8

Figure 4.8 illustrates a possible save flow for a memory card based save system. Initially, there is a check to see if there is a memory card, and if not, a request is made for one to be inserted. If there is a memory card, we check for sufficient space. If there is not space, we try to free up space, or insert a new card if we want to use a different memory card. If space is available, a slot must be chosen to which we want to save. If that slot is empty, we save to the card. If the slot we picked has a save in it already, we can overwrite it, or pick another slot. We can see how state machines can keep the number of potential options down to a minimum at each state, while maintaining some memory about what was going on beforehand.
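That flow can be written as a single-step transition function whose inputs are the card conditions. The state and input names below are our own labels for a Figure 4.8-style flow, not code from the demo:

```cpp
enum eSaveState { CHECK_CARD, REQUEST_CARD, CHECK_SPACE, FREE_SPACE,
                  PICK_SLOT, CONFIRM_OVERWRITE, WRITE_SAVE, DONE };

// One step of the save flow: given the current state and the relevant
// conditions, return the next state. Each state only consults the
// condition that applies to it, which keeps the options per state small.
eSaveState StepSaveFlow(eSaveState s, bool cardPresent, bool hasSpace,
                        bool slotEmpty, bool overwriteOk)
{
    switch (s)
    {
    case CHECK_CARD:        return cardPresent ? CHECK_SPACE : REQUEST_CARD;
    case REQUEST_CARD:      return CHECK_CARD;    // wait, then re-check
    case CHECK_SPACE:       return hasSpace ? PICK_SLOT : FREE_SPACE;
    case FREE_SPACE:        return CHECK_SPACE;   // re-check after freeing
    case PICK_SLOT:         return slotEmpty ? WRITE_SAVE : CONFIRM_OVERWRITE;
    case CONFIRM_OVERWRITE: return overwriteOk ? WRITE_SAVE : PICK_SLOT;
    case WRITE_SAVE:        return DONE;
    default:                return DONE;
    }
}
```

Driving this function in a loop until it reaches DONE reproduces the whole flow, and the "memory" the text mentions is simply the current eSaveState value carried between calls.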

Artificial Intelligence

The last of the most common uses of state machine systems is for artificial intelligence. State machines are incredibly useful for artificial intelligence because of their inherent ability to “remember” information. What the AI decides to do is dependent upon what he is doing at the moment as well as what is going on around him. This is a very important distinction because other systems require that the current actions of the system be fed into the system externally or otherwise bolted on. As discussed earlier in the chapter, we can use state machines to drive the individual characters as well as group leaders.

4.2 The State Machine Demo

A demo state machine system has been implemented for this chapter which will allow us to play with state machines in some depth. The State Machine Demo provides both an interface through which you can see a state machine working, as well as a means by which you can create and save state machines for use in your game. There is definitely room for improvement on the feature set of the tool, but it is a solid start for a generalized system where you can build custom state machines for any purpose in your game. Let us take a look at it and then put together some examples of state machines that could drive AI in different games. Let us take a look at the interface:

Figure 4.9

The pane on the right in Figure 4.9 shows a representation of the state machine. The red state is the current state. The left pane shows a tree view of the current state machine. There is always a single machine at the top level, and it can contain any number of states. Each state can have a list of actions that it performs when the state is entered, as well as when it is exited. There is also a list of actions it can perform for each simulation iteration. Each state also has a list of transitions that it will evaluate after its simulation iteration to see if it should transition out. On the tool bar, there are options for creating a new state machine, opening a saved state machine, saving the current state machine, resetting the current state machine, and iterating the simulation for one iteration of the current state machine.

4.2.1 The Implementation

Like all of the other demos included in this course, the State Machine Demo makes use of MFC. The CStateMachineDoc class holds onto the cStateMachine object and is responsible for serializing it. The CStateMachineView is responsible for drawing the state transition diagram, and the CStateTreeView class is responsible for ensuring the tree view is up to date. There are various dialogs responsible for aiding in adding/removing/updating states, actions, and transitions.

[Figure: class diagram — CStateMachineApp with CStateMachineView, CStateTreeView, and CStateMachineDoc; the document holds the cStateMachine, which contains cState, cTransition, and cAction objects]

Figure 4.10

Let us take some time to delve into the implementation of the state machine in detail. The cStateMachine class maintains a vector of pointers to cState objects. Each of the cState objects maintains a vector of pointers to the cAction objects used in the enter, exit, and iteration phases of the state. Similarly, the cState objects maintain a vector of cTransition object pointers. The cAction class has derived versions for actions that act on a single value, as well as scripted actions which we will discuss later. Additionally, the cTransition class has some derived types for scripted transitions and transitions that do simple comparison operations on a value held by the state.

One thing you might notice is the use of shared_ptr<> template classes. The shared_ptr<> template class is an automatic reference counting class provided by the Boost template libraries (www.boost.org). The shared_ptr<> template smart pointer is not intrusive, so it uses additional storage for the ref counts and use counts. This is important to note. It is absolutely critical that the original or a copy of the shared_ptr<> initially created be used when making new shared_ptr<> objects pointing to the same memory, since creating a new shared_ptr<> will create a new instance of the ref and use count storage, or the reference counting will break.

The State Machine Class

class cStateMachine : public cObject
{
public:
    typedef vector< shared_ptr<cState> > tStateList;

    cStateMachine(void);
    virtual ~cStateMachine(void);

    void                Iterate(void);
    void                Reset(void);

    void                AddState(shared_ptr<cState> state);
    void                RemoveState(shared_ptr<cState> state);

    shared_ptr<cState>  GetStatePtr(cState *state);
    shared_ptr<cState>  GetStatePtrByName(string &name);

    tStateList          &States(void) { return(mStates); }

    void                ToDot(string &dotString);

    int                 Serialize(ofstream &ar);
    int                 UnSerialize(ifstream &ar);

protected:
    tStateList          mStates;
    shared_ptr<cState>  mCurrentState;
};

Above we have the class declaration for the cStateMachine class. Let us now go over the details of this class.
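The shared_ptr<> warning is easy to demonstrate with std::shared_ptr, whose semantics here match the Boost original the text uses. In this sketch (the cDummyState struct is ours), copies of one shared_ptr share a single use count, while a second shared_ptr built from the raw pointer gets its own count and would double-delete:

```cpp
#include <memory>

struct cDummyState { int value; };

// Correct usage: every new shared_ptr is a copy of the original, so all
// of them share one reference count.
long SharedCount()
{
    std::shared_ptr<cDummyState> original(new cDummyState());
    std::shared_ptr<cDummyState> copy = original;   // shares the count
    std::shared_ptr<cDummyState> another = copy;    // still the same count
    return original.use_count();                    // three owners, one count
}

// The hazard: a second shared_ptr built from the raw pointer gets its own
// control block with a count of 1, so each believes it is the sole owner.
// (A no-op deleter keeps this sketch from actually double-deleting, which
// would be undefined behavior.)
long SeparateCounts()
{
    std::shared_ptr<cDummyState> first(new cDummyState());
    std::shared_ptr<cDummyState> second(first.get(), [](cDummyState *) {});
    return first.use_count() + second.use_count();  // 1 + 1, not 2 shared
}
```

This is precisely why the demo's cStateMachine exposes GetStatePtr and GetStatePtrByName: callers retrieve the machine's existing shared_ptr rather than wrapping a raw cState pointer themselves.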

cStateMachine(void);
virtual ~cStateMachine(void);

The cStateMachine class provides a default constructor which builds an empty state list and has a NULL current state. The destructor is virtual to provide for correct deletion of derived types.

void cStateMachine::Iterate(void)
{
    if (mCurrentState)
    {
        mCurrentState->Iterate();

        for( cState::tTransitionList::iterator it =
                 mCurrentState->Transitions().begin();
             it != mCurrentState->Transitions().end();
             ++it )
        {
            cTransitionPtr trans = *it;
            if (trans->ShouldTransition())
            {
                mCurrentState->Exit();
                mCurrentState = trans->TargetPtr();
                mCurrentState->Enter();
                break;
            }
        }
    }
}

The Iterate method iterates the current state's simulation, and checks to see if the state should transition afterwards. Let us walk through this method.

    if (mCurrentState)

First, we check to see if we have a current state pointer to ensure we can iterate its simulation. In the future, we might pass a time delta in to provide for frame independent AI code.

    mCurrentState->Iterate();

We then iterate the current state.

    for( cState::tTransitionList::iterator it =
             mCurrentState->Transitions().begin();
         it != mCurrentState->Transitions().end();
         ++it )

We then loop over all of the transitions for the current state.

    if (trans->ShouldTransition())

We check each transition to see if the criteria for that transition have been met. If we determine we should transition, we first exit the existing state, allowing any exit time actions to take place.

    mCurrentState->Exit();
    mCurrentState = trans->TargetPtr();
    mCurrentState->Enter();
    break;

We then set our current state to be the target state of the transition. Now we enter the new current state, and execute all of the actions in the new state's enter list. We break out of the loop, since we have transitioned, and we do not want to try to transition again.

void cStateMachine::Reset(void)
{
    if (!mStates.empty())
        mCurrentState = mStates.front();

    if (mCurrentState)
        mCurrentState->Reset();
}

The Reset method makes the first state in the state list the current state, and resets its value back to its initial value. This is used to reset the machine.

void AddState(shared_ptr<cState> state);
void RemoveState(shared_ptr<cState> state);

The AddState and RemoveState methods provide a means to add and remove states from the machine. The machine takes responsibility for deleting the states, but the shared_ptr<> classes should do that for us when the last reference to the state goes away.

shared_ptr<cState> GetStatePtr(cState *state);
shared_ptr<cState> GetStatePtrByName(string &name);

These methods provide a means to get the actual shared_ptr<> object that the machine is holding onto. This is mostly necessary for the GUI code, but it also helps so we do not miscalculate our ref counting.

tStateList &States(void) { return(mStates); }

tStateList &States(void) { return(mStates); }

Here we have an accessor to gain access to the list of states the machine has.

int Serialize(ofstream &ar);
int UnSerialize(ifstream &ar);

The Serialize and UnSerialize methods are the methods responsible for writing and reading the state machine to/from disk.

void ToDot(string &dotString);

The ToDot() method builds a string which contains an AT&T Graphviz™ (http://www.research.att.com/sw/tools/graphviz/) DOT format representation of the state machine. This is used by the transition view to draw the diagram.

The State Class

class cState : public cObject
{
public:
    typedef vector<shared_ptr<cTransition> > tTransitionList;
    typedef vector<shared_ptr<cAction> >     tActionList;

    cState(string name) : mName(name), mValue(0), mInitialValue(0) {}
    cState(float initialValue, string name) :
        mValue(initialValue), mInitialValue(initialValue), mName(name) {}
    virtual ~cState(void);

    float Value(void) const { return(mValue); }
    float InitialValue(void) const { return(mInitialValue); }
    void  SetValue(float value) { mValue = value; }
    void  SetInitialValue(float value) { mInitialValue = value; }

    const string &Name(void) const { return(mName); }
    void         SetName(const string &name) { mName = name; }

    void Reset(void);
    void Enter(void);
    void Iterate(void);
    void Exit(void);

    tTransitionList &Transitions(void) { return(mTransitions); }
    tActionList     &Actions(void) { return(mActions); }
    tActionList     &EnterActions(void) { return(mEnterActions); }
    tActionList     &ExitActions(void) { return(mExitActions); }

    shared_ptr<cAction>     GetActionPtr(cAction *action, tActionList &actions);
    shared_ptr<cTransition> GetTransitionPtr(cTransition *trans);

    int Serialize(ofstream &ar);
    int UnSerialize(ifstream &ar, cStateMachine &sM);

protected:
    int SerializeActions(ofstream &ar, tActionList &aL, string actionListType);
    int UnSerializeActions(ifstream &ar, tActionList &aL, shared_ptr<cState> &state);

    float           mValue;
    float           mInitialValue;
    tTransitionList mTransitions;
    tActionList     mActions;
    tActionList     mEnterActions;
    tActionList     mExitActions;
    string          mName;
};

The cState class provides the list of actions to be run at iteration time, as well as the lists of actions to be run when the state is entered or exited. The state class also provides the transitions which determine which other states can be entered from this state. Let us take a look at the state class in greater depth.
cState(string name) : mName(name), mValue(0), mInitialValue(0) {}
cState(float initialValue, string name) :
    mValue(initialValue), mInitialValue(initialValue), mName(name) {}
virtual ~cState(void);

The cState class provides no default constructor, but does provide a constructor to initialize the name of the state, as well as a means to set an initial value and the name of the state. The destructor is virtual to provide for correct polymorphic destruction of derived classes.
float Value(void) const { return(mValue); }
float InitialValue(void) const { return(mInitialValue); }
void  SetValue(float value) { mValue = value; }
void  SetInitialValue(float value) { mInitialValue = value; }

cState provides accessors to its initial value and current value members.


void cState::Reset(void) { mValue = mInitialValue; Enter(); }

The Reset() method resets the current value in the state to the initial value, and executes all of the actions in its enter list. This is used for resetting the state machine.
const string &Name(void) const { return(mName); }
void         SetName(const string &name) { mName = name; }

The class provides a means by which to access the name, or update it.
void Enter(void);
void Iterate(void);
void Exit(void);

The Enter(), Iterate(), and Exit() methods execute the list of actions appropriate to the method by iterating over the list and calling Execute() on each action therein.
tTransitionList &Transitions(void) { return(mTransitions); }
tActionList     &Actions(void) { return(mActions); }
tActionList     &EnterActions(void) { return(mEnterActions); }
tActionList     &ExitActions(void) { return(mExitActions); }

The class provides accessors to the lists of transitions and actions it contains.
shared_ptr<cAction>     GetActionPtr(cAction *action, tActionList &actions);
shared_ptr<cTransition> GetTransitionPtr(cTransition *trans);

Similar to the cStateMachine class, the cState class provides a means to get access to the actual shared_ptr<> objects contained within the class to keep the reference counting up to date.
int Serialize(ofstream &ar);
int UnSerialize(ifstream &ar, cStateMachine &sM);

The class also provides methods to serialize itself to/from a file.


The Action Classes
class cAction : public cObject
{
public:
    cAction(cStatePtr state) : mState(state) {}
    virtual ~cAction(void) {}

    virtual void   Execute(void) {}
    virtual string Label(void) { return(string("Base Action")); }

    cState    &State(void) const { return(*StatePtr()); }
    cStatePtr StatePtr(void) const { return(mState.lock()); }

protected:
    weak_ptr<cState> mState;
};

The action class does the work in a state. Notice the Action class has a weak_ptr<> to the state it belongs in. This is another Boost template library smart pointer. A weak_ptr<> maintains a pointer to the object, but does not increment its ref count until it is locked. This is to prevent cyclic ref counted pointer connections which would prevent the memory from being freed. A shared_ptr<> object is typically used for something that is a HAS A relationship whereas a weak_ptr<> object is typically used for something that is a USES A relationship. The cState owns the cAction, so the cState gets the shared_ptr to the cAction, and the cAction gets a weak_ptr<> back to the cState. Let us examine this class a little more closely.
cAction(cStatePtr state) : mState(state) {}
virtual ~cAction(void) {}

The constructor takes a shared_ptr<> to the state the action belongs to. This ensures proper ref counting, and weak_ptr<> objects need a shared_ptr<> object to be constructed. The destructor is virtual to allow for proper polymorphic destruction of derived types.
virtual void Execute(void) {};

The execute method does nothing in the base class.
virtual string Label(void) { return(string("Base Action")); }

The label is for the UI. It is overridden in the derived types.
cState    &State(void) const { return(*StatePtr()); }
cStatePtr StatePtr(void) const { return(mState.lock()); }

The StatePtr() method locks the weak_ptr<> object, which returns a shared_ptr<> object. The State() method calls StatePtr() to get the shared_ptr<> object, and then dereferences it to hand back a reference.
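Python's standard weakref module behaves much like weak_ptr<>, which may make the locking idiom above more concrete. This sketch is illustrative only and relies on CPython's reference-counting collector freeing objects immediately:

```python
import weakref

class GameState:
    """Stand-in class; weakref works with ordinary class instances."""
    def __init__(self, name):
        self.name = name

state = GameState("patrol")   # strong reference, like a shared_ptr<>
weak = weakref.ref(state)     # weak reference, like a weak_ptr<>; no ref count held

locked = weak()               # "locking": calling the ref yields the object, or None
assert locked is state

del state
del locked                    # drop every strong reference
# In CPython the object is freed at once, so the weak ref now fails to "lock"
# and weak() returns None -- the analogue of weak_ptr<>::lock() coming back empty.
```

As with weak_ptr<>, the weak reference breaks the ownership cycle: the object's lifetime is governed only by its strong references.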


class cValueBasedAction : public cAction
{
public:
    cValueBasedAction(cStatePtr state, float value) :
        cAction(state), mValue(value) {}
    virtual ~cValueBasedAction(void) {}

    virtual void   Execute(void) {}
    virtual string Label(void) { return(string("Value based Action")); }

    float Parameter(void) const { return(mValue); }

protected:
    float mValue;
};

The cValueBasedAction is a base class for actions that act on the cState object’s value data member using a parameter. Let us look at what is different from the cAction base class.
cValueBasedAction(cStatePtr state, float value) : cAction(state), mValue(value) {}

The constructor now also takes a value to initialize the cValueBasedAction’s parameter.
virtual void Execute(void) {};

The execute method still does nothing. The derived types will take care of that.
float Parameter(void) const { return(mValue); }

We now have a parameter member accessor. Let us take a look at the derived types’ Execute methods, since that is all that changes from now on:
void cAddAction::Execute(void) { State().SetValue(State().Value() + mValue); }

The cAddAction adds its value to its cState object’s value.
void cSubtractAction::Execute(void) { State().SetValue(State().Value() - mValue); }

The cSubtractAction subtracts its value from its cState object’s value.
void cInverseSubtractAction::Execute(void) { State().SetValue(mValue - State().Value()); }

The cInverseSubtractAction subtracts its cState object’s value from its value.
void cMultiplyAction::Execute(void) { State().SetValue(State().Value() * mValue); }


The cMultiplyAction multiplies its value by its cState object's value.

void cDivideAction::Execute(void)
{
    State().SetValue(State().Value() / mValue);
}

The cDivideAction divides its cState object's value by its value.

void cInverseDivideAction::Execute(void)
{
    State().SetValue(mValue / State().Value());
}

The cInverseDivideAction divides its value by its cState object's value.

There is one other type of cAction derived class, the cScriptedAction, but we will address it later after we have discussed scripting.

The Transition Classes

class cTransition : public cObject
{
public:
    cTransition(cStatePtr source, cStatePtr target) :
        mSource(source), mTarget(target) {}
    virtual ~cTransition(void) {}

    virtual bool ShouldTransition(void) { return(false); }

    cState    &Source(void) { return(*SourcePtr()); }
    cState    &Target(void) { return(*TargetPtr()); }
    cStatePtr SourcePtr(void) { return(mSource.lock()); }
    cStatePtr TargetPtr(void) { return(mTarget.lock()); }

    virtual string Label(void) { return(string("Base Transition")); }

protected:
    weak_ptr<cState> mSource;
    weak_ptr<cState> mTarget;
};

The cTransition class determines if a given cState should transition into the target state given by the transition. Like the cAction class, the cTransition class uses weak_ptr<> objects for its cState pointers. Let us take a deeper look at this class.

cTransition(cStatePtr source, cStatePtr target) :
    mSource(source), mTarget(target) {}
virtual ~cTransition(void) {}

The cTransition class constructor takes a source cState pointer and a target cState pointer. This will define a transition from source to target. The destructor, as always, is virtual to provide for correct polymorphic destruction of derived types.
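Stepping back briefly: every one of the value-based actions above is just a binary arithmetic operation applied to the state's value. A hypothetical Python sketch (invented names, not the chapter's framework) makes the whole family compact by passing the operation in as a callable:

```python
import operator

class ValueState:
    """Minimal stand-in for cState: it only carries a float value."""
    def __init__(self, value=0.0):
        self.value = value

class ValueBasedAction:
    """Like cValueBasedAction, but the operation is supplied as a callable."""
    def __init__(self, state, parameter, op):
        self.state = state
        self.parameter = parameter
        self.op = op

    def execute(self):
        # op(current value, parameter) -> new value
        self.state.value = self.op(self.state.value, self.parameter)

s = ValueState(10.0)
ValueBasedAction(s, 4.0, operator.add).execute()         # 10 + 4  -> 14
ValueBasedAction(s, 2.0, operator.truediv).execute()     # 14 / 2  -> 7
ValueBasedAction(s, 3.0, operator.mul).execute()         # 7 * 3   -> 21
ValueBasedAction(s, 30.0, lambda v, p: p - v).execute()  # 30 - 21 -> 9
```

The final lambda plays the role of cInverseSubtractAction; an inverse divide would simply be `lambda v, p: p / v`.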

virtual bool ShouldTransition(void) { return(false); }

The base class ShouldTransition method simply returns false. The derived types will do some work here to determine if the source state should transition to the target state.

cState    &Source(void) { return(*SourcePtr()); }
cState    &Target(void) { return(*TargetPtr()); }
cStatePtr SourcePtr(void) { return(mSource.lock()); }
cStatePtr TargetPtr(void) { return(mTarget.lock()); }

Like the cAction class, we have accessors to both references to the source and target states as well as shared_ptr<> objects.

virtual string Label(void) { return(string("Base Transition")); }

The label is overridden by the derived classes to provide the UI with a descriptive label.

class cComparitorTransition : public cTransition
{
public:
    cComparitorTransition(cStatePtr source, cStatePtr target,
                          float threshold, cComparitor *func) :
        cTransition(source, target), mThreshold(threshold), mFunc(func) {}

    virtual ~cComparitorTransition(void)
    {
        if (mFunc)
            delete mFunc;
        mFunc = NULL;
    }

    virtual bool ShouldTransition(void)
    {
        return((*mFunc)(Source().Value(), mThreshold));
    }

    float Threshold(void) const { return(mThreshold); }
    void  SetThreshold(float threshold) { mThreshold = threshold; }

    virtual string Label(void)
    {
        string label;
        char   number[16];

        label += (*mFunc).Label();
        label += " ";
        sprintf(number, "%0.3f", mThreshold);
        label += number;

        return(label);
    }

protected:
    float       mThreshold;
    cComparitor *mFunc;
};

The cComparitorTransition class is a simple transition class which uses a comparitor to make a comparison between the value of the state and the parameter given to the transition. It will use this comparison to determine if the state should transition to the target state. Let us go over this class.

cComparitorTransition(cStatePtr source, cStatePtr target,
                      float threshold, cComparitor *func) :
    cTransition(source, target), mThreshold(threshold), mFunc(func) {}

The constructor takes the source and target states, as the base class does, but it also takes a threshold and a pointer to a comparison function.

virtual ~cComparitorTransition(void) { if (mFunc) delete mFunc; mFunc = NULL; }

The destructor frees the comparison function if it has one.

virtual bool ShouldTransition(void) { return((*mFunc)(Source().Value(), mThreshold)); }

The ShouldTransition method simply returns the result of the comparison function operation. It uses the value of the state and the threshold of the transition for the comparison operands.

float Threshold(void) const { return(mThreshold); }
void  SetThreshold(float threshold) { mThreshold = threshold; }

We also have some accessors for the transition's threshold.
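Since a comparitor is simply a named binary predicate, the cComparitorTransition idea can be mimicked in a few lines of Python using the standard operator module. This is a hedged sketch with invented names, not the chapter's classes:

```python
import operator

class ComparitorTransition:
    """Fires when op(value, threshold) is true, like cComparitorTransition."""
    OPS = {"<": operator.lt, "<=": operator.le,
           ">": operator.gt, ">=": operator.ge,
           "==": operator.eq, "!=": operator.ne}

    def __init__(self, threshold, op_name):
        self.threshold = threshold
        self.op_name = op_name
        self.op = self.OPS[op_name]

    def should_transition(self, value):
        return self.op(value, self.threshold)

    def label(self):
        # Same idea as the C++ Label(): comparitor symbol plus a "%0.3f" threshold.
        return "%s %0.3f" % (self.op_name, self.threshold)

t = ComparitorTransition(0.75, ">=")
```

Where the C++ design needs one small class per comparison, a dynamic language can look the predicate up by name, which is one reason scripting layers pair so well with data-driven state machines.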

Our Label method makes a descriptive label for our UI. Note the use of the comparison function's Label method to supplement the label of the transition.

Now that we have seen the cComparitorTransition implementation, let us take a look at the cComparitor object itself.

class cComparitor
{
public:
    virtual bool   operator()(float lhs, float rhs) = 0;
    virtual string Label(void) = 0;
};

The cComparitor class itself is very simple. It provides a binary operator, which is pure virtual in the base class, as well as a label for the UI (again pure virtual). Now let us see some actual implementations of the cComparitor class, covering the different comparisons we have already implemented.

bool cLessThanComparitor::operator()(float lhs, float rhs)
{
    return(lhs < rhs);
}

The cLessThanComparitor checks to see if the left hand side value is less than the right hand side value.

bool cLessThanOrEqualComparitor::operator()(float lhs, float rhs)
{
    return(lhs <= rhs);
}

The cLessThanOrEqualComparitor checks to see if the left hand side value is less than or equal to the right hand side value.

bool cGreaterThanComparitor::operator()(float lhs, float rhs)
{
    return(lhs > rhs);
}

The cGreaterThanComparitor checks to see if the left hand side value is greater than the right hand side value.

bool cGreaterThanOrEqualComparitor::operator()(float lhs, float rhs)
{
    return(lhs >= rhs);
}

The cGreaterThanOrEqualComparitor checks to see if the left hand side value is greater than or equal to the right hand side value.

bool cEqualComparitor::operator()(float lhs, float rhs)
{
    return(lhs == rhs);
}

The cEqualComparitor checks to see if the left hand side value is equal to the right hand side value.

bool cNotEqualComparitor::operator()(float lhs, float rhs)
{
    return(lhs != rhs);
}

The cNotEqualComparitor checks to see if the left hand side value is not equal to the right hand side value.

The only other type of cTransition we will discuss is the cScriptedTransition, which we will address later after we have talked about scripting.

We now have a good understanding of state machines and how they function, and we have gone through a large portion of the source code used in our chapter demonstration. Please refer back to the diagrams and descriptions in this section if you are unsure of how these pieces all fit together. At the moment our state machine implementation is fairly robust and can certainly be extended to handle lots of different needs. But we can make it even more powerful and flexible by adding scripting to our system. This will be the subject of the next section of this chapter.

4.3 Scripting in Games

Not everything in our game has to be implemented using a heavyweight language such as C++. C++ is very powerful, but it does have some pitfalls in game development. Perhaps the greatest flaw is the inability to easily and quickly update the system. In many cases, recompiling a game can take a very long time. An alternative approach that allows game designers to make various updates and modifications to game behavior and functionality without changing the core source code (and without the need for repeated compilations for every change) would certainly be useful. Besides, game designers do not necessarily need access to the actual code of the game. Indeed, it is argued that they should not have such access; you might even think it fairly dangerous to grant that level of access to team members who are not qualified C++ developers.

This is where scripting and scripting languages come into play. Scripting is a methodology for making code data driven. It can be used to make a variety of things happen in your game. For example, your AI controlled NPC can be scripted to take a specific action when interacting with the player. Scripting can also be used to control simple or complex sequences of events, such as which movies play when, or which sound effects and particle effects are triggered to occur at a certain time during a non-interactive in-game cinema sequence.

Scripts are generally written using a lightweight programming language that is much easier for less technical team members to learn, many of whom will not always be accomplished computer scientists with extensive C++ training. The language commands are most often interpreted at runtime by the scripting system, although some systems provide just-in-time compiling to speed things up. Thus our main game source code does not need to be recompiled every time changes to action sequences or other events are made. Scripting is typically used in places where a lot of iteration is required, because iteration can be done faster by re-running the script rather than recompiling and linking the game and launching the scenario again.

Scripting is also typically memory management friendly, at least from a user's point of view. In most scripting systems you can simply define a variable and it will be cleaned up whenever the garbage collection system determines it is no longer in use. This can be good and bad, since it can cause strange memory usage patterns if you are working on embedded platforms and not on a PC where you have virtual memory.

Some game development shops, such as Bioware (makers of Neverwinter Nights©), prefer to create their own scripting system, while others, like our team at Midway Games (makers of Psi Ops: The Mindgate Conspiracy©), prefer to use existing systems like Python. Developing a proprietary scripting system requires a significant amount of time and effort. With the maturation of existing scripting languages and systems and their current flexibility and stability, many game shops are making the decision to go with technologies that are freely available and ready for use. In this course we will follow that trend and use a scripting language that is powerful and easily integrated into our existing C++ source code.

There are numerous scripting languages available, such as Perl, Python, Lua, JavaScript and its variants, VBScript, and UnrealScript. We have chosen Python as the scripting language for this course. Fortunately, Boost provides a wonderful C++ to Python interface library that will make our life very easy. There are other options available should you decide that a different choice better suits your game project, and even if you do decide to adopt an alternative language, the concepts and techniques we will study in this chapter should help prepare you for what you will need to do to get up and running with another system.

4.4 Introduction to Python

Python is an interpreted object oriented programming language. It includes features like classes, exceptions, modules, dynamic typing, high level dynamic data types, and a large library of both built-in functionality as well as extended functionality via modules implemented in C or C++. In this section, we will take a crash course in Python. While you will receive a very quick introduction to Python here in this course, it is highly recommended that you visit the Python website (www.python.org) and look at their tutorials and other materials to learn more and get more comfortable with the language.

4.4.1 Scope and Whitespace

An important thing to be aware of right upfront is that Python determines scope based on leading whitespace and does not use the familiar curly braces { } or similar character driven means. Thus, if you define a method at leading whitespace count 0, every line that is part of that method must be at leading whitespace count 1 or more. You will better understand this when we get to the examples, but it is advised that you download a decent text editor that allows visualization of whitespace. Fortunately, the Scintilla (http://www.scintilla.org/) editor has been embedded into the script editor dialogs in the State Machine Demo, so in the demo you will see whitespace.
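The consequences of the whitespace rule are easy to demonstrate: the two functions below contain the same statements, and only the indentation of the final return differs. (This small illustration uses modern Python 3 syntax, whereas the chapter's own examples use the Python 2 print statement of its era.)

```python
def scoped_sum(values):
    total = 0
    for v in values:
        total += v       # indented under the for loop: runs once per element
    return total         # dedented to function level: runs once, after the loop

def scoped_sum_first(values):
    total = 0
    for v in values:
        total += v
        return total     # one level deeper: inside the loop, returns immediately

print(scoped_sum([1, 2, 3]))        # the full sum
print(scoped_sum_first([1, 2, 3]))  # only the first element
```

A single level of indentation silently changed which lines belong to the loop — exactly why an editor that visualizes whitespace is worth having.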

4.4.2 Default Types and Built-Ins

Python has a standard assortment of data types. You do not have to specify the type when a variable is created; it is determined from context. There are four distinct types of numbers: integers, long integers, floating point numbers, and complex numbers. Integers are equivalent to longs in C++ (32 bits), and floating point numbers in Python are double precision, equivalent to doubles in C++ (64 bits). Long integers are like integers, only bigger than you would think. Complex numbers are numbers with i conjugate parts; there is probably not a major need to work with those, because complex numbers are not used much in games. Boolean numbers are a special type of integer, and Boolean types can be set to 0 or 1, or False or True.

All of the numerical operations are the same as C++ (including % for modulo), and there is also a power operator ** (i.e., x ** y is x to the yth power).

Comparisons are done very much like C/C++:

<         Less Than
<=        Less Than or Equal
>         Greater Than
>=        Greater Than or Equal
==        Equal
<> or !=  Not Equal
is        Object Identity
is not    Negated Object Identity
and       Logical AND
or        Logical OR
not       Negation Operator

The only major differences are in the "is", "is not", "and", "or", and "not" constructs; refer to the Python documentation for proper descriptions.

Sequence types are also supported in Python. Strings are considered a sequence type, as are lists, tuples, buffers, and xrange objects. The important thing to remember is that strings can be written using single or double quotes; Python does not care, as long as the start and end of the literal string are done in the same way. Python also supports iterator types for iterating across sequences. Most sequence objects support the "in" and "not in" operations, which are used in iterating

across items in the container. The built-in method len() can also be used to determine the number of items in the sequence, and the built-ins min() and max() provide the smallest and largest item in the container. Slices can be performed on most sequence types using [i:j:k], and a single item can be obtained with [i]. There is a plethora of string operations which allow for various things such as changing case, stripping whitespace, and determining if a character is a digit or alpha-numeric. See the documentation for more details.

There are also File objects and Class objects, as well as mapping types such as Dictionaries and Mappings. Please refer to the official documentation for these concepts.

4.4.3 Classes

Python has a concept of classes, just like C++. In fact, as you will see later, we can even derive Python classes from our C++ classes! Python does not have a concept of public versus private data, however, so every data member and method will be public.

All class methods must have the first parameter be self. The self value is basically the this pointer in C++. While C++ will explicitly pass the this pointer on the stack to the method, Python requires we specify it manually. You can also put documentation strings in the first line after a class or method declaration, which Python can tell you about.

Let us take a look at an example Python class to see some of these concepts just discussed in action:

class HelloWorld:
    def say_hi(self):
        print "Hello World!"

Here we have a definition of the HelloWorld class. This class provides a single method, say_hi(). Notice the indenting that was mentioned before. Pay special attention to the indention, as this is how Python knows which lines belong to which methods, and what belongs to the class itself. The class scope is everything greater than 0 whitespace, and the say_hi scope is everything greater than 1. Let us look at a more complex example.

class HelloWorldCount:
    "document string for hello world count"
    hi_count = 0
    initialized = False

    def __init__(self):
        self.hi_count = 0
        self.initialized = True

    def say_hi(self):
        print "Hello World!"
        self.hi_count = self.hi_count + 1

Here we have a more complex class. Note the string under the class declaration, which is a document string. This could be used in your game to describe the event you are working on, or you could ignore it altogether.

hi_count = 0
initialized = False

These are members which are stored in the class, and they have initial values assigned. Note that we do not have to specifically declare the variable type; Python understands how the variables are used based on context. In this case we are assigning particular values that make it pretty clear what types of variables these should be.

def __init__(self):
    self.hi_count = 0
    self.initialized = True

This method is similar to a constructor. It is not identical to a constructor, but it is as close as we get in Python. In this case, it initializes the hi_count to 0 (not really necessary, just to illustrate what the __init__ method does), as well as sets the initialized member to True. The __init__ method can have additional parameters as well, so you can initialize a class with external data.

def say_hi(self):
    print "Hello World!"
    self.hi_count = self.hi_count + 1

In the say_hi method, we again print "Hello World!" but we also increment our hi_count. Notice the member variables are scoped using the self member. This is required. If desired, we could call other methods using the self member as well.

Python also supports inheritance and multiple inheritance like so:

class SpecialHelloWorld(HelloWorld):
    def say_hi(self):
        HelloWorld.say_hi(self)
        print "Special Hello World!"

Python class methods are always virtual; thus, the most specialized derived type's method will be called. Our base class implementation can be called by simply using the base class name as if it were an object,

and passing in our self member. If multiple inheritance were desired, we would list the classes separated by commas in the parentheses after the new class name.

4.4.4 Functions

Freestanding functions in Python can be defined outside the scope of any class. They can be called from within any class that includes the package in which the function is defined. It is also possible to define freestanding functions that are implemented in C or C++, using the methods we will describe shortly. It is important to note that functions always pass their parameters by object reference, so if the object is mutable, any changes made to it will be reflected after the function call. This is actually a fairly common feature in scripting languages.
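The pass-by-object-reference behavior is worth seeing concretely. In this illustrative snippet (Python 3 syntax, invented names), a mutable argument changes for the caller, while rebinding a parameter does not:

```python
def add_hat(hat_list):
    # The caller's list object itself arrives here, so mutating it
    # is visible after the call returns.
    hat_list.append("fedora")

def replace_count(count):
    # Rebinding the local name does not touch the caller's variable;
    # only this function's reference now points at 99.
    count = 99
    return count

hats = ["top hat"]
add_hat(hats)         # hats is mutated in place
n = 1
replace_count(n)      # n is unchanged
```

The distinction to remember is mutation versus rebinding: methods like append() change the shared object, while assignment to a parameter name only changes what the local name refers to.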

A few examples:

def pythagorean(a, b):
    "Find the length of the hypotenuse c as given by sides a and b"
    c = a*a + b*b
    c = sqrt(c)
    return c

Here we have the Pythagorean Theorem a² + b² = c². This function finds the value of c given a and b, c = sqrt(a² + b²).

def toCelsius(fahrenheit):
    "Find equivalent temperature in Celsius degrees from Fahrenheit degrees"
    celsius = (5.0/9.0) * (fahrenheit - 32.0)
    return celsius

Here we have a function which converts degrees Fahrenheit to degrees Celsius.

def toFahrenheit(celsius):
    "Find equivalent temperature in Fahrenheit degrees from Celsius degrees"
    fahrenheit = (9.0/5.0) * celsius + 32.0
    return fahrenheit

Here we have a function which converts degrees Celsius to degrees Fahrenheit.

def doNothing():
    "Do a whole lot of nothing"
    pass

Here we have a function which does a whole lot of nothing. The pass keyword is required so that the language parser can know where the scope of the function begins and ends; again, this is due to the leading whitespace rules.

def jump(timesToJump = 3):
    "Jump however many times"
    pass

Here we have a function with default parameters, just as in C++.

4.4.5 Control Statements

Like any other programming language, Python has its share of control statements, although it does not have quite the same number of control statements as C++. Python supports if-then-else statements and for loops. If-then-else statements look like the following:

if x < 0:
    print "x is less than zero"
elif x > 0:
    print "x is greater than zero"
else:
    print "x is zero"

Notice that in C++, we would use "else if" and not "elif". Again, the leading whitespace rules dictate which statements belong in which branch.

if x < 0:
    print "x is less than zero"
    if x < -10:
        print "x is less than -10"
    else:
        print "x is less than zero but greater than -10"

Note how the leading whitespace can make statements which would require braces in C++ easier to write. The leading whitespace can cause problems in other places, but here it is useful to resolve a common ambiguity.

For loop statements are somewhat different than what we are familiar with in C++. They iterate over a range specified, or over all of the elements in a list.

hats = ['top hat', 'cowboy hat', 'baseball cap']
for hat in hats:
    print "I like my", hat

Above we have a list of hats, and we iterate across every element in the list.

print "Counting to 10"
for i in range(1, 10):
    print i

Here we are counting from 1 to 10 (range excludes its end value, so this prints 1 through 9) and printing it out. We also can iterate over a range of numbers with a custom step.

print "Counting to 10 by 2"
for i in range(2, 10, 2):
    print i

The range function also allows us to modify our increment. Here we are counting from 2 to 10 by 2s.

hats = ['top hat', 'cowboy hat', 'baseball cap']
for hat in hats[:]:
    print "I like my", hat
    hats.insert(0, hat)

Normally, adding to the container you are iterating over is a bad idea, since it can invalidate your iteration. The slice operator [:], however, makes it easy to make a copy of the list in place, and iterate over the copy while inserting into the original.

print "Counting to 100 printing only by 5"
for i in range(0, 200):
    if i % 5 == 0:
        print i
    else:
        continue
    if i == 100:
        break
    else:
        pass

Here we are counting up to 200 one at a time. If our count is not a multiple of 5, we continue. If we reach 100, we break out of the loop. If our count is not 100, we do nothing. Again we have a use of the pass keyword, not just in an empty function declaration. It was not strictly required here, since an else statement was not needed, but it is used to demonstrate that pass can be used anywhere.

4.4.6 Importing Packages

The only remaining bits about Python which are necessary to know at this stage concern packages. Packages are collections of functionality that can be brought into the application via an import statement (very much like a C++ library). All of the functionality exposed to Python in our case is contained in a package called the GI_AISDK package. It is imported using the following line at the top of the file:

from GI_AISDK import *

This line imports everything from the GI_AISDK package. It is possible to replace * with the specific things you wish to import, but for the purposes of this demo, you will want to import everything.

4.4.7 Embedding Python

Python provides a C API for embedding the Python interpreter into your own applications. This is not as difficult as it sounds, and much of the work of abstracting it into a system you can use has already been done. In the State Machine Demo, Python has been encapsulated such that you can define a C++ class, expose it to Python, derive a Python type from it, create an instance of the Python derived type from within the C++ application, and then call its methods polymorphically from C++. This was no mean feat incidentally; it took many long nights and a good deal of assistance from the developers at Boost.Python to get the system working. But at the end of the day, we basically have a seamless C++ to Python integration ready for use in your own applications.
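The payoff of that embedding — a script-defined subclass whose override is reached through the base interface — can be previewed in pure Python. In this stand-in illustration the base class is written in Python; in the real system it is a C++ class exposed through Boost.Python:

```python
class Action:
    """Stand-in for a C++-exposed base class."""
    def execute(self):
        return "base behavior"

class ScriptedAction(Action):
    """What a designer's script might supply."""
    def execute(self):
        return "scripted behavior"

def host_run(action):
    # The host (C++ in the real system) only knows the Action interface,
    # yet the derived method is the one that runs -- Python methods are
    # always virtual.
    return action.execute()
```

This is exactly the shape of the scripted actions and transitions we will plug into the state machine: the engine drives the base interface, and the script decides what actually happens.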

Boost.Python: Embedding Python using Templates

Boost.Python provides a software layer which allows us to do two primary things:

1. Expose our C++ classes to Python so that Python classes can be derived from them.
2. Write and use Python-like language in C++ to do work, including instantiating Python types from C++, extracting values from those types, and calling methods on C++ types that have been exposed.

In the next section we will take a look at how some of the most common things can be accomplished using Boost.Python. While you will likely have to do little to expose your own classes if you are just using the framework provided, it is highly recommended that you visit the Boost.Python website (http://www.boost.org/libs/python/doc/index.html) and read over the tutorials and documentation.

Making a Module

First, the Boost.Python library headers must be included.

#include <boost/python.hpp>
#include <boost/python/module.hpp>
#include <boost/python/def.hpp>
#include <boost/python/class.hpp>

BOOST_PYTHON_MODULE(GI_AISDK)
{
    // class and function exposures go here
}

As seen above, making a module is very straightforward. A module is a set of one or more classes or functions. It is essentially our core Python compilation unit (much like the combination of a .h file and a .cpp file in C++). The BOOST_PYTHON_MODULE macro will build a new module of the name provided. Notice that there are no quotes around the name; this must be a fully qualified valid variable name.

Exposing a Function

Exposing a function from C++ to Python using Boost.Python is fairly easy if the function you are exposing is not terribly complex.

int someFunction(int);

Assume we have some function as described above. All we need do is add the following line in our Python module's scope.

def("some_function", someFunction);

This exposes the C++ function someFunction as the Python function some_function.
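Conceptually, BOOST_PYTHON_MODULE produces a Python module object whose namespace holds the exposed callables, registered so that scripts can import it by name. A pure-Python sketch of that idea (the doubling function here is an arbitrary stand-in for the C++ someFunction, not part of the real SDK):

```python
import sys
import types

# stand-in for the C++ someFunction we want to expose
def some_function(x):
    return x * 2

# build a module object and place the exposed callable in its namespace,
# roughly what BOOST_PYTHON_MODULE(GI_AISDK) plus def(...) accomplish
gi_aisdk = types.ModuleType("GI_AISDK")
gi_aisdk.some_function = some_function

# register it with the interpreter so scripts can import it by name
sys.modules["GI_AISDK"] = gi_aisdk

from GI_AISDK import some_function as exposed
```

Once registered, any script the engine runs can write "from GI_AISDK import *" exactly as our demo scripts do.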

Exposing a Class

Exposing simple C++ classes is also very straightforward, until you start working with complex virtual functions and their ilk.

class HelloWorld
{
public:
    HelloWorld(std::string personToGreet);

    void setPersonToGreet(std::string person)
    {
        mPersonToGreet = person;
    }

    void greetPerson(void)
    {
        cout << "Hello World and " << mPersonToGreet << endl;
    }

protected:
    std::string mPersonToGreet;
};

Let us assume we have the simple class above. We could expose this class as shown.

class_<HelloWorld>("HelloWorld", init<std::string>())
    .def("set_person_to_greet", &HelloWorld::setPersonToGreet)
    .def("greet_person", &HelloWorld::greetPerson);

The class_<> template exposes the type given as the template parameter. Its constructor parameters are the name of the class exposed to Python, as well as the type of initialization function which is required to initialize the object properly. We then list each method in the class we wish to expose, and give the def() method the name we would like to expose the method as. The def() method returns a reference to the class_<> object again, so we can chain the def calls.

Although this may seem quite simple, there will be complications once you start using references and pointers. This becomes complex once virtual methods and so on are added. Boost.Python has concepts for internal references, copying const references, custodians and wards for ref counting, and a multitude of actions which occur once you start passing around addresses to memory. We will get you started with some simple classes, but it is highly advised that you seek out the Boost.Python documentation and read through it to get a better understanding of the more complex cases, since they have covered these features in great depth.
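From the Python side, the exposed class is used like any ordinary Python class, under the snake_case names chosen in the def() calls. A pure-Python stand-in with the same interface (hypothetical; it returns the greeting instead of printing it, and is not the real exposed type) behaves like this:

```python
# Pure-Python stand-in mirroring the interface exposed by
# class_<HelloWorld>(...) above; the real type lives in GI_AISDK.
class HelloWorld(object):
    def __init__(self, person_to_greet):
        self._person_to_greet = person_to_greet

    def set_person_to_greet(self, person):
        self._person_to_greet = person

    def greet_person(self):
        # the real greetPerson() prints to cout; returning makes this checkable
        return "Hello World and %s" % self._person_to_greet

hw = HelloWorld("Susan")
first = hw.greet_person()
hw.set_person_to_greet("Adam")
second = hw.greet_person()
```

The init<std::string>() specification is what makes the one-argument constructor call legal from Python.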

4.5 Our Scripting Engine

Figure 4.11 [class diagram: the cScriptEngine interface is implemented by cPythonScriptEngine; cAction and cTransition are specialized by cScriptedAction and cScriptedTransition, which are in turn specialized by cPythonScriptedAction / cPythonScriptedActionWrap and cPythonScriptedTransition / cPythonScriptedTransitionWrap]

As shown in Figure 4.11, our scripting engine implements specialized Python scripted types of the cAction and cTransition classes. The "Wrap" suffixed classes are classes that are necessary for Boost.Python to properly handle virtual methods on the C++ side. For more information on why that is necessary, please refer to the Boost.Python documentation. The scripting engine also has a cScriptEngine interface, which is implemented for Python by the cPythonScriptEngine. Let us delve into the implementations of these classes in more detail. First, we will discuss how we actually exposed the classes, and then we will talk about the classes themselves.

BOOST_PYTHON_MODULE(GI_AISDK)
{
    // Expose the State class to Python
    class_<cState, cStatePtr>("State", init<float, std::string>())
        .add_property("value", &cState::Value, &cState::SetValue)
        .add_property("initial_value", &cState::InitialValue,
                      &cState::SetInitialValue);

    // Expose the Action class to Python
    class_<cAction>("Action", init<cStatePtr>())
        .def("state", &cAction::State,
             return_value_policy<reference_existing_object>());

    // Expose a base class new scripted actions should derive from
    class_<cScriptedAction, cPythonScriptedActionWrap,
           boost::noncopyable>("PythonScriptedAction", init<cStatePtr>());

    // Expose the Transition class to Python
    class_<cTransition>("Transition", init<cStatePtr, cStatePtr>())
        .def("source", &cTransition::Source,
             return_value_policy<reference_existing_object>())
        .def("target", &cTransition::Target,
             return_value_policy<reference_existing_object>());

    // Expose a base class new scripted transitions should derive from
    class_<cScriptedTransition, cPythonScriptedTransitionWrap,
           boost::noncopyable>("PythonScriptedTransition",
                               init<cStatePtr, cStatePtr>());
}

Here we have the actual exposure of the C++ types to Python for our module using Boost.Python. Let us step through it.

BOOST_PYTHON_MODULE(GI_AISDK)

First we declare a new module called GI_AISDK.

// Expose the State class to Python
class_<cState, cStatePtr>("State", init<float, std::string>())

Next we expose the cState class. We specify that we want the class exposed as type "State" to Python, inform the class_<> template that it will be storing the cState objects internally as smart pointers, and specify that it has an initialization that requires a float and an STL string.

.add_property("value", &cState::Value, &cState::SetValue)

We then add a property for the Value of the state to the exposure, and expose it as "value" on the Python side. We supply the accessors to Python for getting and setting this property. This allows us to use the value member as an actual data member rather than having to use accessors on the Python side.

.add_property("initial_value", &cState::InitialValue, &cState::SetInitialValue)

We also add a property for the Initial Value of the state, and expose it as "initial_value" to Python using the same mechanism.

class_<cAction>("Action", init<cStatePtr>())

Next we expose the cAction class as "Action", and specify that it requires a cState smart pointer for its initialization method.

.def("state", &cAction::State, return_value_policy<reference_existing_object>())

We then provide an accessor to the contained cState object, exposed as "state" to Python, and indicate that it is to return the existing object. See the Boost.Python documentation for more details.

// Expose a base class new scripted actions should derive from
class_<cScriptedAction, cPythonScriptedActionWrap, boost::noncopyable>("PythonScriptedAction", init<cStatePtr>())

Now we expose the cScriptedAction class. We expose this type as "PythonScriptedAction" to Python, but specify that it will contain the cPythonScriptedActionWrap class to allow for the virtual functions, and specify that it requires a cState smart pointer for its initialization method. We also specify that this class is not able to be copied, since it is a pure virtual base class. See the Boost.Python documentation for why this step is required.

Again. and indicate that it is to return the existing object. see the Boost. and specify it will store a cPythonScriptedTransitionWrap class to allow for the virtual methods. We expose it as “PythonScriptedTransition” to Python. We expose it as “Transition” to Python. Again. // Expose a base class new scripted transitions should derive from class_<cScriptedTransition. we expose cScriptedTransition. &cTransition::Source.Python documentation for further details on why we do this. boost::noncopyable >("PythonScriptedTransition". cStatePtr >()) Lastly. cPythonScriptedTransitionWrap.. &cTransition::Target. Otherwise.Python documentation to get a firm grip on what we just reviewed and to help fill in any missing pieces that you are not yet comfortable with. It is highly recommended that you look through the Boost. &cAction::State. cStatePtr >()). and inform Python that the init method will require two cState smart pointers. return_value_policy<reference_existing_object>()) Finally. you run the risk of becoming lost and confused. if you try to expose your own classes. not to mention frustrated. Next. and specify that it requires two cState smart pointers for its initialization. return_value_policy<reference_existing_object>()) We then provide an accessor to the contained cState object exposed as “state” to Python. The Script Engine Class class cScriptEngine { public: virtual cScriptEngine(void) {} ~cScriptEngine(void) {} 169 . init<cStatePtr. we expose the cTransition class. we provide an accessor to the target state.def("source". class_<cTransition>("Transition". . We inform the system to use the existing object reference rather than any other tricky business. This class is non-copyable since it is a pure virtual class.def("target". we reference the existing object. return_value_policy<reference_existing_object>()) We expose the source cState object as “source” to Python using the accessor. . init<cStatePtr.def("state". exposed as “target” to Python.

    virtual bool Initialize(void) = 0;
    virtual bool Finalize(void) = 0;
    virtual bool CompileScript(const string &scriptName, const string &script) = 0;
};

The cScriptEngine class provides the interface to the scripting engine. It also provides a means by which to parse and compile a script for use in the runtime. The constructor provides default initialization for the script engine, while the destructor is virtual to allow for correct polymorphic destruction of derived script engine types.

virtual bool Initialize(void) = 0;

The Initialize method provides a means for derived script engines to properly initialize their subsystems.

virtual bool Finalize(void) = 0;

The Finalize method provides a means for derived script engines to properly shut down their subsystems and free their resources, providing a time for final cleanup.

virtual bool CompileScript(const string &scriptName, const string &script) = 0;

The CompileScript method provides a means to parse and compile the script on the derived script engine, and make it usable for runtime processing. Returning true means successful compilation occurred, while false indicates a script error of some sort.

class cPythonScriptEngine : public cScriptEngine
{
public:
    virtual bool Initialize(void);
    virtual bool Finalize(void);
    virtual bool CompileScript(const string &scriptName, const string &script);

    object GetObject(const string &typeName);
    string GetErrString(void);
    object GetLineNumber(object traceBack);

    static cPythonScriptEngine &Instance(void);

protected:
    cPythonScriptEngine(void);
    virtual ~cPythonScriptEngine(void);

    handle<> mMainModule;
    handle<> mMainNamespace;
    handle<> mBuiltinsNamespace;
};

The cPythonScriptEngine implements the cScriptEngine interface, and encapsulates the acts of starting up and shutting down the scripting engine. It also provides access for obtaining error messages from runtime exceptions, as well as compiling the script for use at runtime. Lastly, it provides a means by which to obtain an object of a specified type from the Python namespace. The constructor and destructor do nothing more than the base class, but they are protected, since the cPythonScriptEngine is a singleton instance class. Let us look at this class in more depth.

bool cPythonScriptEngine::Initialize(void)
{
    // add custom module
    PyImport_AppendInittab("GI_AISDK", initGI_AISDK);

    // initialize the python interpreter
    Py_Initialize();

    // obtain the __main__ namespace
    mMainModule = handle<>(borrowed( PyImport_AddModule("__main__") ));
    mMainNamespace = handle<>(borrowed( PyModule_GetDict(mMainModule.get()) ));

    // evaluate the python builtins
    mBuiltinsNamespace = handle<>(borrowed( PyEval_GetBuiltins() ));

    return true;
}

The Initialize method initializes the Python scripting engine, and properly sets up the built-ins and interpreter. Let us take a closer look.

PyImport_AppendInittab("GI_AISDK", initGI_AISDK);

This adds our custom module to the initialization phase of the Python engine. This is needed in order for us to use this module at runtime.

Py_Initialize();

The Py_Initialize method does the primary initialization of Python's main dictionary and modules.

mMainModule = handle<>(borrowed( PyImport_AddModule("__main__") ));

Here we obtain a handle to the main module. This is equivalent to the main() entry point in a C or C++ program.

mMainNamespace = handle<>(borrowed( PyModule_GetDict(mMainModule.get()) ));

Here we obtain the namespace for this module. All of our newly created objects will be created in this namespace.

mBuiltinsNamespace = handle<>(borrowed( PyEval_GetBuiltins() ));

Here we evaluate the built-ins namespace; this allows us to use built-in methods and types. The Boost.Python handle<> containers throw exceptions if they get NULL pointers, which happens if initialization fails. We return success.

bool cPythonScriptEngine::Finalize(void)
{
    // reset our handles
    mMainModule.reset();
    mMainNamespace.reset();
    mBuiltinsNamespace.reset();

    // shutdown the python interpreter
    Py_Finalize();

    return(true);
}

The Finalize method shuts down the Python interpreter. Let us look at this more closely.

mMainModule.reset();
mMainNamespace.reset();
mBuiltinsNamespace.reset();

The handle<> objects from Boost.Python are smart pointers, so resetting them relinquishes their reference. This will release the memory if this is the last object holding a reference. Thus, we free our main module, our main namespace, and our built-ins.

Lastly, we call Py_Finalize, which shuts down the interpreter, frees up all of its resources, and does a final garbage collection. We return success.

bool cPythonScriptEngine::CompileScript(const string &scriptName, const string &script)
{
    // Parse the script
    node *pythonNode = PyParser_SimpleParseString(script.c_str(), Py_file_input);

    // Dump any errors that may have occurred
    if (PyErr_Occurred())
        return(false);

    // if the script parsed properly, compile the code
    // and inject it into the dictionary
    PyCodeObject *codeObject = PyNode_CompileFlags(pythonNode,
        const_cast<char *>(scriptName.c_str()), NULL);

    // free our parsed script node
    PyNode_Free(pythonNode);

    if (!codeObject)
    {
        if (PyErr_Occurred())
            return(false);
    }

    // evaluate the code so the new class can find its way into the dictionary
    PyObject *evaluateResult = PyEval_EvalCode(codeObject,
        mMainNamespace.get(), mMainNamespace.get());

    Py_DECREF(codeObject);

    if (!evaluateResult)
    {
        if (PyErr_Occurred())
            return(false);
    }

    Py_DECREF(evaluateResult);

    return(true);
}

The CompileScript method parses and compiles a Python script, and inserts the resulting object code into the main dictionary for runtime use. Let us take a look at what it is doing.

node *pythonNode = PyParser_SimpleParseString(script.c_str(), Py_file_input);

First, we have Python parse the script string using PyParser_SimpleParseString(). This will return a node pointer, which we will use to compile the script.

if (PyErr_Occurred())
    return(false);

Next, we check if an error occurred. If so, the script contains bad syntax and we return an error to that effect.

PyCodeObject *codeObject = PyNode_CompileFlags(pythonNode,
    const_cast<char *>(scriptName.c_str()), NULL);

If the script parsed properly, we compile it into object code using PyNode_CompileFlags. This will return a PyCodeObject pointer if successful, which can be evaluated to insert it into the namespace.

PyNode_Free(pythonNode);

Now that we are done with our parse node, we free it.

if (!codeObject)
{
    if (PyErr_Occurred())
        return(false);
}

If we do not have a code object, something was wrong with the code and we return failure.

PyObject *evaluateResult = PyEval_EvalCode(codeObject,
    mMainNamespace.get(), mMainNamespace.get());

Now that we have a PyCodeObject pointer to our compiled script, we evaluate it. This is effectively executing the code, which in Python will insert object declarations into the namespace if they exist. This returns a success result.

Py_DECREF(codeObject);

Now that we are done with our PyCodeObject pointer, we remove our reference to free it.

If our evaluation result is NULL, something went wrong. We check for an error code and return if an error occurred.

Py_DECREF(evaluateResult);

Now that we are done with our evaluation result, we remove our reference, and it should free itself. Finally, we return success.

object cPythonScriptEngine::GetObject(const string &typeName)
{
    dict main_namespace(mMainNamespace);
    return main_namespace[typeName];
}

GetObject is a convenience method for obtaining an object from the Python namespace. First we obtain the main namespace as a dictionary object. This lets us look up objects like a map. We then look up our object by name in the dictionary, and return it.

string GetErrString(void);
object GetLineNumber(object traceBack);

The GetErrString and GetLineNumber methods look at the exception stack, and parse out the required information to give a useful message about the runtime exception that just occurred. This is about 100 lines of very unattractive Python innards code which is not necessary to know, but if you do feel so inclined to delve into the exception handling, feel free to study the source code.

static cPythonScriptEngine &Instance(void);

The last method of note is the Instance method, which simply returns a static instance of the class.
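The CompileScript/GetObject round trip has a convenient pure-Python analogue: the built-in compile() stands in for the parse and compile calls, exec() stands in for PyEval_EvalCode (injecting declarations into a namespace dictionary), and a plain dictionary lookup stands in for GetObject. A minimal sketch (not the engine itself, just the same flow in script form):

```python
main_namespace = {}  # stands in for the __main__ module dictionary

def compile_script(script_name, script):
    # parse + compile, as PyParser_SimpleParseString / PyNode_CompileFlags do
    try:
        code = compile(script, script_name, "exec")
    except SyntaxError:
        return False  # bad syntax: report a script error
    # evaluate so class declarations land in the namespace (PyEval_EvalCode)
    exec(code, main_namespace)
    return True

def get_object(type_name):
    # GetObject: look the type up in the namespace like a map
    return main_namespace[type_name]

ok = compile_script("demo", "class Greeter:\n    label = 'scripted'\n")
greeter_type = get_object("Greeter")  # fetch the newly declared class
instance = greeter_type()             # instantiate it, as scriptType(state) does
```

Evaluating the compiled code is what makes the class visible by name afterwards; a script that only defines a class produces no output, it simply populates the dictionary.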

The Scripted Action Class

class cPythonScriptedActionWrap : public cScriptedAction
{
public:
    cPythonScriptedActionWrap(PyObject *self, cStatePtr state) :
        cScriptedAction(state),
        mSelf(self) {}

    virtual ~cPythonScriptedActionWrap(void) { mSelf = NULL; }

    virtual string Label(void) { return("Python Scripted Action"); }

    void Execute(void);

protected:
    PyObject *mSelf;
};

The cPythonScriptedActionWrap class is a wrapper class needed for Boost.Python to allow polymorphic calling of Python derived virtual functions. It is a bit tricky, so look at the Boost.Python documentation for more details on why this is necessary. Let us take a look at it.

cPythonScriptedActionWrap(PyObject *self, cStatePtr state) :
    cScriptedAction(state),
    mSelf(self) {}

The constructor takes a PyObject pointer, which Boost.Python will provide when building the Python object, as well as the standard cState smart pointer that the base class needs.

virtual ~cPythonScriptedActionWrap(void) { mSelf = NULL; }

The destructor simply NULLs its self pointer.

void cPythonScriptedActionWrap::Execute(void)
{
    call_method<void>(mSelf, "execute");
}

The Execute method uses the Boost.Python call_method<> template function, which performs all of the magic of calling a specific method on a Python object. Note that it is calling the "execute" method, which should define the scripted behavior for the derived type's execution.

class cPythonScriptedAction : public cScriptedAction
{
public:
    cPythonScriptedAction(cStatePtr state,
                          const string &scriptName,
                          const string &script);

    virtual ~cPythonScriptedAction(void) { }

    virtual string Label(void) { return("Python Scripted Action"); }

    void Execute(void);

    const string &ScriptName(void) const { return(mScriptName); }
    const string &Script(void) const { return(mScript); }

protected:
    object mPythonInstance;
    string mScriptName;
    string mScript;
};

The cPythonScriptedAction is our wrapper for Boost.Python's wrapper. It builds a new derived Python type given by the script name, stores off its instance, and passes along the function calls as required. Let us take a closer look.

cPythonScriptedAction::cPythonScriptedAction(cStatePtr state,
                                             const string &scriptName,
                                             const string &script) :
    cScriptedAction(state),
    mScriptName(scriptName),
    mScript(script)
{
    object scriptType = cPythonScriptEngine::Instance().GetObject(scriptName);
    mPythonInstance = scriptType(state);
}

The constructor takes the state pointer required by the base class, as well as the name of the script and the script source itself. Here, we use our GetObject method to obtain the object type from the namespace. We then construct a new object of that type, passing in the state parameter it needs to build itself. We store off this new object as our instance so we can make calls to it later.

void cPythonScriptedAction::Execute(void)
{
    call_method<void>(mPythonInstance.ptr(), "execute");
}

The Execute method simply uses the call_method<> template function, exactly as the wrapper object did, only on the Python instance object.
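On the Python side, what call_method<> performs corresponds to a getattr-based dispatch: fetch the named attribute on the instance and call it, letting Python resolve the derived override. A sketch with a hypothetical stub action:

```python
def call_method(instance, method_name, *args):
    # analogue of call_method<ret>(obj, "name", ...): look up the method
    # by name on the instance and invoke it
    return getattr(instance, method_name)(*args)

class StubAction(object):
    """Hypothetical stand-in for a Python-derived scripted action."""
    def __init__(self):
        self.executed = 0

    def execute(self):
        self.executed += 1

action = StubAction()
call_method(action, "execute")
call_method(action, "execute")
```

Because the lookup happens by name at call time, the C++ side never needs to know which derived Python class it is actually talking to.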

The Scripted Transition Class

class cPythonScriptedTransitionWrap : public cScriptedTransition
{
public:
    cPythonScriptedTransitionWrap(PyObject *self,
                                  cStatePtr source,
                                  cStatePtr target) :
        cScriptedTransition(source, target),
        mSelf(self) {}

    virtual ~cPythonScriptedTransitionWrap(void) { mSelf = NULL; }

    virtual string Label(void) { return("Python Scripted Transition"); }

    bool ShouldTransition(void);

protected:
    PyObject *mSelf;
};

Just as with the derived scripted action class, the derived scripted transition class requires a wrapper class to provide for polymorphic calling of the derived types in Python. The cPythonScriptedTransitionWrap does exactly that.

cPythonScriptedTransitionWrap(PyObject *self, cStatePtr source, cStatePtr target) :
    cScriptedTransition(source, target),
    mSelf(self) {}

The class requires a PyObject, which is provided by Boost.Python when the object is constructed, as well as the two cState smart pointers required by the base class.

virtual ~cPythonScriptedTransitionWrap(void) { mSelf = NULL; }

The destructor simply NULLs the mSelf PyObject pointer.

bool cPythonScriptedTransitionWrap::ShouldTransition(void)
{
    return(call_method<bool>(mSelf, "should_transition"));
}

Like the cPythonScriptedActionWrap class, this class uses the call_method<> template function to call the method on the Python object and automatically extracts the return value. This value is then returned to the caller.

class cPythonScriptedTransition : public cScriptedTransition
{
public:
    cPythonScriptedTransition(cStatePtr source,
                              cStatePtr target,
                              const string &scriptName,
                              const string &script);

    virtual ~cPythonScriptedTransition(void) { }

    virtual string Label(void) { return("Python Scripted Transition"); }

    bool ShouldTransition(void);

    const string &ScriptName(void) const { return(mScriptName); }
    const string &Script(void) const { return(mScript); }

protected:
    object mPythonInstance;
    string mScriptName;
    string mScript;
};

Like cPythonScriptedAction, cPythonScriptedTransition provides a wrapper for the Wrap class, and handles passing through the function calls. It allows us to store the script name and the script itself for ease of editing.

cPythonScriptedTransition::cPythonScriptedTransition(cStatePtr source,
                                                     cStatePtr target,
                                                     const string &scriptName,
                                                     const string &script) :
    cScriptedTransition(source, target),
    mScriptName(scriptName),
    mScript(script)
{
    object scriptType = cPythonScriptEngine::Instance().GetObject(scriptName);
    mPythonInstance = scriptType(source, target);
}

The constructor takes a cState smart pointer for the source and the target, and passes them to the base class. It also takes the name of the script and the script string itself. Here we get the object type from the main namespace using the script name, build a new instance of it, passing in the required arguments, and store off the instance it returns for later use.

bool cPythonScriptedTransition::ShouldTransition(void)
{
    return(call_method<bool>(mPythonInstance.ptr(), "should_transition"));
}

Just like the wrapper class, we use the call_method<> template function to call our method on our derived Python class, extract the result, and return it to the caller.

Some Examples

Now that we have looked at the internals of Python integration and the classes we will use to accomplish this in our project, let us review some examples of Scripted Actions and Scripted Transitions to get a better feel for how everything fits together.

from GI_AISDK import *

class AddFibonacciAction(PythonScriptedAction):
    n1 = 1
    n2 = 0

    def execute(self):
        # add up last time, and time before
        self.state().value = self.n2 + self.n1
        self.n2 = self.n1
        self.n1 = self.state().value

Here we have a scripted action that increases the state's value via the Fibonacci sequence (defined as Fn = Fn−2 + Fn−1). Let us take a closer look at the script.

from GI_AISDK import *

First, we import our module.

class AddFibonacciAction(PythonScriptedAction):

We then define a new action class named AddFibonacciAction, which is derived from PythonScriptedAction.

n1 = 1
n2 = 0

We create two data members, n1 and n2, which will store local data for our calculations.

def execute(self):

We define our execute method, which will be called by the C++ code.

self.state().value = self.n2 + self.n1

First we get our state's current value, and make it n2 + n1.

self.n2 = self.n1

We then update n2 to be what n1 was, so that it can be added in the next iteration.

self.n1 = self.state().value

Finally, we set n1 to be what the new state's value is.

Now let us try a scripted transition.

from GI_AISDK import *
from random import *

class RandomTransition(PythonScriptedTransition):
    def should_transition(self):
        if uniform(0.0, 10.0) > 5:
            return True
        return False

Here we have a transition that decides to transition if a random number generated is greater than 5. Let us take a closer look.

from GI_AISDK import *

First we import our GI_AISDK module fully.

from random import *

We then import the random module fully.

class RandomTransition(PythonScriptedTransition):

Here we define a new transition class called RandomTransition, which is derived from PythonScriptedTransition.

def should_transition(self):

We define the should_transition method, which is called by our C++ class.

if uniform(0.0, 10.0) > 5:

where one entity tells the rest of the group what to do. or finite state machines. Whether you choose to use Python or some other language. we do not. Scripting adds a lot of flexibility to your application and ultimately allows you to build some very sophisticated AI. Conclusion The most common types of decision making are: Decision Trees – Decision trees are nested if-then-else statements. and rely upon external sources to make their decisions. keep track of their current decision. The squad behavior system essentially employs the concept of one entity telling fellow entities how to behave. we transition. we strongly recommend that you spend some time browsing the Python and Boost. we discussed state machines and their power to handle decision making tasks under various conditions. Rule Base – Rule base decision systems evaluate a series of rule criteria and score them. The rule which is given the highest score determines the decision. In this chapter. The demo for this chapter will hopefully make all of this much clearer to you when you see everything in one place. They contain no state information. or decision trees or rule base decision systems to determine what they want to do. At this point you should have a pretty good high level idea of how Python integrates into an application. hopefully you now have a better insight into how these systems work and how you might use them to accomplish your AI objectives. Once again. The entities themselves can use state machines. and try to establish a comfort level with the concepts introduced here in this chapter.Python documentation. and only evaluate the questions necessary to make a different decision. which are evaluated for every time step in order to reach a decision. We found that state machines can be used for such things as: Animation Systems Game State Save File Systems Artificial Intelligence We also discovered how such a system could be implemented. 
and examined a specific implementation of one of these systems in our chapter demonstration. otherwise. State Machines – State machines. Squad Behaviors – Squad behaviors are basically a complex form of flocking mixed with decision making.return True return False If the random number generated by the uniform distribution random generator is greater than 5. look at some tutorials on the web. 182 .
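The two example scripts can also be exercised without the C++ host by substituting minimal pure-Python stand-ins for the exposed base classes. State, PythonScriptedAction, and PythonScriptedTransition below are hypothetical stand-ins for illustration, not the real GI_AISDK types:

```python
from random import uniform

class State(object):                     # stand-in for the exposed State type
    def __init__(self, value=0.0):
        self.value = value

class PythonScriptedAction(object):      # stand-in base class
    def __init__(self, state):
        self._state = state
    def state(self):
        return self._state

class PythonScriptedTransition(object):  # stand-in base class
    pass

class AddFibonacciAction(PythonScriptedAction):
    n1 = 1
    n2 = 0
    def execute(self):
        # add up last time, and time before
        self.state().value = self.n2 + self.n1
        self.n2 = self.n1
        self.n1 = self.state().value

class RandomTransition(PythonScriptedTransition):
    def should_transition(self):
        return uniform(0.0, 10.0) > 5

action = AddFibonacciAction(State())
for _ in range(6):
    action.execute()  # value walks the Fibonacci sequence: 1, 2, 3, 5, 8, 13

# roughly half of the transition checks should fire
fired = sum(1 for _ in range(1000) if RandomTransition().should_transition())
```

This kind of stand-in harness is also a handy way to debug new scripts before handing them to the engine.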

We also discussed the topic of Scripting Engines, and how they are useful in games. We discussed the types of scripting systems commonly used, and delved into embedding such a system in our demo. We learned about Python, and had a quick and dirty crash course in how to write it. We learned about Boost.Python, and how it can be used to embed Python in our games for easy extensibility. We talked about how we embedded Python in our chapter demo, and the implementations we used for that system. Lastly, we went over a couple of examples of a scripted action and a scripted transition, so we could get a better understanding for how we might want to extend our demo using scripting.

In the next chapter, we are going to begin pulling together all of our AI systems into a single SDK that you can use in your games. As part of that discussion we will revisit a pathfinding concept introduced very briefly at the start of the course – waypoint networks. We chose waypoint networks as the navigation dataset in this case because they are very easy to work with in any 3D environment, since they are not constrained by grids (making them very popular in game development shops). We will explore these networks in more detail. One of our main goals will be to integrate our waypoint data with the decision making concepts we learned in this chapter. That is, we will use our waypoints to store information that can trigger behaviors in our AI entities upon reaching the waypoint. A very handy GILES™ Waypoint Network Generator plug-in has been included with this course to facilitate all of this. We will be using it all again throughout the remainder of this course. So before moving on, please make sure that you understand the material and source code introduced in this chapter.


Chapter 5: Waypoint Networks

Overview

So far, we have talked about pathfinding algorithms, decision systems, flocking algorithms, and scripting. In this chapter we will start bringing all of these ideas together to demonstrate how you might actually use each of these systems in an integrated environment.

We will start by talking about waypoint networks, a specific implementation of a pathfinding graph we briefly touched on earlier. We will talk about how we can attach data to these networks such that the decision making system (a state machine such as one we discussed previously) can make decisions based on the data in the network. We will then talk about squads and squad leaders, and how we can implement their state machines so they cooperate with one another.

In this chapter we answer the following questions:

• What are waypoint networks?
• How can we attach data to waypoints, so we can use it to make decisions?
• What are the common methods of implementing squad communication?
• How can we bring all of this together?

5.1 Waypoint Networks

In Chapter Two we talked very briefly about pathfinding on non-gridded maps. We mentioned that one of the more popular methods of dealing with these sorts of worlds is called waypoint networks (or visibility points). In this chapter, we are going to discuss this method of dealing with continuous worlds in depth. We will talk about the architecture of the network, methods to traverse the network, and some additional considerations that come up when using such a system for games.

Figure 5.1 should look familiar, since it is the same one we talked about previously when we introduced waypoint networks. The red polygons are obstacles, the small blue dots are waypoints, and the blue lines between them are the edges of the network. If you recall some of the terminology we used during the early part of this course, a waypoint network is a graph, and the waypoints are just nodes in the graph. Ultimately, a waypoint network is just a collection of waypoints and the edges between them.

5.1.1 Waypoints

Let us begin by talking about the waypoints themselves, since they are obviously the fundamental element in a waypoint network. So what are some important characteristics of a waypoint? Well, first of all, it has to have a position in space. It also needs to have a collection of edges to other waypoints that can be reached from it. It should also have a radius, and we will talk about why this is important in just a bit. A waypoint probably should have some sort of identifier (a simple integer ID, a GUID, etc.), although it is not strictly necessary. Other useful information would be an orientation. The orientation is handy if you want to hint to the entities traversing the network that something useful might be found in the direction of this waypoint. For example, you might set up a waypoint that is at the top of a hill, with an excellent vantage over an enemy base. You could set the orientation of this waypoint to face towards the base, so the entities traversing the network could see that it is a good sniping position.

The last thing you might want to attach to a waypoint is general blind data. General blind data is basically game related data that is attached to the waypoint, but the waypoint does not necessarily know, nor care about, what that data is. It is just holding onto it for the game entities to process when they come across it. When an entity runs across the waypoint, it can peer into that blind data (with full knowledge of what is in there), and make some decisions based on it. In our chapter demo, we store a color as blind data. We use this color to set the color of the entity's arrow when we render it.

Some other examples of things you might want to use as blind data on waypoints are:

• Animation trigger data – tells the entity to play a specific animation upon reaching the waypoint
• Wait signal – tells the entity to pause briefly upon reaching the tagged waypoint
• Look around signal – tells the entity to pause and look around for enemies
• Cover – tells the entity that crouching here would provide them with cover from the direction indicated by the orientation of the waypoint
• Defend – tells the entity that this position is a good defensive position
• Danger – tells the entity that this position is particularly dangerous
• Posture – tells the entity that movement from this waypoint should be done using a given posture (crouch, walk, run)

Discrete Simulations in Continuous Worlds

Let us now talk a little about why the radius data member of a waypoint can be helpful. The important thing to remember about continuous worlds in games is that the game is a discrete simulation, not a continuous one. A certain amount of time passes each frame, and you typically update the position of an entity by integrating the velocity of the entity using that time delta (using standard Euler or other numerical integration techniques). Let us assume that your entity is moving at, say, 10 meters/sec, but your game only updates every 33 milliseconds (30Hz, or 30 frames per second). That means your entity moves 0.33 meters (10 m/s * 0.033 s) in a single frame, and it cannot stop anywhere in between.

Take a look at Figure 5.2. If the entity started at the green circle and wanted to get to the red star, stepping frame by frame through the series of red circles, it could repeatedly jump over the star and never reach it. So if your waypoint is a single point in space, the likelihood that your entity will land right on it is pretty slim. But if you give your waypoint a radius, and consider the entity to have reached the goal whenever it is "close enough" to the waypoint, the problem goes away. In Figure 5.2, if the radius of the black circle around the star were used, the entity would have been found to have reached the waypoint after the first step that landed inside it.

The Waypoint Class

Now that we have a good understanding of the principle of the waypoint, let us take a look at the actual class we used to represent the waypoint in our demo.

class cWaypoint
{
public:
    cWaypoint(const D3DXVECTOR3 &pos, const D3DXQUATERNION &orient, float radius);
    virtual ~cWaypoint();

    const tWaypointID &GetID() const { return mID; }

    D3DXVECTOR3 &GetPosition() { return mPosition; }
    const D3DXVECTOR3 &GetPosition() const { return mPosition; }
    void SetPosition(const D3DXVECTOR3 &position) { mPosition = position; }

    D3DXQUATERNION &GetOrientation() { return mOrientation; }
    const D3DXQUATERNION &GetOrientation() const { return mOrientation; }
    void SetOrientation(const D3DXQUATERNION &orientation) { mOrientation = orientation; }

    float GetRadius() const { return mRadius; }
    void SetRadius(float radius) { mRadius = radius; }

    void AllocBlindData(UINT size);
    void FreeBlindData();

    template<class T> void SetBlindData(UINT offset, T data)
    {
        assert(offset <= mBlindDataSize - sizeof(T));
        T *dataPtr = (T*)((char*)mBlindData + offset);
        *dataPtr = data;
    }

    template<class T> void GetBlindData(UINT offset, T &data) const
    {
        assert(offset <= (mBlindDataSize - sizeof(T)));
        T *dataPtr = (T*)((char*)mBlindData + offset);
        data = *dataPtr;
    }

    bool AddEdge(const cNetworkEdge &edge);
    bool RemoveEdge(const cNetworkEdge &edge);
    void ClearEdges();

    tEdgeList &GetEdges() { return mOutgoingEdges; }
    const tEdgeList &GetEdges() const { return mOutgoingEdges; }

    float GetCostForEdge(const cNetworkEdge &edge, const cWaypointNetwork &network) const;

    int Serialize(ofstream &ar);
    int UnSerialize(ifstream &ar);

protected:
    friend class cWaypointNetwork;
    cWaypoint();

private:
    tWaypointID    mID;
    tEdgeList      mOutgoingEdges;
    D3DXVECTOR3    mPosition;
    D3DXQUATERNION mOrientation;
    float          mRadius;
    int            mBlindDataSize;
    void           *mBlindData;
};

There is a lot to take in there, but most of it is accessors. First we will cover the class data members.

Each waypoint has an ID. A tWaypointID in our demo is actually a tGUID class, which is a wrapper class for the Microsoft Windows GUID structure. GUID stands for "Globally Unique Identifier". It is used by Windows for COM object registration, among other things, and the GUID structure stores a 128-bit integer in a specific fashion to serve that purpose. GUIDs look like {D5FEE50A-625B-4b6b-B10B-FAD046F0A729}, and can be generated for us using the GuidGen tool provided with Microsoft Visual Studio, as well as the ::CoCreateGuid() method provided by the Windows API. We will talk more about what these IDs are used for when we discuss the waypoint network class.

Waypoints also have a position in space, along with an orientation. We chose quaternions for the orientations in our demo. Quaternions are a useful way to represent rotation data because they have a low memory footprint, offer smooth interpolation, and solve the problem of gimbal lock. A discussion of how quaternions work is beyond the scope of this course; if the topic interests you, further discussion can be found in the 3D Graphics Programming series and the Game Mathematics course here at Game Institute.

A waypoint also has a list of edges. The tEdgeList type is really just a typedef for an STL vector of network edge classes, which we will discuss shortly. This list of edges is used during the traversal to see which waypoints can reach which other waypoints.

Waypoints also have the aforementioned radius. This value is used to determine if an entity is "close enough" to be considered as having arrived at the waypoint.

Lastly, we have some data members for our blind data. In this implementation, blind data is stored as a block of bytes, which can be accessed using the template functions provided. We will discuss those shortly.

Now that we know what the basic data of the class is, let us discuss the implementations of the nontrivial methods, starting with the edge management methods.

bool cWaypoint::AddEdge(const cNetworkEdge &edge)
{
    // look for the edge; if we find it, don't add it again
    for (tEdgeList::iterator it = mOutgoingEdges.begin(); it != mOutgoingEdges.end(); ++it)
    {
        cNetworkEdge &e = *it;
        if (e == edge)
            return false;
    }

    mOutgoingEdges.push_back(edge);
    return true;
}

The AddEdge method iterates through its list of edges, and if the edge to be added is not found, it is added to the list.

bool cWaypoint::RemoveEdge(const cNetworkEdge &edge)
{
    for (tEdgeList::iterator it = mOutgoingEdges.begin(); it != mOutgoingEdges.end(); ++it)
    {
        cNetworkEdge &e = *it;
        if (e == edge)
        {
            mOutgoingEdges.erase(it);
            return true;
        }
    }
    return false;
}

The RemoveEdge method iterates through its list of edges, and if the edge is found, removes it from the list.

float cWaypoint::GetCostForEdge(const cNetworkEdge &edge, const cWaypointNetwork &network) const
{
    cWaypoint *dest = network.FindWaypoint(edge.GetDestination());
    if (!dest)
        return 0.0f;

    D3DXVECTOR3 vec = GetPosition() - dest->GetPosition();
    float distance = D3DXVec3Length(&vec);

    return distance * edge.GetCostModifier();
}

This method returns the cost for a given edge. First, the waypoint obtains the destination waypoint of the edge from the network. It then computes the distance from itself to the destination waypoint. This distance is the base cost of the edge. The edge also has a cost modifier associated with it, which is used to scale the base cost of the edge; the method returns the distance multiplied by that modifier.

template<class T> void GetBlindData(UINT offset, T &data) const
{
    assert(offset <= (mBlindDataSize - sizeof(T)));
    T *dataPtr = (T*)((char*)mBlindData + offset);
    data = *dataPtr;
}

This template method retrieves a value from the blind data block. It requires the caller to know the byte offset into the data block so that you can locate your data, as well as the type of data you want to extract. It uses this information to get the data out of the block for you by adding the byte offset to the data block pointer and casting it to your data type. The usage pattern looks like this:

COLORREF color;
someWaypoint->GetBlindData(0, color);

Here, the byte offset is 0, and the template method deduces the type of data you want from the type of the reference you passed in.

template<class T> void SetBlindData(UINT offset, T data)
{
    assert(offset <= mBlindDataSize - sizeof(T));
    T *dataPtr = (T*)((char*)mBlindData + offset);
    *dataPtr = data;
}

This template method works in a similar fashion to the GetBlindData method. Again, it requires the caller to know the byte offset into the data block where the desired data is to be stored, as well as the type of data to store there. It then gets the pointer to the data block plus the offset, casts it to the type it needs, and sets the data for you. The usage pattern looks pretty much identical to the last one:

COLORREF color;
someWaypoint->SetBlindData(0, color);

The offset again is 0, and here the template method deduces the type of the data you want to set based on the parameter you passed in.

The remaining methods of the Waypoint class are either inline accessors for data, are too trivial for remark, or are outside the scope of this conversation (namely, serialization of the data).

5.1.2 Network Edges

Now that we have discussed the waypoints in a waypoint network, let us go over the details of the edges between the waypoints. In our implementation, a waypoint owns an edge, and edges have only a destination that they lead to, not a source. Edges are unidirectional in our chapter demo implementation. To make a bidirectional edge in our implementation, we simply create two edges: one from the source to the destination, and one from the destination to the source.

Edges have two other important characteristics: an open flag, and a cost modifier. The open flag determines if the edge can be traversed. This is useful in game situations that have certain environmental conditions, such as a drawbridge: if the bridge is up, the edge is closed; if the bridge is down, the edge is open, and can be traversed. The same logic would apply to doors or areas in the game world that might be temporarily off limits (perhaps a chemical weapon was used in the area).

The cost modifier allows us to control how often an edge will be traversed. In our demo, the cost of an edge is the Euclidean distance from the owner waypoint to the destination waypoint, multiplied by the cost modifier. So if we want the edge to be traversed more often, we make the modifier less than one but greater than or equal to zero. To make it traversed less often, we can make the cost modifier greater than one.

Before we look at the implementation of the network edge class in our demo, a final note on network edges is in order. If you were so inclined, you could have blind data on the edges, just like on the waypoints. You could use that information as you traversed the network to make additional decisions for your entity. We did not do it in our demo, but it is worth remembering. The nice thing about a system like this is that it is very flexible, and you can really let your creativity carry you almost as far as you want to go.

The Network Edge Class

Now that we know what is involved with the network edges, let us look at the actual implementation of the network edge class in our demo.

//
// cNetworkEdge - an edge from one waypoint to another
//
class cNetworkEdge
{
public:
    cNetworkEdge(const tWaypointID &destination, float cost = 1.0f, bool open = true);
    virtual ~cNetworkEdge();

    tWaypointID &GetDestination() { return mDestination; }
    const tWaypointID &GetDestination() const { return mDestination; }
    void SetDestination(const tWaypointID &destination) { mDestination = destination; }

    bool IsOpen() const { return mOpen; }
    void SetIsOpen(bool open) { mOpen = open; }

    float GetCostModifier() const { return mCostModifier; }
    void SetCostModifier(float costmod) { mCostModifier = costmod; }

    bool operator==(const cNetworkEdge &rhs);

    int Serialize(ofstream &ar);
    int UnSerialize(ifstream &ar);

protected:
    friend class cWaypoint;
    cNetworkEdge() {}

private:
    tWaypointID mDestination;
    bool        mOpen;
    float       mCostModifier;
};

This is not quite as complicated as the waypoint class. Let us take a closer look, starting once again with the data members.

The destination of the edge uses the waypoint ID system to identify the destination we can reach. This ID is used to look up the waypoint in the network. The open flag determines if this edge is currently traversable. As mentioned before, this is useful for such things as drawbridges or other passages which can become temporarily impassible. The cost modifier is used to scale the base cost of the edge; again, the base cost of the edge is the Euclidean distance from the owner waypoint to the destination waypoint.

Again, the bulk of the interface is accessors. The class methods are basically just accessor methods or serialization methods, so we will not need to spend any time on those. Believe it or not, that is basically all there is to it. On to the network!

5.1.3 The Waypoint Network

We have now seen the waypoints and we have examined the edges, so it is time to take a look at the network itself and see what we are getting ourselves into. As mentioned earlier, a waypoint network is simply a collection of waypoints and edges (a graph). In our implementation, the waypoints own the edges, so the network really just turns out to be a collection of waypoints, and the class acts primarily as a waypoint manager.

The Waypoint Network Class

The waypoint network class is very straightforward. While it does contain the actual methods for traversing the network, it really is not very complicated. So let us just dive right in.

//
// cWaypointNetwork - a collection of waypoints and their associated edges
//
class cWaypointNetwork
{
public:
    cWaypointNetwork();
    virtual ~cWaypointNetwork();

    bool AddWaypoint(cWaypoint &waypoint);
    bool RemoveWaypoint(const tWaypointID &waypointID);
    void ClearWaypoints();

    cWaypoint *FindWaypoint(const tWaypointID &waypointID) const;
    const tWaypointMap &GetWaypoints() const { return mWaypoints; }

    void GetExtents(D3DXVECTOR3 &minimum, D3DXVECTOR3 &maximum) const;

    bool FindPathFromWaypointToWaypoint(const tWaypointID &fromWaypoint,
                                        const tWaypointID &toWaypoint,
                                        tPath &path);

    bool FindPathFromPositionToPosition(const D3DXVECTOR3 &origin,
                                        const D3DXVECTOR3 &destination,
                                        const cWaypointVisibilityFunctor &visibilityFunc,
                                        tPath &path);

    int Serialize(ofstream &ar);
    int UnSerialize(ifstream &ar);

private:
    bool FindClosestValidWaypointToPosition(const D3DXVECTOR3 &origin,
                                            const cWaypointVisibilityFunctor &visibilityFunc,
                                            tWaypointID &result);

    float GoalEstimate(const tWaypointID &fromWaypoint, const tWaypointID &toWaypoint);

    tWaypointMap mWaypoints;
};

There it is, the waypoint network. As you can see, it really is just a collection of waypoints. The sole data member in the waypoint network is a map of waypoint IDs to waypoints. This lets us quickly find the waypoints we want using our IDs. Let us take a look at the specifics.

bool cWaypointNetwork::AddWaypoint(cWaypoint &waypoint)
{
    // don't add the waypoint if its GUID already exists
    tWaypointID id = waypoint.GetID();
    if (!FindWaypoint(id))
    {
        mWaypoints[id] = &waypoint;
        return true;
    }
    return false;
}

The AddWaypoint method does a quick check to see if the waypoint is already in the map, and if it is not, adds it to the map.

bool cWaypointNetwork::RemoveWaypoint(const tWaypointID &waypointID)
{
    // if the waypoint doesn't exist don't remove it
    tWaypointMap::iterator it = mWaypoints.find(waypointID);
    if (it == mWaypoints.end())
        return false;

    // otherwise free the waypoint, and remove it
    cWaypoint *wp = it->second;
    if (wp)
        delete wp;
    wp = NULL;

    mWaypoints.erase(it);
    return true;
}

The RemoveWaypoint method is slightly more complex, but not by much. It looks up the waypoint in the map, and if successful, deletes the waypoint, freeing its memory. It also removes the waypoint from the map.

void cWaypointNetwork::ClearWaypoints()
{
    // delete all waypoints, and clear the map
    for (tWaypointMap::iterator it = mWaypoints.begin(); it != mWaypoints.end(); ++it)
    {
        cWaypoint *wp = it->second;
        if (wp)
            delete wp;
        wp = NULL;
    }
    mWaypoints.clear();
}

The ClearWaypoints method simply iterates through all of the waypoints in the map, and deletes them, freeing their memory. It then clears the map of its entries.

cWaypoint *cWaypointNetwork::FindWaypoint(const tWaypointID &waypointID) const
{
    // if we find the waypoint, return it
    tWaypointMap::const_iterator it = mWaypoints.find(waypointID);
    if (it == mWaypoints.end())
        return NULL;
    return it->second;
}

The FindWaypoint method looks the waypoint up in the map, and simply returns it if found.

void cWaypointNetwork::GetExtents(D3DXVECTOR3 &minimum, D3DXVECTOR3 &maximum) const
{
    minimum.x = minimum.y = minimum.z = FLT_MAX;
    maximum.x = maximum.y = maximum.z = -FLT_MAX;

    for (tWaypointMap::const_iterator it = mWaypoints.begin(); it != mWaypoints.end(); ++it)
    {
        cWaypoint *wp = it->second;
        D3DXVECTOR3 pos = wp->GetPosition() +
            D3DXVECTOR3(wp->GetRadius(), wp->GetRadius(), wp->GetRadius());

        if (pos.x > maximum.x) maximum.x = pos.x;
        if (pos.y > maximum.y) maximum.y = pos.y;
        if (pos.z > maximum.z) maximum.z = pos.z;
        if (pos.x < minimum.x) minimum.x = pos.x;
        if (pos.y < minimum.y) minimum.y = pos.y;
        if (pos.z < minimum.z) minimum.z = pos.z;
    }
}

The GetExtents method iterates through all of the waypoints, and expands the minimum and maximum vectors to build a bounding box for the waypoint network.

There is one last method to discuss in the waypoint network class before delving into the pathfinding algorithm employed in the demo. FindClosestValidWaypointToPosition is a useful method employed during the pathfinding traversal of the network. We will discuss the pathfinding methods in detail shortly, but first let us work out what this method does.

bool cWaypointNetwork::FindClosestValidWaypointToPosition
(
    const D3DXVECTOR3 &origin,
    const cWaypointVisibilityFunctor &visibilityFunc,
    tWaypointID &result
)
{
    float closestDistanceSq = FLT_MAX;
    bool foundClosest = false;

    // iterate through all the waypoints
    for (tWaypointMap::iterator it = mWaypoints.begin(); it != mWaypoints.end(); ++it)
    {
        cWaypoint *wp = it->second;
        const D3DXVECTOR3 &pos = wp->GetPosition();

        // if we have a valid waypoint, and we can see it from this position
        if (wp && visibilityFunc.IsVisible(origin, pos))
        {
            // check our distance to the waypoint
            D3DXVECTOR3 vec = origin - pos;
            float distsq = D3DXVec3LengthSq(&vec);

            // if our distance to the waypoint is the closest we've found yet
            if (distsq < closestDistanceSq)
            {
                // keep track of it
                closestDistanceSq = distsq;
                result = wp->GetID();
                foundClosest = true;
            }
        }
    }

    // return if we've found any close waypoints we can see
    return foundClosest;
}

First, the parameters of the method:

const D3DXVECTOR3 &origin,

The method takes an origin point in space, which is the location from which we want to find the closest waypoint in the network.

const cWaypointVisibilityFunctor &visibilityFunc,

The method also takes a visibility functor. This class is used to interface to your game system to provide visibility information between waypoints. Let us take a quick look at that class.

//
// cWaypointVisibilityFunctor - a base class for determining visibility
// for network waypoints
//
class cWaypointVisibilityFunctor
{
public:
    cWaypointVisibilityFunctor() {}
    virtual ~cWaypointVisibilityFunctor() {}

    bool IsVisible(const D3DXVECTOR3 &origin, const D3DXVECTOR3 &destination) const;
};

This is the interface for the waypoint visibility functor. The idea is that you would derive your own type and overload the IsVisible() method. That method would then do the line of sight test from the origin point to the destination point and return success if the line could be drawn. The result is then used to determine if you can see between the points: if you can draw the line, you can see from one point to the other; if not, you cannot. Normally this is hooked up directly to a physics simulation system, which does ray casting into the physics representation of the world to see if you can draw a line from one point to another without running into anything (often called a "line of sight" test).

Let us return now to the parameters of the FindClosestValidWaypointToPosition method.

tWaypointID &result

The last parameter of the method is a reference to a waypoint ID. This result will be set to the closest waypoint to the origin point that can be seen from the origin point (assuming there is one).

Now that we know a bit about the parameters, let us discuss the algorithm.

    float closestDistanceSq = FLT_MAX;
    bool foundClosest = false;

First we set the closest distance squared value to the maximum float value. Thus, any distance will be less than this distance. We also set a flag noting we have not found a closest node yet.

    // iterate through all the waypoints
    for (tWaypointMap::iterator it = mWaypoints.begin(); it != mWaypoints.end(); ++it)

We then start iterating through all of the waypoints. To be fair, this is not the best design strategy. Ideally, we would have a BSP or oct-tree that would assist us in finding the waypoint closest to our point. Since it could do a binary search, it would turn this search from an O(n) search to an O(log2 n) search. The linear search is good enough for the demo however, since we do not have many waypoints to hunt through. 3D Graphics Programming Module II here at the Game Institute covers BSP trees, oct-trees, and a host of other hierarchical spatial data structures in great detail. Be certain to check out that course at some point in the not too distant future so that you can integrate a search tree into your application and realize the benefits.

        cWaypoint *wp = it->second;
        const D3DXVECTOR3 &pos = wp->GetPosition();

As we get each waypoint out of the map, we get its position.

        // if we have a valid waypoint, and we can see it from this position
        if (wp && visibilityFunc.IsVisible(origin, pos))

We then call our visibility functor to see if we can see the waypoint from the origin point. The default implementation of the visibility functor just checks from the origin to the center of the waypoint. We could take into account the waypoint's radius if we wanted: another possible solution would be to check the cone from the origin point to the destination waypoint, taking into account its radius. This is a more complex test however, so it was not used for the demo.

        // check our distance to the waypoint
        D3DXVECTOR3 vec = origin - pos;
        float distsq = D3DXVec3LengthSq(&vec);

If we can see the waypoint, we compute the distance squared to the waypoint from our origin position. We use the squared result since we are just comparing the results in terms of magnitude, and as such it saves us a square root operation.

        // if our distance to the waypoint is the closest we've found yet
        if (distsq < closestDistanceSq)

We then test this computed distance to see if it is less than the closest waypoint distance we have found so far.

            // keep track of it
            closestDistanceSq = distsq;
            result = wp->GetID();
            foundClosest = true;

If we find that the newly computed distance is less than the closest distance we have found so far, we mark it as the closest distance we have found. We also store off the ID of the waypoint, and set our flag to say that we have found a closest waypoint.

    // return if we've found any close waypoints we can see
    return foundClosest;

After we have iterated across all the waypoints, we return our flag. The caller will check this flag to see if the waypoint ID reference passed in will contain the closest waypoint or not.

5.2 Navigating the Waypoint Network

Now that we have a waypoint network, navigating it is simple. Let us take a look at what needs to be done. When we say "on the network" versus "off the network," we mean that an entity that is "on the network" is actually within the bounds of a waypoint, whereas "off the network" means the entity is not within the bounds of an actual waypoint. There are three separate cases that can occur when computing a path for an entity:

a. The entity is traveling from a point on the network to a point on the network.
b. The entity is traveling from a point off the network to a point on the network.
c. The entity is traveling to or from a point on the network to or from a point off the network.

If we are dealing with case (a), then the entity can use the FindPathFromWaypointToWaypoint method directly, using the waypoint it is starting at and the waypoint it is going to as the parameters. If we are dealing with case (b), then the entity uses the FindPathFromPositionToPosition method, which internally finds the closest waypoints to the start and end, and then uses FindPathFromWaypointToWaypoint to compute the path between those waypoints. If we are dealing with case (c), then the entity again uses the FindPathFromPositionToPosition method, passing the position of the waypoint the entity is within or the position of the waypoint that is the goal, whichever we have a waypoint for.

Primarily in this demo, the FindPathFromPositionToPosition method was used. Believe it or not, this actually simplifies the code somewhat. It would be an optimization to provide a method that could fast path the finding of the known waypoints, but that was not done for this demo. In the demo, we use the same A* algorithm as in our initial pathfinding demo, only it has been modified to traverse the waypoint network structure instead of a fixed grid. Let us take a look at the implementation of this method.

bool cWaypointNetwork::FindPathFromPositionToPosition
(
    const D3DXVECTOR3 &origin,
    const D3DXVECTOR3 &destination,
    const cWaypointVisibilityFunctor &visibilityFunc,
    tPath &path
)
{
    tWaypointID closestToOrigin;
    tWaypointID closestToDestination;

    // clear the path
    path.clear();

    // find the waypoint closest to the starting position
    if (!FindClosestValidWaypointToPosition(origin, visibilityFunc, closestToOrigin))
        return false;

    // find the waypoint closest to the destination position
    if (!FindClosestValidWaypointToPosition(destination, visibilityFunc, closestToDestination))
        return false;

    // if the starting waypoint is the same as the ending waypoint,
    // we can just walk straight to the destination point
    if (closestToOrigin == closestToDestination)
        return true;

    return FindPathFromWaypointToWaypoint(closestToOrigin, closestToDestination, path);
}

Here is the algorithm in its entirety. It is not overly complex, so let us just walk through it right here. The method takes a position for the origin, a position for the destination, a visibility functor, and a reference to a path. The tPath typedef is simply an STL list of waypoint ID objects.

First we clear the path. We then try to find the closest valid waypoint to the origin; if we do not find one, we fail to make a path. Next we find the closest valid waypoint to the destination; again, if we do not find one, we fail to make a path. Note the early abort in the event the closest waypoint to the origin is the same waypoint as the destination: in that case, we can just go directly from the origin to the destination without acquiring the network. Otherwise, with a closest waypoint to the origin and a closest waypoint to the destination that are not the same, we call upon the FindPathFromWaypointToWaypoint method to compute a path for us. We will talk about that method in a moment.

Before we get going on FindPathFromWaypointToWaypoint, we should discuss a helper class: the cAStarWaypointNode class.

//
// cAStarWaypointNode - a special node for scoring the weights for
// determining pathing through a waypoint network using A*
//
class cAStarWaypointNode
{
public:
    cAStarWaypointNode(const tWaypointID &id);

    cAStarWaypointNode(const tWaypointID &id, float f, float g, float h);

    const tWaypointID &GetWaypoint() const { return mWaypoint; }

    bool GetVisited() const { return mVisited; }
    void SetVisited(bool visited) { mVisited = visited; }

    float GetCost() const { return m_f; }
    void SetCost(float cost) { m_f = cost; }

    void SetCosts(float f, float g, float h) { m_f = f; m_g = g; m_h = h; }
    void GetCosts(float &f, float &g, float &h) const { f = m_f; g = m_g; h = m_h; }

    cAStarWaypointNode *GetParent() const { return mParent; }
    void SetParent(cAStarWaypointNode *parent) { mParent = parent; }

    bool operator<(const cAStarWaypointNode &rhs) const;

private:
    tWaypointID mWaypoint;
    bool mVisited;
    float m_f;
    float m_g;
    float m_h;
    cAStarWaypointNode *mParent;
};

This class is used for managing the traversal of the network. It is very similar to the A* node classes we studied earlier in the course. Just as before, the class maintains the f, g and h values for computing the actual cost of the node, and it keeps track of the parent node that got us here. Once we find the path, we walk from end to start via the parent node pointers. Using a separate node class is done so we do not have to keep track of heuristic estimate costs and visited status on the waypoints themselves. It is a small point, but it is worth mentioning. Enough of the easy stuff; on to the pathfinding!

bool cWaypointNetwork::FindPathFromWaypointToWaypoint
(
    const tWaypointID &fromWaypoint,
    const tWaypointID &toWaypoint,
    tPath &path
)
{
    // ye verily do traversal of network here
    tAStarWaypointNodePriorityQueue open;
    tAStarWaypointNodeList closed;
    tNodeMap nodeMap;

    cAStarWaypointNode *n;
    cWaypoint *nwp;

    // seed the search with the starting point
    float g = 0.0f;
    float h = GoalEstimate(fromWaypoint, toWaypoint);
    float f = g + h;
    cAStarWaypointNode *node = new cAStarWaypointNode(fromWaypoint, f, g, h);
    nodeMap[fromWaypoint] = node;

*this).end().clear(). it != nwp->GetEdges().contains(node)) && g <= newg ) 204 . // now iterate the search while(!open.begin(). h). if (nodeIt == nodeMap.contains(node) || closed. float newg = g + nwp->GetCostForEdge(edge. if we don't have an // entry in the node map.end()) { // never been here wasInMap = false. node = node->GetParent().find(edge.front(). tNodeMap::iterator nodeIt = nodeMap.push_front(node->GetWaypoint()).pop_front().IsOpen()) continue.push_back(node). g. we've never been here // before. if (n->GetWaypoint() == toWaypoint) { path. if (!dest || !edge.empty()) { n = open. nwp = FindWaypoint(n->GetWaypoint()). node = n.GetDestination()] = node. bool wasInMap = true.open.GetDestination()). } else node->GetCosts(f.GetDestination()). // first check the node map.GetDestination()). h). } for (tEdgeList::iterator it = nwp->GetEdges(). node = nodeIt->second. nodeMap[edge. cWaypoint *dest = FindWaypoint(edge. open. } return true. g. ++it) { cNetworkEdge &edge = *it. // add it to the map node = new cAStarWaypointNode(edge. n->GetCosts(f. while (node) { path. if ( wasInMap && (open.

} else { // update this item's position in the queue // as its cost has changed // and the queue needs to know about it open. there are some small changes..contains(node)) { // remove it closed. f = g + h. Most importantly. const tWaypointID &toWaypoint. toWaypoint). node->SetParent(n). Let us take a look at what is going on. node->SetCosts(f. g. we are already in the queue // and we have a cheaper way to get there. as it is the A* algorithm we have already discussed in this course. } This is obviously the core of our pathfinding on the waypoint network. } } } } closed.contains(node)) { open. This algorithm should look familiar. if(closed. tPath &path ) 205 .. g = newg. h = GoalEstimate(node->GetWaypoint().remove_item(node). Even so..add_item(n). // unable to find a route return false. } if(!open. h). bool cWaypointNetwork::FindPathFromWaypointToWaypoint ( const tWaypointID &fromWaypoint. it does the traversal all in one fell swoop rather than one iteration at a time like our last demo did.sort().{ } else { // do nothing..add_item(node).

and push the A* node onto our open queue. cWaypoint *nwp. toWaypoint). The first thing we do is seed the search with the starting point.0f. This map associates waypoint IDs to A* waypoint nodes. // ye verily do traversal of network here tAStarWaypointNodePriorityQueue open. tAStarWaypointNodeList closed. The algorithm begins with a priority queue of A* waypoint nodes which is the “open” queue of nodes to examine.The method takes a waypoint ID of the waypoint to start at. Last. tNodeMap nodeMap. float f = g + h. a waypoint ID of the waypoint to end at. we have a node map. float h = GoalEstimate(fromWaypoint.front(). This also helps to let us know if we have visited a node or not. create an A* waypoint node for this waypoint. // now iterate the search while(!open. associate the waypoint ID to the A* node in our map. cAStarWaypointNode *node = new cAStarWaypointNode(fromWaypoint. and false it no path from the starting waypoint to the ending waypoint can be found. 206 . Next we have a list of A* waypoint nodes which is the “closed” list of already examined waypoints. We also have a waypoint pointer which is the waypoint associated with the A* waypoint node object we are currently traversing. g. nodeMap[fromWaypoint] = node. We estimate the distance from the start to the goal using our heuristic. We have an A* waypoint node object which is the node we are currently traversing. The method returns true if a path was found. f. cAStarWaypointNode *n. This gets us ready for the main loop. n = open. // seed the search with the starting point float g = 0.empty()) So long as we have nodes to investigate in our open queue. we will perform the search. h). and a reference to a path.push_back(node). open.

find(edge. n->GetCosts(f. we get the costs for the current A* node. float newg = g + nwp->GetCostForEdge(edge. We will not go into detail on how that works. If the current node is the goal node. if (n->GetWaypoint() == toWaypoint) { path. it != nwp->GetEdges().push_front(node->GetWaypoint()).IsOpen()) continue.GetDestination()). and pop the node off the queue. g.pop_front(). } Next we see if the current node we popped off the queue happens to be the goal node. ++it) Assuming we have not reached the goal node. if (!dest || !edge. The implementation lives in WaypointNetwork.end()) 207 . cWaypoint *dest = FindWaypoint(edge. tNodeMap::iterator nodeIt = nodeMap. we iterate across all of the edges of the current node. bool wasInMap = true. pushing the nodes onto the path in reverse order. and compute a new g using the current g. if we don't have an // entry in the node map. or the edge is closed. node = n. If we cannot find the destination. we grab the top node off the priority queue. we clear out our path. not the other way around. For each edge. During every iteration. we find the destination waypoint in the network. open. h). we skip this edge. This way the path has the nodes in the order they should be visited. cNetworkEdge &edge = *it.nwp = FindWaypoint(n->GetWaypoint()). as this is basic algorithms theory. we've never been here // before. find the waypoint for the ID the node is associated with. while (node) { path. if (nodeIt == nodeMap. and the cost for this edge.GetDestination()). for (tEdgeList::iterator it = nwp->GetEdges(). Assuming we have a traversable edge with a valid destination waypoint. node = node->GetParent().clear(). We use a special derived version of the STL list container for our priority queue. } return true. and walk the parent pointers of the A* nodes. // first check the node map.h if you feel inclined to peruse it. *this).begin().end().

node->GetCosts(f. toWaypoint). } else Next we check to see if the destination waypoint has a representing A* node in the map yet. nodeMap[edge.. we simply get the A* node from the map for destination waypoint ID. and set the costs.. g = newg. h).GetDestination()] = node. we set its parent to the current node. h). f = g + h. Here we check to see if we should update the destination’s A* node. we are already in the queue // and we have a cheaper way to get there. and associate it with the destination waypoint ID in the map.remove_item(node).GetDestination()).contains(node)) { // remove it closed. If the node was previously in the map. g. node = nodeIt->second. compute the heuristic estimate for this node.. and the existing g cost is less or equal the new g cost computed.. if(closed. and either the open or closed lists contain the node. } 208 . g. If it does exist. node->SetCosts(f. If it does not.contains(node)) && g <= newg // do nothing. h = GoalEstimate(node->GetWaypoint().contains(node) || closed. We then get the current costs for the destination waypoint’s A* node. if ( ) { } wasInMap && (open. we create a new A* node.{ // never been here wasInMap = false. node->SetParent(n). we do nothing. If we have determined that we need to update the destination A* node. // add it to the map node = new cAStarWaypointNode(edge. We also set its g to the new g computed.

closed.add_item(node). as it needs re-examination. Our discussions will take us back to other topics covered in Chapters 3 and 4 so that we can see how these pieces can fit together to produce interesting results. This action automatically keeps the queue sorted. After each of the edges for the current node have been visited. That is all there is to finding a path using a waypoint network. From a theoretical perspective.Then.contains(node)) { open. we remove it from the closed list. if after exhausting the open queue. Given your experience with A* earlier in the course. In the next few sections we are going to talk about how we decided to use this new system in our chapter demo. If the open queue does contain the node however. } If the open queue does not contain the node. 209 .sort(). if we find the node in the closed list. if(!open. we return false. we add it to the open queue. This will ensure the node is properly placed in the queue based on its priority. } else { // update this item's position in the queue // as its cost has changed // and the queue needs to know about it open. you now have all of the information you need to take your waypoint networks and use them to find paths from place to place in the game world. Lastly. we do not find a path and return above. // unable to find a route return false. reporting failure to build the path. This should provide you with a foundation to work from so that you can begin integrating your own ideas that suit your particular game. this should be second nature to you. we simply resort the open queue. the current node is added to the closed list.add_item(n).
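To see the same open/closed bookkeeping in isolation, here is a minimal, self-contained A* over a tiny hand-built graph. It is a sketch of the technique, not the demo's code: plain integer IDs replace the waypoint GUIDs, an ordered std::set keyed on (f, id) stands in for the sorted-list priority queue, straight-line distance is the heuristic, and the graph coordinates are made up for illustration.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <map>
#include <set>
#include <utility>
#include <vector>

struct Edge { int dest; float cost; };

// Each node carries a 2D position so straight-line distance can serve
// as an admissible heuristic.
struct Node { float x, y; std::vector<Edge> edges; };

// Returns the node IDs from start to goal (inclusive), or empty if unreachable.
std::vector<int> FindPath(const std::map<int, Node> &graph, int start, int goal)
{
    auto heuristic = [&](int a, int b) {
        const Node &na = graph.at(a), &nb = graph.at(b);
        return std::sqrt((na.x - nb.x) * (na.x - nb.x) +
                         (na.y - nb.y) * (na.y - nb.y));
    };

    std::map<int, float> gCost;              // best g found so far per node
    std::map<int, int>   parent;             // back-pointers for path rebuild
    std::set<std::pair<float, int>> open;    // ordered by (f, id): cheapest first

    gCost[start] = 0.0f;
    open.insert({heuristic(start, goal), start});

    while (!open.empty())
    {
        int n = open.begin()->second;        // pop the cheapest f
        open.erase(open.begin());

        if (n == goal)
        {
            // walk the parent pointers from goal back to start
            std::vector<int> path;
            for (int at = goal; ; at = parent[at]) {
                path.push_back(at);
                if (at == start) break;
            }
            std::reverse(path.begin(), path.end());
            return path;
        }

        for (const Edge &e : graph.at(n).edges)
        {
            float newg = gCost[n] + e.cost;
            auto it = gCost.find(e.dest);
            if (it != gCost.end() && it->second <= newg)
                continue;                    // already have a cheaper route
            if (it != gCost.end())           // stale entry: drop its old open slot
                open.erase({it->second + heuristic(e.dest, goal), e.dest});
            gCost[e.dest] = newg;
            parent[e.dest] = n;
            open.insert({newg + heuristic(e.dest, goal), e.dest});
        }
    }
    return {};  // unable to find a route
}
```

The erase-and-reinsert on the std::set plays the role of the book's open.sort() call: when a node's cost improves, its position in the ordered container is refreshed so the cheapest node is always at the front.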

5.3 Flocking and Waypoint Networks

In Chapter 3, when we were talking about flocking, we discussed the concept of behavior based movement. The idea is that you have a set of separate behaviors, each of which can contribute to the final desired movement of the entity. Following up on that concept, our demo makes use of the behavioral movement system developed in the flocking demo and creates a movement behavior that follows a waypoint network path.

5.3.1 The Pathfind Behavior

    class cPathfindBehavior : public cBehavior
    {
    public:
        cPathfindBehavior(float turnRate,
                          float goalRadius,
                          float avoidDist,
                          float maxTimeBeforeAgitation,
                          const D3DXVECTOR3 &upVector,
                          cWaypointNetwork &waypointNetwork);
        virtual ~cPathfindBehavior(void);

        virtual void   Iterate(float timeDelta, cEntity &entity);
        void           ApplyAvoidance(cEntity &entity);
        virtual string Name(void) { return("Pathfind Behavior"); }

    protected:
        float            mTurnRate;
        float            mGoalRadius;
        float            mAvoidDist;
        float            mMaxTimeBeforeAgitation;
        D3DXVECTOR3      mUpVector;
        cWaypointNetwork &mWaypointNetwork;
    };

Here we see our pathfind behavior. It has a decent number of data members, so we will examine those first. Just as with many of our movement behaviors from the flocking demo, this behavior has a turning rate parameter which limits how quickly the entity can make turns. The goal radius parameter is used to determine whether the entity has satisfactorily reached its goal. Bear in mind that the goal is different from the current waypoint: the goal is the final destination, while the current waypoint is merely the next one. The avoid distance is the distance at which the entities strive to remain apart from one another; we will discuss it in more detail later. The max time before agitation is another data member we will discuss a bit later. Basically, it has to do with keeping the entity from getting stuck trying to reach the next waypoint. The up vector is the vector which is "up" in the game world. More on this value later, but for now just know that it is used in conjunction with the max time before agitation variable. Finally, we hold a reference to the waypoint network we will be navigating; we need this to look up waypoints from IDs.

Now that we have a good idea what kinds of data this behavior needs, let us look at how it does its job.

    void cPathfindBehavior::Iterate(float timeDelta, cEntity &entity)
    {
        // pathfinding only works on squad mate type entities!
        cSquadEntity &squadmate = dynamic_cast<cSquadEntity&>(entity);

        tPath       &path      = squadmate.GetPath();
        tWaypointID wpID       = squadmate.GetNextWaypoint();
        D3DXVECTOR3 entityPos  = entity.Position();
        D3DXVECTOR3 desiredMoveAdj(0.0f, 0.0f, 0.0f);

        if (wpID == GUID_NULL && path.size() > 0)
        {
            // set the next waypoint!
            wpID = path.front();
            path.pop_front();
            squadmate.SetNextWaypoint(wpID);
            squadmate.ResetTimeSinceWaypointReached();
        }
        else if (wpID == GUID_NULL || path.empty())
        {
            desiredMoveAdj = squadmate.GetGoal() - entityPos;
            if (D3DXVec3Length(&desiredMoveAdj) < mGoalRadius)
            {
                // we made it... stand around
                entity.SetDesiredMove(-entity.Velocity());
                squadmate.ResetTimeSinceWaypointReached();
                ApplyAvoidance(entity);
                return;
            }
        }

        cWaypoint *wp = mWaypointNetwork.FindWaypoint(wpID);
        ASSERT(wp != NULL);
        if (wp)
        {
            D3DXVECTOR3 wppos = wp->GetPosition();
            desiredMoveAdj = wppos - entityPos;
            if (D3DXVec3Length(&desiredMoveAdj) < wp->GetRadius())
            {
                // close enough! next waypoint!
                if (path.size() > 0)
                {
                    // set the next waypoint!
                    wpID = path.front();
                    path.pop_front();
                    squadmate.SetNextWaypoint(wpID);
                    squadmate.ResetTimeSinceWaypointReached();

                    wp = mWaypointNetwork.FindWaypoint(wpID);
                    wppos = wp->GetPosition();
                    desiredMoveAdj = wppos - entityPos;
                }
                else
                {
                    // uh... no more waypoints.
                    squadmate.SetNextWaypoint(GUID_NULL);

                    // start walking toward our target position
                    desiredMoveAdj = squadmate.GetGoal() - entityPos;
                    if (D3DXVec3Length(&desiredMoveAdj) < mGoalRadius)
                    {
                        // we made it... stand around
                        entity.SetDesiredMove(-entity.Velocity());
                        squadmate.ResetTimeSinceWaypointReached();
                        ApplyAvoidance(entity);
                        return;
                    }
                }
            }
        }

        D3DXVec3Normalize(&desiredMoveAdj, &desiredMoveAdj);
        desiredMoveAdj *= mTurnRate;

        D3DXVECTOR3 currentDesiredMove = entity.DesiredMove();
        currentDesiredMove += desiredMoveAdj * Gain();

        if (squadmate.GetTimeSinceWaypointReached() > mMaxTimeBeforeAgitation)
        {
            // agitate the movement to get the guy moving properly:
            // nudge the desired move vector by using a vector perpendicular
            // to the current desired move's direction
            D3DXVECTOR3 agitationVector;
            D3DXVec3Cross(&agitationVector, &currentDesiredMove, &mUpVector);
            currentDesiredMove = agitationVector;
            squadmate.ResetTimeSinceWaypointReached();
        }

        // move in the direction of your next pathnode or your goal position
        squadmate.SetDesiredMove(currentDesiredMove);
        squadmate.IncrementTimeSinceWaypointReached(timeDelta);
        ApplyAvoidance(entity);
    }

That was a fairly big method, so we will tackle it a bit at a time. We begin with the prototype. The behavior requires the time passed since the last iteration as well as the entity upon which to iterate. This particular implementation only works on the special squad entities derived for this demo, so we make sure that is the case first. Next, we get the squad mate's path, his next waypoint ID, and his position, and we initialize the desired movement adjustment to be nil.

If the waypoint ID is NULL and we have some nodes left in our path, we get the next waypoint from the path and set it on the squad mate. We also reset a timer which keeps track of how long it has been since we reached a waypoint. If instead our waypoint ID is NULL or our path is empty, we are moving directly toward our goal position, so we compute our desired move adjustment to be the vector from our position to the goal. If that vector's length is less than the goal's radius, we have reached the goal. In that case, we set our desired move to the negation of our velocity to bring us to a stop, reset the waypoint timer, apply some avoidance measures (which we will cover later), and return out of the method.

Assuming we have not reached our goal, we find the waypoint for our current waypoint ID and set our desired movement adjustment to be the vector from our current position to the waypoint's position. If the magnitude of that vector is less than our waypoint's radius, we have reached it. If we have path nodes left in our path, we grab the next waypoint from the path, set the squad mate's next waypoint to be that ID, and reset the amount of time it has been since this squad mate last reached a waypoint. We then find the waypoint for this new waypoint ID, get its position, and set our desired movement adjustment to be the vector from our current position to this new waypoint's position.

Otherwise, if we have no more nodes left in our path, we NULL out our squad mate's next waypoint ID and start walking toward our goal: the desired movement adjustment becomes the vector from our current position to the goal. If the magnitude of that vector is less than the goal radius, we have reached the goal, so we again negate our velocity to bring us to a halt, reset the waypoint timer, apply some avoidance, and return from the method.

If we have not aborted from reaching a goal, we apply some clamps to the desired movement adjustment: we normalize it and apply the turn rate scalar. We then take the current desired move and add the desired move adjustment to it, scaled by the behavior gain. Right before we set our desired movement, we apply some mojo, which we will go over shortly. Lastly, we set the desired move to the newly computed value, increment the amount of time it has been since this squad mate reached a waypoint, and apply some avoidance.

Getting Stuck

There is a little complication that must be taken into account when applying turning radius limits. Imagine driving a car at 10 mph. The car corners pretty well and you can almost turn on a dime. Now start driving 50 mph. You cannot turn on a dime anymore, because the car has a turning radius. Take a look at Figure 5.3. If the car was trying to get into the blue circle, and its current velocity was in the direction of the green arrow, then the red arrow is the adjusted velocity. The rate limit just keeps carrying the car around the red circle! We never get to the blue circle because we keep trying to turn the same amount while not slowing down. Sadly, our entities can suffer from this same problem.

But there is a cure! By keeping track of how long it has been since we last reached the waypoint, we can detect if it has been an unacceptably long period of time since we made progress. If it has, we can do something about it: perturb the velocity. Enter the mojo. Right before we set our desired movement, we check whether the amount of time since this entity last reached a waypoint is greater than the max time before we agitate his motion. If it is, we cross the desired movement vector with the up vector to get a vector perpendicular to our current motion, and use that vector as our desired movement vector for one and only one tick. This perturbs the motion of the entity just enough, and he can recover from the circular pattern we just witnessed.
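The one-tick perturbation rests on a simple identity: the cross product of the current desired move with the world up vector yields a vector at right angles to both, swinging the entity off its circular track while keeping it in the ground plane. A self-contained sketch of that step, using plain structs instead of the D3DX types (the names and thresholds are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Cross product: returns a vector perpendicular to both a and b.
Vec3 Cross(const Vec3 &a, const Vec3 &b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

float Dot(const Vec3 &a, const Vec3 &b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// One-tick agitation in the spirit of the behavior above: if the entity has
// gone too long without reaching a waypoint, replace its desired move with
// a perpendicular nudge; otherwise leave it alone.
Vec3 Agitate(const Vec3 &desiredMove, const Vec3 &up,
             float timeSinceWaypoint, float maxTimeBeforeAgitation)
{
    if (timeSinceWaypoint > maxTimeBeforeAgitation)
        return Cross(desiredMove, up);   // swing 90 degrees off course
    return desiredMove;
}
```

Because the nudge is perpendicular to the old heading, its dot product with both the old desired move and the up vector is zero, which is easy to verify numerically.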

5.3.2 A Word on Avoidance
There is one last bit remaining unexplained in the pathfinding behavior, and that is the ApplyAvoidance method. While we could have just used the avoidance behavior from the flocking demo, it did not provide exactly the kind of avoidance that we wanted in this application. One thing to bear in mind is that avoidance is not supposed to keep the entities from ever running into each other. To do that, we would need to run a full-scale simultaneous solve of the positions and velocities of the entities, and resolve interpenetrations. We did not want to worry about that in this demo since it introduces systems that are beyond the scope of this course. The idea here is that a proper collision system would keep entities from being on top of one another, while the avoidance behavior attempts to minimize the number of times they run into each other. So the systems work in tandem. Discussion of a full-blown collision system can be found in 3D Graphics Programming Module II. In a nutshell, the avoidance behavior checks to see if any entities are too close, and if so, computes an adjustment vector to move the entity on a travel path tangent to the other entities' periphery.
[Figure 5.4: Entity 1 approaching Entity 2 along the vector v. Rotating v by the angle θ, determined from the entity radius r and the 2r separation needed to pass, gives the adjusted heading toward the position labeled Entity 1b.]

Looking at Figure 5.4, we compute the vector v from Entity 1 to Entity 2. We then know that if we want Entity 1 to pass by the side of Entity 2 without colliding, we should move in the direction from Entity 1 to Entity 1b. We can compute the angle we need by calculating θ = tan^-1(2r / |v|), where 2r accounts for the radii of the two entities. We then rotate v by θ to get the vector pointing in the direction we need to go. Let us look at how the code does this.

    void cPathfindBehavior::ApplyAvoidance(cEntity &entity)
    {
        // pathfinding only works on squad mate type entities!
        cSquadEntity &squadmate = dynamic_cast<cSquadEntity&>(entity);

        D3DXVECTOR3 entityPos(entity.Position());
        D3DXVECTOR3 currentDesiredMove(entity.DesiredMove());

        // let's make sure we aren't bound to hit anyone else
        cWorld &world = squadmate.World();
        for (tGroupList::iterator git = world.Groups().begin();
             git != world.Groups().end(); ++git)
        {
            cGroup *grp = *git;
            for (tEntityList::iterator eit = grp->Entities().begin();
                 eit != grp->Entities().end(); ++eit)
            {
                cEntity *e = *eit;
                if (e == &entity)
                    continue;

                D3DXVECTOR3 otherEntityPos(e->Position());
                D3DXVECTOR3 toEntity = otherEntityPos - entityPos;
                if (D3DXVec3Length(&toEntity) < mAvoidDist)
                {
                    // let's apply some avoidance.
                    D3DXVec3Normalize(&toEntity, &toEntity);
                    float zRotation = atan2f(toEntity.y, toEntity.x);
                    D3DXMATRIX rotationMat;
                    D3DXMatrixRotationZ(&rotationMat, zRotation);

                    D3DXVECTOR4 rotatedDesiredMove;
                    D3DXVec3Transform(&rotatedDesiredMove, &currentDesiredMove,
                                      &rotationMat);
                    D3DXVECTOR3 newDesiredMove(rotatedDesiredMove.x,
                                               rotatedDesiredMove.y,
                                               rotatedDesiredMove.z);

                    D3DXVECTOR3 desiredMoveAdj = newDesiredMove - currentDesiredMove;
                    desiredMoveAdj *= mTurnRate;
                    currentDesiredMove += desiredMoveAdj * Gain();
                    entity.SetDesiredMove(currentDesiredMove);
                }
            }
        }
    }

The method looks long, but the iteration takes up the bulk of the space.
// pathfinding only works on squad mate type entities!
cSquadEntity &squadmate = dynamic_cast<cSquadEntity&>(entity);


First off, this method only works on our special derived squad entity type, so we check that first.
D3DXVECTOR3 entityPos(entity.Position());
D3DXVECTOR3 currentDesiredMove(entity.DesiredMove());

Next we get the current position and current desired move of the entity.
cWorld &world = squadmate.World();
for (tGroupList::iterator git = world.Groups().begin();
     git != world.Groups().end(); ++git)
{
    cGroup *grp = *git;
    for (tEntityList::iterator eit = grp->Entities().begin();
         eit != grp->Entities().end(); ++eit)
    {

We then iterate through all the groups in the world, and each of the entities in each group.
cEntity *e = *eit; if (e == &entity) continue;

We then perform a sanity check, to ensure we are not trying to avoid ourselves.
D3DXVECTOR3 otherEntityPos(e->Position());
D3DXVECTOR3 toEntity = otherEntityPos - entityPos;

Now we compute a vector from this entity to the other entity.
if (D3DXVec3Length(&toEntity) < mAvoidDist)

If the magnitude of the vector is less than our avoid distance, we need to try to avoid this entity.
// let's apply some avoidance.
D3DXVec3Normalize(&toEntity, &toEntity);
float zRotation = atan2f(toEntity.y, toEntity.x);
D3DXMATRIX rotationMat;
D3DXMatrixRotationZ(&rotationMat, zRotation);

First we normalize the vector to the entity, and compute the rotation based on the vector. We then build a rotation matrix using this vector. Note that this method is useful only in avoiding things in 2d, where Z is up.
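The rotation about Z used here is just the standard 2D rotation. As a self-contained illustration (plain floats rather than the D3DX matrix calls; the function name is ours, and D3DX's own matrix convention may differ in row/column layout), rotating a vector (x, y) by an angle about the Z axis looks like this:

```cpp
#include <cassert>
#include <cmath>

// Rotate the 2D vector (x, y) counterclockwise by 'radians' about the Z axis:
//   x' = x*cos - y*sin
//   y' = x*sin + y*cos
void RotateZ(float radians, float &x, float &y)
{
    float c = std::cos(radians);
    float s = std::sin(radians);
    float nx = x * c - y * s;
    float ny = x * s + y * c;
    x = nx;
    y = ny;
}
```

Rotating the unit X axis by 90 degrees, for example, lands it on the unit Y axis, which makes a handy sanity check when wiring up any rotation helper.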
D3DXVECTOR4 rotatedDesiredMove;
D3DXVec3Transform(&rotatedDesiredMove, &currentDesiredMove, &rotationMat);
D3DXVECTOR3 newDesiredMove(rotatedDesiredMove.x,
                           rotatedDesiredMove.y,
                           rotatedDesiredMove.z);


Next we perform a little data hoop jumping. D3DXVec3Transform puts the results in a D3DXVECTOR4, because it wants to preserve the w value. That is fine, but there are no convenient operators to turn a vector 4 back to a vector 3 without constructing one by hand. So we rotate our current desired move vector by the rotation matrix we computed and then we put it into a usable vector.
D3DXVECTOR3 desiredMoveAdj = newDesiredMove - currentDesiredMove;
desiredMoveAdj *= mTurnRate;
currentDesiredMove += desiredMoveAdj * Gain();
entity.SetDesiredMove(currentDesiredMove);

Now we get a new desired move adjustment vector by taking the vector from our current desired move to the rotated desired move, apply our turn rate limit, and then add the adjustment into our current desired move, making use of the behavior gain. Finally, we set the desired move using the computed vector. That is it for the behavioral movement component in the demo. Now we are ready to see how the squad members and squad leaders make their decisions about where to go in the world.
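Before moving on, the geometry from Figure 5.4 can be sanity-checked numerically. This small sketch (illustrative names; plain math rather than the demo's D3DX calls) computes the avoidance angle θ = tan^-1(2r/|v|) described earlier:

```cpp
#include <cassert>
#include <cmath>

// Avoidance angle from Figure 5.4: the angle to rotate the approach vector
// so the entity passes tangent to the other entity's periphery.
// 'r' is the entity radius (2r keeps the two circles from overlapping),
// 'distance' is |v|, the current distance between the two entities.
float AvoidanceAngle(float r, float distance)
{
    return std::atan2(2.0f * r, distance);  // theta = atan(2r / |v|)
}
```

Note the intuitive behavior: with r = 1 and |v| = 2 the angle is atan(1), a 45 degree swerve, and as the entities get closer the required angle grows, which matches the picture of needing a sharper turn at the last moment.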

5.4 Squads and State Machines
So now we have waypoint networks, ways to pathfind through them, and a movement behavior to get our entities to follow a path. All that is left are the squad members themselves and how they decide what to do. In the last chapter, we discussed state machines as decision systems, and scripting as a means to extend what our games can do in a data driven way. We now take our next step, and integrate it into the demo with pathfinding and waypoint networks.

5.4.1 Methods of Squad Communication
In our demo, we will have a squad leader and three squad members. The squad leader tells the squad members what to do, and they do it. In truth, it does not get much simpler than this. So how do they manage this communication? There are a few typical ways to implement squad communication.

Direct Control
In this method, the squad leader directly calls methods to modify variables or cause transitions in the squad member’s state machine (or whatever decision system you are using). It is a simple approach, and the one we are using for our demo, but it is not the most flexible. The squad leader has to know all about the squad members, and make them do the right things to get them to behave as he wants them to. Basically the squad leader needs to know what to do, and how to do it.


Poll the Leader
Another approach is to have the squad members poll the leader to find out what he wants them to do. This pushes the responsibility of knowing how to do it onto the squad member. This is a slightly better design than the last one since it gives the squad members more autonomy and allows for more variety at the squad member level. The downside here is the squad members now need to know all about the squad leader. Although, in fairness, ‘many to one’ is often easier to manage than ‘one to many’ since the group members only need to know about one type of object – the leader.

Events
The last approach worth mentioning is an event system. The idea is that the squad leader decides what he wants to have happen, and sends an event to the squad members. They receive the event, process it, and decide how to do what the squad leader wants. When they are done, or if something comes up that requires them to seek redirection from the squad leader, they send an event back to the squad leader, who receives it, processes it, and sends an event back. It is a more complicated system, but fairly general and flexible.
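A minimal version of such an event exchange can be sketched with a shared queue. All names here are illustrative, and the payloads are bare strings to keep the sketch short; the chapter demo itself uses direct control, not events.

```cpp
#include <cassert>
#include <queue>
#include <string>

// A bare-bones mailbox: the leader drops orders in, members drop reports in.
struct EventQueue
{
    std::queue<std::string> events;

    void Send(const std::string &e) { events.push(e); }

    // Returns false when there is nothing to receive.
    bool Receive(std::string &e)
    {
        if (events.empty()) return false;
        e = events.front();
        events.pop();
        return true;
    }
};

struct SquadMember
{
    EventQueue *toLeader = nullptr;  // channel for reporting back
    std::string currentOrder;

    // Accept an order from the leader, then report completion back.
    void OnEvent(const std::string &e)
    {
        currentOrder = e;
        if (toLeader) toLeader->Send("done: " + e);
    }
};
```

The appeal of this shape is decoupling: the leader only knows how to send events, and each member decides for itself how to carry an order out, which is exactly the flexibility the direct-control approach gives up.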

5.4.2 The Squad Member
First, let us talk about the squad member itself, and the class that drives it. We will then discuss its state machine which gets it to do the things we want.

The Squad Entity Class
The squad entity class derives directly from our entity class from the flocking demo. It is specialized in a few ways, to give us the ability to recall the waypoints we were traversing, the network we are on, etc. Let us take a look at that class now.
    class cSquadEntity : public cEntity
    {
    public:
        cSquadEntity
        (
            cWorld &world,
            unsigned type,
            float senseRange,
            float maxVelocityChange,
            float maxSpeed,
            float desiredSpeed,
            float moveXScalar,
            float moveYScalar,
            float moveZScalar
        );
        virtual ~cSquadEntity(void);

        virtual void Iterate(float timeDelta);

        cWorld &World() { return mWorld; }

        tWaypointID &GetNextWaypoint(void)             { return mNextWaypoint; }
        const tWaypointID &GetNextWaypoint(void) const { return mNextWaypoint; }
        void SetNextWaypoint(const tWaypointID &wp)    { mNextWaypoint = wp; }

        cWaypointNetwork *GetWaypointNetwork(void) { return mWaypointNetwork; }
        const cWaypointNetwork *GetWaypointNetwork(void) const
            { return mWaypointNetwork; }
        void SetWaypointNetwork(cWaypointNetwork *network)
            { mWaypointNetwork = network; }

        D3DXVECTOR3 &GetGoal(void)             { return mGoalPosition; }
        const D3DXVECTOR3 &GetGoal(void) const { return mGoalPosition; }
        void SetGoal(const D3DXVECTOR3 &goal)  { mGoalPosition = goal; }

        tPath &GetPath(void)             { return mPath; }
        const tPath &GetPath(void) const { return mPath; }
        void SetPath(const tPath &aPath) { mPath = aPath; }

        COLORREF GetColor(void)          { return mColor; }
        void SetColor(COLORREF color)    { mColor = color; }

        cStateMachine *GetStateMachine(void)             { return mStateMachine; }
        const cStateMachine *GetStateMachine(void) const { return mStateMachine; }
        void SetStateMachine(cStateMachine *machine)     { mStateMachine = machine; }

        bool HasValidPath()     { return mPath.size() > 0; }
        bool HasValidWaypoint() { return !mNextWaypoint.IsEqual(GUID_NULL); }

        bool WaypointReached();
        bool GoalReached();
        void OnWaypointReached();
        void OnGoalReached();
        void OnWaitingForCommand();

        float GetTimeSinceWaypointReached() const
            { return mTimeSinceNextWaypointReached; }
        void ResetTimeSinceWaypointReached()
            { mTimeSinceNextWaypointReached = 0.0f; }
        void IncrementTimeSinceWaypointReached(float deltaTime)
            { mTimeSinceNextWaypointReached += deltaTime; }

    protected:
        cWaypointNetwork *mWaypointNetwork;
        tWaypointID      mNextWaypoint;
        D3DXVECTOR3      mGoalPosition;
        tPath            mPath;
        COLORREF         mColor;
        float            mTimeSinceNextWaypointReached;
        cStateMachine    *mStateMachine;
    };

Most of this interface is accessors, so it is larger and more complicated looking than it really is. We will start our examination with the data members, and then some of the less obvious methods. First off, we have the waypoint network that this entity is traveling on. Next, we have the waypoint ID of the next waypoint we are traveling to; this gets changed when we walk over waypoints. Then we have our ultimate goal position, and the path we will be using to find our way through the network to that goal. We also have the color we are going to draw this entity in the waypoint network view, and the timer that keeps track of how long it has been since this entity reached a waypoint. If this timer value gets to be too large, the entity has his velocity agitated in order to get him to his waypoint. Last, we have the state machine for this entity. Now that we have seen the data this class uses, let us look at the non-trivial methods.

    void cSquadEntity::Iterate(float timeDelta)
    {
        // iterate our state machine
        if (mStateMachine)
            mStateMachine->Iterate();

        cEntity::Iterate(timeDelta);

        // Update our orientation
        const float kEpsilon = 0.01f;
        if (D3DXVec3Length(&mVelocity) > kEpsilon)
        {
            D3DXVECTOR3 vec;
            D3DXVec3Normalize(&vec, &mVelocity);
            float zRotation = atan2f(vec.y, vec.x);
            D3DXQuaternionRotationYawPitchRoll(&mOrientation, 0.0f, 0.0f,
                                               zRotation);
        }
    }

Here we have the Iterate method, which is called every frame. Its job is to execute the movement behaviors on the entity and move the entity through the world. We do two things above and beyond the default implementation. We first update our state machine for this frame, and then we call the default Iterate implementation. Finally, if the entity is moving, we compute a new orientation which is compatible with this demo's coordinate system.

    #define ENTITY_RADIUS 0.75f

    bool cSquadEntity::WaypointReached()
    {
        if (!mNextWaypoint.IsEqual(GUID_NULL) && mWaypointNetwork)
        {
            cWaypoint *wp = mWaypointNetwork->FindWaypoint(mNextWaypoint);
            if (wp)
            {
                D3DXVECTOR3 vec = wp->GetPosition() - Position();
                float distToWP = D3DXVec3Length(&vec);
                if ((distToWP - ENTITY_RADIUS) < wp->GetRadius())
                {
                    return true;
                }
            }
        }
        return false;
    }

The WaypointReached method is called to check whether the next waypoint has been reached. If we have no next waypoint, or no waypoint network, we return false. Otherwise, we find the next waypoint in the network and compute our distance to it. In this case, we take into account the radius of the entity to determine if we have reached the waypoint.

    #define GOAL_REACH_THRESHOLD 1.5f

    bool cSquadEntity::GoalReached()
    {
        D3DXVECTOR3 vec = mGoalPosition - Position();
        if (D3DXVec3Length(&vec) < GOAL_REACH_THRESHOLD)
        {
            return true;
        }
        return false;
    }

The GoalReached method is similar, but simpler: it checks whether the entity is within a fixed threshold of its ultimate goal position.

} } SetColor(RGB(0. only it computes the distance to the goal position rather than to the next waypoint. } The OnGoalReached method gets called when the squad member has reached his goal. 150. 150)). 0)). void cSquadEntity::OnGoalReached() { SetColor(RGB(255. } The OnWaitingForCommand method gets called when the squad member is waiting for a command. It simply changes the color of the entity. void cSquadEntity::OnWaypointReached() { if (!mNextWaypoint. 0. and sets the color of the entity using that value. if (wp) { COLORREF color. void cSquadEntity::OnWaitingForCommand() { SetColor(RGB(150. 225 . It queries the blind data for the color value stored in it.IsEqual(GUID_NULL) && mWaypointNetwork) { cWaypoint *wp = mWaypointNetwork->FindWaypoint(mNextWaypoint). wp->GetBlindData(0. color). SetColor(color). return. 255. It simply changes the color of the entity. 150)).The GoalReached method works similarly to the WaypointReached method. } The OnWaypointReached method gets called when the squad member has reached his next waypoint.
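The two reach tests above are plain distance checks. The same logic can be sketched in a few lines of standalone Python (no engine types; the function names and tuple positions here are illustrative stand-ins for the D3DX vector math), which makes it clear how the entity radius effectively enlarges each waypoint's trigger circle:

```python
import math

ENTITY_RADIUS = 0.75
GOAL_REACH_THRESHOLD = 1.5

def waypoint_reached(entity_pos, wp_pos, wp_radius):
    # distance from entity center to waypoint center, less the entity's
    # own radius, compared against the waypoint's radius
    dist = math.dist(entity_pos, wp_pos)
    return (dist - ENTITY_RADIUS) < wp_radius

def goal_reached(entity_pos, goal_pos):
    # the goal test ignores radii and uses a fixed threshold
    return math.dist(entity_pos, goal_pos) < GOAL_REACH_THRESHOLD

# An entity 2.0 units from a waypoint of radius 1.5 has effectively
# closed to 2.0 - 0.75 = 1.25 < 1.5, so it counts as arrived.
print(waypoint_reached((0.0, 0.0), (2.0, 0.0), 1.5))  # True
print(goal_reached((0.0, 0.0), (2.0, 0.0)))           # False
```

Note that the waypoint test accepts the entity earlier than a raw center-to-center comparison would, which helps fast-moving entities avoid orbiting a waypoint they can never exactly touch.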

The Squad Member State Machine

Figure 5.5

As the state transition diagram shows (see Figure 5.5), the squad member has a simple state machine. He starts waiting for a command, and once he gets one, he moves to the goal using the network. Each time he reaches a waypoint (or the goal), he goes to the waypoint reached state. Once he has reached the goal point, and he has no more waypoints, he goes back to waiting for a command. Let us take a look at the Python scripted actions and transitions that get this job done for us.

from GI_AISDK import *

class WaitingForCommandAction(PythonScriptedAction):
    def execute(self):
        self.state().entity().on_waiting_for_command()

First we have the waiting for command action. This action is executed when the waiting for command state is entered. It simply calls the entity's on_waiting_for_command handler.

from GI_AISDK import *

class CommandGiven(PythonScriptedTransition):
    def should_transition(self):
        return self.source().entity().has_valid_path() and \
               not self.source().entity().has_valid_waypoint()

The CommandGiven transition checks to see if the entity has a valid path and not a valid waypoint. This means a new target has been set, but the entity has not yet begun walking that path.

from GI_AISDK import *

class HasMoreWaypoints(PythonScriptedTransition):
    def should_transition(self):
        return self.source().entity().has_valid_path() or not \
               self.source().entity().goal_reached()

The HasMoreWaypoints transition simply evaluates if the entity has a valid path, or has not yet reached the goal.

from GI_AISDK import *

class NoMoreWaypoints(PythonScriptedTransition):
    def should_transition(self):
        return self.source().entity().goal_reached() and not \
               self.source().entity().has_valid_path()

The NoMoreWaypoints transition simply returns if the goal has been reached and the entity does not have a valid path.

from GI_AISDK import *

class HaveReachedWaypoint(PythonScriptedTransition):
    def should_transition(self):
        return self.source().entity().waypoint_reached() or \
               self.source().entity().goal_reached()

The HaveReachedWaypoint transition checks to see if the entity has reached a waypoint, or the goal.

from GI_AISDK import *

class WaypointReachedAction(PythonScriptedAction):
    def execute(self):
        if self.state().entity().waypoint_reached():
            self.state().entity().on_waypoint_reached()
        else:
            self.state().entity().on_goal_reached()

The WaypointReachedAction checks to see if the waypoint or the goal has been reached, and calls the appropriate handler. This action is performed when the waypoint reached state is entered.
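The way these pieces plug together can be shown with a minimal, self-contained state machine in plain Python. This is not the GI_AISDK API; the class names and wiring below are invented for illustration. But it mirrors the same structure: states carry optional entry actions, transitions are predicate objects, and each iteration fires the first transition whose predicate holds.

```python
class State:
    def __init__(self, name, on_enter=None):
        self.name = name
        self.on_enter = on_enter      # optional entry action
        self.transitions = []         # list of (predicate, target) pairs

    def add_transition(self, predicate, target):
        self.transitions.append((predicate, target))

class StateMachine:
    def __init__(self, start):
        self.current = start
        if start.on_enter:
            start.on_enter()

    def iterate(self):
        # fire the first transition whose predicate returns true
        for predicate, target in self.current.transitions:
            if predicate():
                self.current = target
                if target.on_enter:
                    target.on_enter()
                break

# Wire up a simplified version of the squad member machine from Figure 5.5.
log = []
entity = {"has_path": False, "at_goal": False}

waiting = State("WaitingForCommand", on_enter=lambda: log.append("waiting"))
moving  = State("MovingToGoal")
reached = State("WaypointReached", on_enter=lambda: log.append("reached"))

waiting.add_transition(lambda: entity["has_path"], moving)       # CommandGiven
moving.add_transition(lambda: entity["at_goal"], reached)        # HaveReachedWaypoint
reached.add_transition(lambda: not entity["has_path"], waiting)  # NoMoreWaypoints

fsm = StateMachine(waiting)
entity["has_path"] = True
fsm.iterate()                                     # -> MovingToGoal
entity["has_path"], entity["at_goal"] = False, True
fsm.iterate()                                     # -> WaypointReached
fsm.iterate()                                     # -> back to WaitingForCommand
print(fsm.current.name)                           # WaitingForCommand
```

The entry-action pattern is what makes the scripted actions above fire exactly once per state visit, rather than every frame the machine remains in the state.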

That does it for the squad members. Now we just have to investigate how the squad leader works and we will be ready to set up this demo.

5.4.3 The Squad Leader

The squad leader derives from our squad entity class. This allows him to navigate the world if he so desires. He also has some specialized functionality, so let us take a look at that.

The Squad Leader Class

class cSquadLeaderEntity : public cSquadEntity
{
public:
    cSquadLeaderEntity ( cWorld &world, unsigned type, float senseRange,
                         float maxVelocityChange, float maxSpeed, float desiredSpeed,
                         float moveXScalar, float moveYScalar, float moveZScalar );
    virtual ~cSquadLeaderEntity(void);

    void AddSquadMember(cSquadEntity *member);
    void RemoveSquadMember(cSquadEntity *member);
    void ClearSquadMembers();

    void SendSquadToRandomPOI();
    bool SquadArrivedAtGoal();

    tPointOfInterestMap *GetPointsOfInterest() { return mPointsOfInterest; }
    void SetPointsOfInterest(tPointOfInterestMap *pointsOfInterest) { mPointsOfInterest = pointsOfInterest; }

    cPointOfInterest *GetSelectedPointOfInterest() const { return mSelectedPointOfInterest; }
    void SetSelectedPointOfInterest(cPointOfInterest *poi) { mSelectedPointOfInterest = poi; }

protected:
    vector<cSquadEntity*>   mSquadMembers;
    tPointOfInterestMap    *mPointsOfInterest;
    cPointOfInterest       *mSelectedPointOfInterest;
};

One of the first things you will notice is the point of interest class. A point of interest is a lot like a waypoint, only it is not connected to the network. Typically, you create a point of interest for anything in the game world that is going to be important to the decision maker. Consider the example of a stealth action game, where the hero is trying to sneak into a compound filled with bad guys. If the hero makes a noise, the game could create a "noise" point of interest at the position where he made the noise. The bad guys would then see the point of interest, and go investigate. Ultimately, the things you care about will not be on the pre-computed network (although certainly, you can include points of interest on the waypoint network if desired). This demonstrates that most of the time in a continuous world environment, we need to be able to get on and off the network without mishap.

Getting back to the squad leader class, you will notice that there are a few methods and some data that need a little explaining. First we have the vector of squad members in our squad (mSquadMembers). This allows the leader to keep track of them, and tell them what to do. Next we have a map of point of interest IDs to points of interest (mPointsOfInterest). This is basically just like the waypoint map the waypoint network has, only with points of interest instead. Last, we have the point of interest we are currently directing our squad to investigate (mSelectedPointOfInterest). In the demo, the squad leader has a large set of points of interest, and he directs his squad to investigate them at random. Again, the points of interest are not on the network.

There are two methods of interest in the squad leader class. Let us take a look at them.

void cSquadLeaderEntity::SendSquadToRandomPOI()
{
    if (!mPointsOfInterest || mSquadMembers.size() == 0)
        return;

    tPointOfInterestID closestPOI = FindPointOfInterestNearestPosition
    (
        *mPointsOfInterest,
        mSquadMembers[0]->GetGoal()
    );

    tPointOfInterestID poiID = SelectRandomPointOfInterest(*mPointsOfInterest, closestPOI);

    cPointOfInterest *poi = FindPointOfInterest(*mPointsOfInterest, poiID);
    if (!poi)
        return;

    SetSelectedPointOfInterest(poi);

    cWaypointVisibilityFunctor functor;
    tPath pathToWP;

    for (vector<cSquadEntity*>::iterator it = mSquadMembers.begin();
         it != mSquadMembers.end(); ++it)
    {
        cSquadEntity *entity = *it;

        entity->SetNextWaypoint(GUID_NULL);
        pathToWP.clear();

        GetWaypointNetwork()->FindPathFromPositionToPosition
        (
            entity->Position(),
            poi->GetPosition(),
            functor,
            pathToWP
        );

        entity->SetPath(pathToWP);
        entity->SetGoal(poi->GetPosition());
    }
}

First we have the SendSquadToRandomPOI method. This method selects a random point of interest and sends every squad member to investigate it. Let us take a closer look.

This method is pretty simple. For starters, if we have no points of interest, or no squad members, we do not have any work to do, so we bail out. (Note that the early-out condition here is written to match that intent.) Next we find the closest POI to the squad's current goal position. The idea here is that we do not want to select this point of interest again, since they are already going there. The FindPointOfInterestNearestPosition helper is pretty simple, so we will not go into it here (see PointOfInterest.cpp). Now we select a random point of interest, ignoring the one we just found as the closest. We find the point of interest in the map using the ID we got back from the random selector; if we do not find it, we bail out. We then select this as our current point of interest.

Next, we iterate through all of our squad members. We will be finding a path for each of them, so we will need a path and a visibility functor. For each entity, we null out their current waypoint and clear the path that we are going to find for them. We then find the path from the entity's current position to the position of the point of interest we selected. We set that path on the entity, and set his goal to the position of the point of interest we selected.

It is worth noting here that we could easily modify this method to select a different point of interest for each squad member by moving the random selection code inside the loop.

bool cSquadLeaderEntity::SquadArrivedAtGoal()
{
    for (vector<cSquadEntity*>::iterator it = mSquadMembers.begin();
         it != mSquadMembers.end(); ++it)
    {
        cSquadEntity *entity = *it;
        if (!entity->GoalReached())
            return false;
    }
    return true;
}
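The selection logic here, finding the point of interest nearest the squad's current goal and then picking a random one while excluding it, can be sketched independently of the engine types. The helper names below are hypothetical stand-ins, not the book's actual FindPointOfInterestNearestPosition implementation:

```python
import math
import random

def nearest_poi(pois, position):
    # pois maps an ID to an (x, y) position, much like tPointOfInterestMap
    return min(pois, key=lambda poi_id: math.dist(pois[poi_id], position))

def select_random_poi(pois, excluded_id, rng=random):
    # choose uniformly among all POIs except the excluded one
    candidates = [poi_id for poi_id in pois if poi_id != excluded_id]
    return rng.choice(candidates) if candidates else None

pois = {"gate": (0.0, 0.0), "tower": (10.0, 0.0), "well": (0.0, 10.0)}
squad_goal = (1.0, 1.0)

closest = nearest_poi(pois, squad_goal)    # "gate": the squad is already headed there
target = select_random_poi(pois, closest)  # never re-selects "gate"
print(closest, target in ("tower", "well"))  # gate True
```

Excluding the closest POI rather than tracking the exact previously selected ID is a cheap approximation that works because the squad's goal was set to that POI's position on the previous order.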

The other method we should talk about is the SquadArrivedAtGoal method. This method returns success if all of the squad members have arrived at their goals. It is done by simply iterating through all the squad members; if any one of them has not reached their goal, we return false. Otherwise, we return true.

That is everything we need to discuss about the squad leader class, so let us take a look at the state machine the squad leader uses.

The Squad Leader State Machine

Figure 5.6

The squad leader has three states: the awaiting squad task completion state, the wait to give orders state, and the command squad to POI state. He begins in the command squad to POI state, and directs the squad to a point of interest. He then waits for the squad to arrive at their destination, whereupon he waits a second, then commands them to another point of interest. Let us take a look at the Python scripted actions and transitions that make this work.

from GI_AISDK import *

class SquadArrivedAtPOI(PythonScriptedTransition):
    def should_transition(self):
        return self.source().entity().squad_arrived_at_goal()

The SquadArrivedAtPOI transition simply returns if the squad has arrived at the goal.

from GI_AISDK import *

class SquadsProceedingToPOI(PythonScriptedTransition):
    def should_transition(self):
        return True

The SquadsProceedingToPOI transition simply returns true. Since the squad leader issues the orders for the squad to proceed to their goal upon entering the CommandSquadToPOI state, there is no need to remain in the state afterwards.

from GI_AISDK import *

class CommandSquadMembersToPOI(PythonScriptedAction):
    def execute(self):
        self.state().entity().send_squad_to_random_poi()

The CommandSquadMembersToPOI action sends the squad to a random point of interest using the method provided by the squad leader. This action is performed when the state is entered.

That is really all there is to the squad leader. Obviously there is much more behavior you are probably thinking about adding, and with the tools provided you will be able to develop some very cool and interesting AI. To wrap things up in this chapter, let us see how we set up our demo to bring it all together.
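The CommandSquadToPOI state is effectively a transient state: its entry action does the work, and an always-true transition immediately moves the machine on. A small sketch (plain Python, not the GI_AISDK API; the state names follow Figure 5.6 but the wiring and the simplified "wait elapsed" transition are illustrative assumptions) shows why this pattern issues exactly one order per visit:

```python
orders = []
squad_arrived = {"flag": False}

def enter(state):
    # entry action: only the transient command state does real work
    if state == "CommandSquadToPOI":
        orders.append("go to POI")   # stands in for send_squad_to_random_poi()

transitions = {
    # SquadsProceedingToPOI: always fires, so the command state is transient
    "CommandSquadToPOI": lambda: "AwaitingSquadTaskCompletion",
    # SquadArrivedAtPOI: fires only once the whole squad has arrived
    "AwaitingSquadTaskCompletion":
        lambda: "WaitToGiveOrders" if squad_arrived["flag"] else None,
    # wait elapsed (simplified here to fire immediately)
    "WaitToGiveOrders": lambda: "CommandSquadToPOI",
}

current = "CommandSquadToPOI"
enter(current)                       # first order issued on startup

def iterate():
    global current
    nxt = transitions[current]()
    if nxt is not None:
        current = nxt
        enter(current)

iterate()                            # leaves the transient command state
iterate()                            # squad not there yet: stays awaiting
squad_arrived["flag"] = True
iterate()                            # -> WaitToGiveOrders
iterate()                            # -> CommandSquadToPOI, second order issued
print(current, len(orders))          # CommandSquadToPOI 2
```

Because orders are issued from an entry action rather than a per-frame update, the leader never spams the squad with duplicate commands while waiting.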

5.5 Setting up the Demo

Figure 5.7

The waypoint network and squads demo is shown in Figure 5.7. On the left, we have a tree view that represents the state machines of the squad leader and his squad members. This tree view is just like the one in the state machine demo, but does not allow editing of the state machines. If you want, you can use the state machine demo to edit your state machines, since, as mentioned before, this demo uses those files to load the squad leader and squad member state machines.

The top pane on the right is the waypoint network view, and most of the action happens there, so it deserves attention. It displays the waypoints, their edges, and the points of interest, along with the squad members. The circles without the arrows are the points of interest. The purple circle with the arrow is the squad leader, and the blue circles with the (in the case of this diagram orange) arrows are the squad members. The arrows inside the waypoints show the orientation of the waypoint. While this demo makes no use of that information, orientation on waypoints can be very useful. As the squad members cross the waypoints, their arrow will change to the color of the waypoint crossed. You may select squad members by clicking on them in this view, or in the state machine tree on the left. The currently selected entity will be filled in gray, and his path to the currently selected point of interest (also filled in gray) will be shown in black.

The pane on the bottom right is the state machine view. It is similar to the state machine view in the state machine demo, as it shows the currently active squad member's state machine. The state with the red border is the state the squad member is currently residing in.

Let us quickly go over the initialization code for the demo, and then you should be ready to jump in and have some fun with it!

BOOL CWaypointNetworksAndSquadsDoc::OnNewDocument()
{
    if (!CDocument::OnNewDocument())
        return FALSE;

    // free the old network and world
    if (mWaypointNetwork) delete mWaypointNetwork;
    mWaypointNetwork = NULL;
    if (mWorld) delete mWorld;
    mWorld = NULL;
    if (mPFbeh) delete mPFbeh;
    mPFbeh = NULL;
    ClearPointsOfInterest(mPointsOfInterest);

    const int   kSquadType = 0x1;
    const float kSenseDistance = 20.0f;
    const float kMaxVelocityChange = 1.0f;
    const float kMaxSpeed = 5.0f;
    const float kMoveXScalar = 1.0f;
    const float kMoveYScalar = 1.0f;
    const float kMoveZScalar = 0.0f;             // don't allow z movement
    const float kSeparationDist = 1.5f;
    const float kPathfindingMaxRateChange = 0.2f;
    const float kGoalReachedRadius = 1.0f;
    const float kMaxTimeBeforeAgitation = 5.0f;  // seconds to reach a path node before
                                                 // the entity gets agitation stimulus
    const D3DXVECTOR3 upVector(0.0f, 0.0f, 1.0f); // The entities in this demo
                                                  // live in the XY plane
    const int kNumSquadMembers = 3;

    // allocate new ones
    mWaypointNetwork = new cWaypointNetwork();
    mWorld = new cWorld;

    // make pathfinding behavior
    mPFbeh = new cPathfindBehavior(kPathfindingMaxRateChange, kGoalReachedRadius,
                                   kSeparationDist, kMaxTimeBeforeAgitation, upVector);

    cGroup *squadGroup = new cGroup(*mWorld);
    mWorld->Add(*squadGroup);

    // load a default waypoint network
    ifstream wpnfile("demo.wpn");
    mWaypointNetwork->UnSerialize(wpnfile);
    UnSerializePointsOfInterest(mPointsOfInterest, wpnfile);

    tPointOfInterestID startPoiID = SelectRandomPointOfInterest(mPointsOfInterest, GUID_NULL);
    cPointOfInterest *startPoi = FindPointOfInterest(mPointsOfInterest, startPoiID);
    tPointOfInterestID endPoiID = SelectRandomPointOfInterest(mPointsOfInterest, startPoiID);
    cPointOfInterest *endPoi = FindPointOfInterest(mPointsOfInterest, endPoiID);

    D3DXVECTOR3 origin = startPoi->GetPosition();
    D3DXVECTOR3 destination = endPoi->GetPosition();

    // setup the squad leader
    mSquadLeader = new cSquadLeaderEntity
    (
        *mWorld, kSquadType, kSenseDistance, kMaxVelocityChange,
        kMaxSpeed, kMaxSpeed * 0.4f, kMoveXScalar, kMoveYScalar, kMoveZScalar
    );

    mSquadLeader->SetPointsOfInterest(&mPointsOfInterest);
    mSquadLeader->SetSelectedPointOfInterest(endPoi);

    D3DXVECTOR3 squadleaderpos(34.0f, 10.0f, 0.0f);
    mSquadLeader->SetPosition(squadleaderpos);
    D3DXQUATERNION squadLeaderOrientation(0.0f, 0.0f, 0.0f, 1.0f);
    mSquadLeader->SetOrientation(squadLeaderOrientation);

    cStateMachine *statemachine = new cSquadStateMachine(mSquadLeader);
    ifstream squadleaderfsm("SquadLeader.stm");
    try
    {
        if (statemachine->UnSerialize(squadleaderfsm) == FALSE)
        {
            // error handling mojo...
            delete statemachine;
            statemachine = NULL;
        }
    }
    catch(error_already_set)
    {
        // error handling mojo...
        delete statemachine;
        statemachine = NULL;
    }
    if (statemachine)
        statemachine->Reset();
    mSquadLeader->SetStateMachine(statemachine);
    mSquadLeader->SetWaypointNetwork(mWaypointNetwork);
    squadGroup->Add(*mSquadLeader);

    // setup some squad mates to roam the network
    for (int i = 0; i < kNumSquadMembers; ++i)
    {
        D3DXVECTOR3 pos(0.0f, (float)i, 0.0f);

        cSquadEntity *squadmate = new cSquadMemberEntity
        (
            *mWorld, kSquadType, kSenseDistance, kMaxVelocityChange,
            kMaxSpeed, kMaxSpeed * 0.4f, kMoveXScalar, kMoveYScalar, kMoveZScalar
        );

        statemachine = new cSquadStateMachine(squadmate);
        ifstream ar("SquadMember.stm");
        try
        {
            if (statemachine->UnSerialize(ar) == FALSE)
            {
                // error handling mojo...
                delete statemachine;
                statemachine = NULL;
            }
        }
        catch(error_already_set)
        {
            // error handling mojo...
            delete statemachine;
            statemachine = NULL;
        }
        if (statemachine)
            statemachine->Reset();
        squadmate->SetStateMachine(statemachine);
        squadmate->SetWaypointNetwork(mWaypointNetwork);
        squadmate->SetPosition(startPoi->GetPosition() + pos);
        squadmate->SetGoal(endPoi->GetPosition());

        cWaypointVisibilityFunctor visibilityFunctor;
        tPath thePath;
        if (mWaypointNetwork->FindPathFromPositionToPosition(origin, destination,
                                                             visibilityFunctor, thePath))
        {
            squadmate->SetPath(thePath);
        }

        squadmate->AddBehavior(*mPFbeh);
        squadGroup->Add(*squadmate);
        mSquadLeader->AddSquadMember(squadmate);

        if (!mSelectedEntity)
            mSelectedEntity = squadmate;
    }

    mPause = false;
    return TRUE;
}

As with most of our prior demos, this demo is implemented in MFC, so all of the data lives in the document class. When a new document is created, this method is called. Let us go over it to see how it is put together.

    // free the old network and world
    if (mWaypointNetwork) delete mWaypointNetwork;
    mWaypointNetwork = NULL;
    if (mWorld) delete mWorld;
    mWorld = NULL;
    if (mPFbeh) delete mPFbeh;
    mPFbeh = NULL;
    ClearPointsOfInterest(mPointsOfInterest);

First, if we have a waypoint network, or world, or pathfinding behavior, or any points of interest around, we free them so we can start with a clean slate.

    const int   kSquadType = 0x1;
    const float kSenseDistance = 20.0f;
    const float kMaxVelocityChange = 1.0f;
    const float kMaxSpeed = 5.0f;
    const float kMoveXScalar = 1.0f;
    const float kMoveYScalar = 1.0f;
    const float kMoveZScalar = 0.0f;             // don't allow z movement
    const float kSeparationDist = 1.5f;
    const float kPathfindingMaxRateChange = 0.2f;
    const float kGoalReachedRadius = 1.0f;
    const float kMaxTimeBeforeAgitation = 5.0f;

We set up some constants for use in creating the behaviors and entities in the world.

    mWaypointNetwork = new cWaypointNetwork();
    mWorld = new cWorld;
    mPFbeh = new cPathfindBehavior(kPathfindingMaxRateChange, kGoalReachedRadius,
                                   kSeparationDist, kMaxTimeBeforeAgitation, upVector);
    cGroup *squadGroup = new cGroup(*mWorld);
    mWorld->Add(*squadGroup);

We then allocate a new waypoint network and a new world, and create a pathfinding behavior; this behavior is shared by all of the entities. We also create a group for our squad and add it to the world.

    ifstream wpnfile("demo.wpn");
    mWaypointNetwork->UnSerialize(wpnfile);
    UnSerializePointsOfInterest(mPointsOfInterest, wpnfile);

Next we load in the default waypoint network file (feel free to open that file up in a text editor and try modifying it). We also load in the default points of interest from that file.

    tPointOfInterestID startPoiID = SelectRandomPointOfInterest(mPointsOfInterest, GUID_NULL);
    cPointOfInterest *startPoi = FindPointOfInterest(mPointsOfInterest, startPoiID);
    tPointOfInterestID endPoiID = SelectRandomPointOfInterest(mPointsOfInterest, startPoiID);
    cPointOfInterest *endPoi = FindPointOfInterest(mPointsOfInterest, endPoiID);

We then select a random starting point of interest and a random goal point of interest to start off with.

    mSquadLeader = new cSquadLeaderEntity
    (
        *mWorld, kSquadType, kSenseDistance, kMaxVelocityChange,
        kMaxSpeed, kMaxSpeed * 0.4f, kMoveXScalar, kMoveYScalar, kMoveZScalar
    );
    mSquadLeader->SetPointsOfInterest(&mPointsOfInterest);
    mSquadLeader->SetSelectedPointOfInterest(endPoi);
    mSquadLeader->SetPosition(squadleaderpos);
    mSquadLeader->SetOrientation(squadLeaderOrientation);
    mSquadLeader->SetStateMachine(statemachine);
    mSquadLeader->SetWaypointNetwork(mWaypointNetwork);
    squadGroup->Add(*mSquadLeader);

Now we set up the squad leader. We set his points of interest, set his initial selected point of interest, and give him an initial position and orientation. We also load his state machine from a file (again, feel free to experiment with this machine using the State Machine demo from the last chapter) and set it. Then we give him the waypoint network and finally add him to the squad group.

    for (int i = 0; i < kNumSquadMembers; ++i)

Here we create a set of squad members.

    statemachine = new cSquadStateMachine(squadmate);
    ifstream ar("SquadMember.stm");
    ...
    if (statemachine)
        statemachine->Reset();
    squadmate->SetStateMachine(statemachine);
    squadmate->SetWaypointNetwork(mWaypointNetwork);
    squadmate->SetPosition(startPoi->GetPosition() + pos);
    squadmate->SetGoal(endPoi->GetPosition());

    cWaypointVisibilityFunctor visibilityFunctor;
    tPath thePath;
    if (mWaypointNetwork->FindPathFromPositionToPosition(origin, destination,
                                                         visibilityFunctor, thePath))
    {
        squadmate->SetPath(thePath);
    }

    squadmate->AddBehavior(*mPFbeh);
    squadGroup->Add(*squadmate);
    mSquadLeader->AddSquadMember(squadmate);

For each squad member, we load his state machine from a file (which can be edited in the State Machine Editor from the previous chapter), set his state machine, set his waypoint network, initialize his position, and set his goal. We add the pathfinding behavior and add him to the group. We also inform the squad leader about the new squad mate and find a path for him from his current position to the goal position of the initial goal point of interest.

That is it! The demo is now initialized. There is one other place that needs noting: the UpdateWorld method in the document. This method gets called by a timer set in the waypoint network view, once every 33 milliseconds. It calls the Iterate method on the world object, which ensures the entities get iterated (making sure that their behaviors and state machines get iterated as well).

Conclusion

In this chapter, we discussed how to bring together the various AI components we learned about in this course to get them to cooperate in a single project. We talked about how we can build a waypoint network for traversing a continuous world, and a movement behavior for traversing the network. We looked at squad entities, and saw how we can use scripted state machines to drive their behavior, while using scripting to extend our state machines. We even talked about squad leaders, and some different ways that we can get them to command their squads. Ultimately we built a practical example that demonstrated a squad examining points of interest using a waypoint network. With the material discussed in this course, it should be possible to build a robust AI system for your games using the framework provided as a starting point.

There is obviously a lot to learn as a game developer, but for many of us, AI programming is certainly one of the most fun and exciting areas. This is because you really get a chance to be as creative as you want to be (within reason!) and see the results of your work acted out by the little virtual characters in your world. It is a very satisfying feeling to watch your team of AI entities travel from place to place in the world in a realistic manner, seeming to communicate and cooperate with one another, doing things that make you feel that these guys are really thinking for themselves. You will get the chance soon enough to experience this for yourself.

And thus we have reached the end of our course, but certainly not the end of the road. In this course we have covered a lot of the core AI topics that you will need to understand if you want to develop AI for games. But as mentioned right at the beginning, there is a lot more to study in the field of AI than what we were able to talk about in our short time together. At this point you have a very solid set of working code that can serve as the basis for further exploration and enhancement. If you enjoyed the materials we encountered and remain interested in taking your game AI even further, there are plenty of books and internet tutorials available for you to examine. And of course, if you do create some interesting AI for your game using the systems we discussed, please come by and let us know. We would love to hear about it and see your AI in action! Hopefully you have found our discussions and demonstrations enjoyable, and we wish you the very best of luck in your future game programming adventures!
