
## This Week's Topic

This week is probably going to start a bit slow for those of you who are experienced game designers (or those who are hoping to dive deep into the details). Instead, I want to use this week mostly to get everyone into the mindset of a game designer presented with a balance task, and to lay out some basic vocabulary terms so we can communicate about game balance properly. You can think of this week like a tutorial level; the difficulty and pacing of this course will ramp up in the following weeks.

## What is Game Balance?

I would start by asking the question "what is game balance?" but I answered it in the teaser video already. While perhaps an oversimplification, we can say that game balance is mostly about figuring out what numbers to use in a game.

This immediately raises the question: what if a game doesn't have any numbers or math involved? The playground game of Tag has no numbers, for example. Does that mean the concept of "game balance" is meaningless when applied to Tag?

The answer is that Tag does in fact have numbers: how fast and how long each player can run, how close the players are to each other, the dimensions of the play area, how long someone is "it." We don't really track any of these stats because Tag isn't a professional sport… but if it were, you'd better believe there would be trading cards and websites with all kinds of numbers on them! So, every game does in fact have numbers (even if they are hidden or implicit), and the purpose of those numbers is to describe the game state.

## How do you tell if a game is balanced?

Knowing if a game is balanced is not always trivial. Chess, for example, is not entirely balanced: it has been observed that there is a slight advantage to going first.
However, it hasn't been definitively proven whether this imbalance is mechanical (that is, there is a bona fide tactical/strategic advantage to the first move) or psychological (players assume there is a first-move advantage, so they trick themselves into playing worse when they go second). Interestingly, this first-move advantage disappears at lower skill levels; it is only observed at championship tournaments. Keep in mind that this is a game that has been played, in some form, for thousands of years, and we still don't know exactly how unbalanced it is!

In the case of Chess, a greater degree of player skill makes the game unbalanced. In some cases it works the other way around, where skilled players can correct an inherent imbalance through clever play. For example, in Settlers of Catan, much of the game revolves around trading resources with other players. If a single player has a slight gameplay advantage due to an improved starting position, the other players can agree to simply not trade with that player for a time (or only offer unfair trades at that player's expense) until the starting positions equalize. This would not happen in casual games, where players would be unable to recognize a slight early-game advantage; at the tournament level, however, players would be more likely to spot an inherent imbalance in the game, and act accordingly.

In short, game balance is not an easy or obvious task. (But you probably could have figured that out, given that I'm going to talk for ten straight weeks on the subject!)

## Towards a critical vocabulary

Just like last summer, we need to define a few key terms that we'll use as we talk about different kinds of balance.

### Determinism

For our purposes, I define a "deterministic" game as one where, if you start with a given game state and perform a particular action, it will always produce the same resulting new game state. Chess, Go, and Checkers are all deterministic. You never have a situation where you move a piece but, due to an unexpected combat die roll, the piece gets lost somewhere along the way. (Unless you're playing a nondeterministic variant, anyway.)

Candyland and Chutes & Ladders are not deterministic. Each has a random mechanism for moving players forward, so you never know quite how far you'll move next turn. Poker is not deterministic, either. You might play several hands where you appear to have the same game state (your hand and all face-up cards on the table are the same), but the actual results of the hand may be different because you never know what the opponents' cards are. Rock-Paper-Scissors is not deterministic, in the sense that any given throw (like Rock) will sometimes win, sometimes lose, and sometimes draw, depending on what the opponent does.

Note that there are deterministic elements to all of these games. For example, once you have rolled your die in Chutes & Ladders, called the hand in Poker, or made your throw in Rock-Paper-Scissors, resolving the turn is done by the (deterministic) rules of the game. If you throw Rock and your opponent throws Paper, the result is always the same.

### Non-determinism

The opposite of a deterministic game is a non-deterministic game. The easiest way to illustrate the difference is by comparing the arcade classic Pac-Man with its sequel, Ms. Pac-Man. The original Pac-Man is entirely deterministic: the ghosts follow an AI that is purely dependent on the current game state. As a result, following a pre-defined sequence of controller inputs on a given level will always produce the exact same results, every time.
Because of this deterministic property, some players were able to figure out patterns of movements; the game changed from one of chasing and being chased to one of memorizing and executing patterns. This ended up being a problem: arcade games required that players play for 3 minutes or less, on average, in order to remain profitable. Pattern players could play for hours. In Ms. Pac-Man, an element of non-determinism was added: sometimes the ghosts would choose their direction randomly. As a result, Ms. Pac-Man returned the focus of gameplay from pattern execution to quick thinking and reaction, and (at the championship levels, at least) the two games play quite differently.

Now, this is not to say that a non-deterministic game is always "better." Remember, Chess and Go are deterministic games that have been played for thousands of years; as game designers today, we count ourselves lucky if our games are played a mere two or three years from the release date. So my point is not that one method is superior to the other, but rather that analyzing game balance is done differently for deterministic versus non-deterministic games. Deterministic games can theoretically undergo some kind of brute-force analysis, where you look at all the possible moves and determine the best one. The number of moves to consider may be so large (as with the game Go) that a brute-force solve is impossible, but in at least some cases (typically early-game and end-game positions) you can do a bit of number-crunching to figure out optimal moves. Non-deterministic games don't work that way. They require you to use probability to figure out the odds of winning for each move, with the understanding that any given playthrough might give a different actual result.

### Solvability

This leads to a discussion of whether a game is solvable. When we say a game is solvable, in general, we mean that the game has a single, knowable "best" action to take at any given point in play, and it is possible for players to know what that action is. In general, we find solvability to be an undesirable trait in a game. If the player knows the best move, they aren't making any interesting decisions; every decision is obvious. That said, there are lots of kinds of solvability, and some kinds are not as bad as others.

#### Trivial solvability

Normally, when we say a game is solvable in a bad way, we mean that it is trivially solvable: it is a game where the human mind can completely solve the game in real time. Tic-Tac-Toe is a common example of this; young children who haven't solved the game yet find it endlessly fascinating, but at some point they figure out all of the permutations, solve the game, and no longer find it interesting.

We can still talk about the balance of trivially solvable games. For example, given optimal play on both sides, we know that Tic-Tac-Toe is a draw, so we could say in this sense that the game is balanced. However, if you look at all possible games of Tic-Tac-Toe that could be played, you'll find that there are more ways for X to win than O, so you could also say it is unbalanced because there is a first-player advantage (although that advantage can be negated through optimal play by both players). These are the kinds of balance considerations for a trivially solvable game.

#### Theoretical complete solvability

There are games like Chess and Go which are theoretically solvable, but in reality there are so many permutations that the human mind (and even computers) can't realistically solve the entire game. Here is a case where games are solvable but still interesting, because their complexity is beyond our capacity to solve them. It is hard to tell if games like this are balanced, because we don't actually know the solution and don't have the means to actually solve it. We must rely on our game designer intuition, the (sometimes conflicting) opinions of expert players, or tournament stats across many championship-level games, merely to get a good guess as to whether the game is balanced. (Another, impractical, way to balance these games is to sit around and wait for computers to become powerful enough to solve them within our lifetimes, knowing that this may or may not happen.)

#### Solving non-deterministic games

You might think that only deterministic games can be solved. After all, non-deterministic games have random or unknown elements, so "optimal" play does not guarantee a win (or even a draw). However, I would say that non-deterministic games can still be "solved"; it's just that the "solution" looks a lot different: a solution in this case is a set of actions that maximize your probability of winning.

The card game Poker provides an interesting example of this. You have some information about what is in your hand, and what is showing on the table. Given this information, it is possible to compute the exact odds of winning with your hand, and in fact championship players are capable of doing this in real time. Because of this, all bets you make are either optimal, or they aren't. For example, if you compute that you have a 50/50 chance of winning a $300 pot, and you are being asked to pay $10 to stay in, that is clearly an optimal move for you: if you lost $10 half of the time and won $300 the other half, you would come out ahead. In this case, the "solution" is to make the bet.

You might wonder, if Poker is solvable, what stops it from becoming a boring grind of players computing odds with a calculator and then betting or not based on the numbers? From a game balance perspective, such a situation is dangerous: not only do players know what the best move is (so there are only obvious decisions), but sometimes optimal play will end in a loss, effectively punishing a player for their great skill at odds computation! In games like this, you need some kind of mechanism to get around the problem of solvability leading to player frustration.
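As a small aside, the pot-odds reasoning in the Poker example can be sketched in a few lines of code. This is a toy illustration only (the function name is mine, and real Poker EV would also have to account for ties, split pots, and future betting rounds); the numbers are the ones from the example in the text:

```python
def call_ev(win_prob, pot, cost_to_call):
    """Expected value of calling a bet: win the pot with probability
    win_prob, lose the cost of the call the rest of the time."""
    return win_prob * pot - (1 - win_prob) * cost_to_call

# The example from the text: a 50/50 shot at a $300 pot, $10 to stay in.
print(call_ev(0.5, 300, 10))  # 145.0 -- positive, so calling is the "solution"
```

A positive result means the bet is profitable on average, even though any single hand may still be lost; that gap between optimal play and actual outcome is exactly the frustration problem described above.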

The way Poker does this, and the reason it's so interesting, is that players may choose to play suboptimally in order to bluff: betting high with a hand that can't really win. Your opponents' behavior may influence your decisions: if the guy sitting across from you is betting aggressively, is it because he has a great hand and knows something you don't know? Or is he just bad at math? Or is he good at math, but he's trying to trick you into thinking his hand is better than it really is? This human factor is not solvable, which is why at the highest levels Poker is a game of psychology, not math. It is these psychological elements that prevent Poker from turning into a game of pure luck when played by skilled individuals. The solvable aspects of the game are used to inform players, not to play the game for them.

#### Solving intransitive games

"Intransitive games" is a fancy way of saying "games like Rock-Paper-Scissors." Since the outcome depends on a simultaneous choice between you and your opponent, there does not appear to be an optimal move, and therefore there is no way to solve it. But in fact, the game is solvable… it's just that the solution looks a bit different from other kinds of games. The solution to Rock-Paper-Scissors is a ratio of 1:1:1, meaning that you should throw about as many of each type as any other. If you threw more of one type than the others (say, you favored Paper), your opponent could throw the thing that beats your preferred throw (Scissors) more often, which lets them win slightly more than average. So, the "solution" to RPS is to throw each symbol with equal frequency in the long term.

Suppose we made a rules change: every win with Rock counts as two wins instead of one. Then we would have a different solution where the ratios would be uneven. There are mathematical ways to figure out exactly what this new ratio would be, and we will talk about how to do that later in this course. You might find this useful, for example, if you're making a real-time strategy game with some units that are strong against other unit types (in an intransitive way), but you want certain units to be more rare and special in gameplay than others. You might change the relative capabilities to make certain units more cost-efficient or more powerful overall, which in turn would change the relative frequencies of each unit type appearing (given optimal play).

#### Perfect information

A related concept to solvability is that of information availability. In a game with perfect (or complete) information, all players know all elements of the game state at all times. Chess and Go are obvious examples. You might be able to see, for example, that any deterministic game with perfect information is at least theoretically completely solvable.

Other games have varying degrees of incomplete information, meaning that each player does not know the entire game state. Card games like Hearts or Poker work this way; in these games, each player has privileged information, where they know some things the opponents don't, and therefore has a privileged view of the possibility space of the game. With Hearts in particular, the sum of player information is the game state: if players combined their information, the game would have perfect information, and in fact part of the game is trying to figure out the information that the other players know.

Yet other games have information that is concealed from all of the players. An example of this is the card game Rummy. In this game, all players know what is in the discard pile (common information), each player knows what is in his or her own hand but no one else's hand (privileged information), and no player knows what cards remain in the draw deck or what order those cards are placed in (hidden information).

Trading-card games like Magic: the Gathering offer additional layers of privileged information.
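As an aside, the "mathematical ways to figure out exactly what this new ratio would be" can be sketched with a small exact linear solve. This is my own illustration, not the method the course will present later: I encode "every win with Rock counts as two wins" as a zero-sum payoff matrix (an assumption), and the sketch only applies when the balanced mix uses every throw, as it does here:

```python
from fractions import Fraction

def solve_symmetric_game(M):
    """Mixed strategy p that makes the opponent indifferent in a zero-sum
    matrix game: solve p . M[:, j] = v for every column j, with p summing
    to 1. Uses exact fractions via Gauss-Jordan elimination."""
    n = len(M)
    rows = []
    for j in range(n):  # one indifference equation per opposing throw
        rows.append([Fraction(M[i][j]) for i in range(n)] + [Fraction(-1), Fraction(0)])
    rows.append([Fraction(1)] * n + [Fraction(0), Fraction(1)])  # probabilities sum to 1
    m = n + 1  # unknowns: the n probabilities plus the game value v
    for col in range(m):
        pivot = next(r for r in range(col, len(rows)) if rows[r][col] != 0)
        rows[col], rows[pivot] = rows[pivot], rows[col]
        rows[col] = [x / rows[col][col] for x in rows[col]]
        for r in range(len(rows)):
            if r != col and rows[r][col] != 0:
                factor = rows[r][col]
                rows[r] = [a - factor * b for a, b in zip(rows[r], rows[col])]
    solution = [rows[i][-1] for i in range(m)]
    return solution[:n], solution[n]

# Row player's payoffs, rows/columns in the order Rock, Paper, Scissors.
standard = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
rock_double = [[0, -1, 2], [1, 0, -1], [-2, 1, 0]]  # Rock wins count double

print([str(p) for p in solve_symmetric_game(standard)[0]])     # ['1/3', '1/3', '1/3']
print([str(p) for p in solve_symmetric_game(rock_double)[0]])  # ['1/4', '1/2', '1/4']
```

Under this particular encoding, the even 1:1:1 mix shifts to 1:2:1 in favor of Paper: because Rock is now worth more, opponents are drawn to Rock, so the throw that beats Rock gets played more, which is exactly the kind of knock-on frequency change described for RTS unit types.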

In particular, each player knows the contents of cards in their own deck, but not their opponent's, although neither player knows the exact order of cards in their own draw pile. Even more interesting, there are some cards that can give you some limited information on all of these things (such as cards that let you peek at your opponent's hand or deck), and part of the challenge of deck construction is deciding how important it is to gain information versus how important it is to actually attack or defend.

### Symmetry

Another concept that impacts game balance is whether a game is symmetric or asymmetric. Symmetric games are those where all players have exactly the same starting position and the same rules. In one respect, you could say that perfectly symmetric games are automatically balanced: since the players have the exact same starting positions, you know that no player is at an advantage or disadvantage from the beginning.

Chess is almost symmetric, except for that pesky little detail about White going first. Could you make Chess symmetric with a rules change? Yes: for example, if both players wrote down their moves simultaneously, then revealed and resolved the moves at the same time, the game would be completely symmetric (and in fact there are variants along these lines). Note that in this case, symmetry requires added complexity. At the very least, you need extra rules to handle cases where two pieces move into or through the same square, or when one piece enters a square just as another piece exits the square.

However, symmetry alone does not guarantee that the game objects or strategies within the game are balanced; there may still be certain pieces that are much more powerful than others, or certain strategies that are clearly optimal. The game overall might be designed such that player choices during the game contribute greatly to the outcome, and symmetry doesn't change that. Perfect symmetry is therefore not an "easy way out" for designers to make a balanced game.

### The Metagame

The term metagame literally means "the game surrounding the game" and generally refers to the things players do when they're not actively playing the game. Trading card games like Magic: the Gathering are a clear example of this: in between games, players construct a deck, and the contents of that deck affect their ability to win. The players are not playing the game when they do this, but their actions are still affecting their chances to win their next game. Another example would be championship-level Poker or even world-tournament Rock-Paper-Scissors, where players analyze the common behaviors and strategies of their opponents. Professional sports are a great example as well; they have all kinds of things going on in between games: scouting, drafting, trading, training, and so on.

Here is a positive feedback loop that is inherent in any professional sport: teams that win more games get more money, and more money lets them attract better players, which further increases their chance of winning more games. (With apologies to anyone who lives in New York: in one respect, this is the reason everyone else hates the Yankees.) Other sports have metagame mechanics in place to control this positive feedback. American Football includes the following:

- Drafts. When a bunch of players leave their teams to be picked up by other teams, the weakest team gets to choose first. Thus, the weakest teams pick up the strongest players each year.
- Salary caps. If there is a limit to how much players can make, a good team can't just have an infinite supply of talent. Even weaker teams are able to match the max salary for a few of their players, and it prevents a single team from being able to throw infinite money at the problem.
- Player limits. There are a finite number of players allowed on any team.

For games that have a strong metagame, balance of the metagame is an important consideration. Even if the game itself is balanced, a metagame imbalance can destroy the balance of the game.
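The positive feedback loop described above can be made concrete with a toy simulation. Everything here is invented purely for illustration (the starting strengths, the update rule, and the cap value are my own assumptions, not a model of any real league); it just shows that an uncapped loop makes the early leader pull away, while a cap bounds it:

```python
def simulate(seasons, salary_cap=None):
    """Toy feedback loop: a team's share of wins feeds back into its
    'strength' (money spent on players). All numbers are illustrative."""
    strength = [10.0, 12.0]  # team 1 starts slightly ahead
    for _ in range(seasons):
        total = strength[0] + strength[1]
        for i in (0, 1):
            win_share = strength[i] / total   # more strength -> more wins
            strength[i] += 4.0 * win_share    # more wins -> more money -> better players
            if salary_cap is not None:
                strength[i] = min(strength[i], salary_cap)
    return strength

uncapped = simulate(50)
capped = simulate(50, salary_cap=20.0)
# Uncapped, the initial 2-point gap widens every single season;
# capped, both teams eventually sit at the same ceiling.
```

The salary-cap branch is a crude stand-in for the metagame mechanics listed above: it doesn't remove the feedback loop, it just clamps how far it can run.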

These metagame mechanics are not arbitrary or accidental. They were put in place on purpose, by people who know something about game balance, and it's part of the reason why, any given Sunday, the weakest team in the NFL might be able to beat the strongest team.

## Game balance versus metagame balance

In professional sports, metagame fixes make the game more balanced. In TCGs, metagame fixes feel like a hack. Why the difference? The reason is that in sports, the imbalance exists in the metagame to begin with, so a metagame fix for this imbalance is appropriate. In TCGs, the imbalance is either part of the game mechanics or individual game objects (i.e. specific cards); the metagame imbalances that result from this are a symptom and not the root cause. Thus, a metagame fix for a TCG is a response to a symptom, while the initial problem continues unchecked.

From this, you might think that fixing the metagame is a great way to balance the game. Trading card games offer two examples of where this tactic fails.

First, let's go back to the early days of Magic: the Gathering. Some cards are rarer than others, and Richard Garfield clearly thought that rarity itself was a way to balance the game; as a result, some rare cards ended up being flat-out better than their more common counterparts. (In his defense, this was not an unreasonable assumption at the time. He had no way of knowing that some people would spend thousands of dollars on cards just to get a full set of rares, nor did he know that people would largely ignore the rules for "ante," which served as an additional balancing factor.) Today, trading card game designers are more aware of this problem, and while one does occasionally see games where "more rare = more powerful," players are (thankfully) less willing to put up with those kinds of shenanigans.

Second, TCGs have a problem that video games don't have: once a set of cards is released, it is too late to fix it with a "patch" if some kind of gross imbalance is discovered. In drastic cases they can restrict or outright ban a card, or issue some kind of errata, but in most cases this is not practical, so the designers are stuck. Occasionally you might see a designer that tries to balance an overpowered card in a previous set by creating a "counter-card" in the next set. This is a metagame solution: if all the competitive decks use Card X, then a new Card Y that punishes the opponent for playing Card X gives players a new metagame option… but if Card Y does nothing else, it is only useful in the context of the metagame. Some games have gone so far as to print a card that says "When your opponent plays [specific named card], [something really bad happens to them]" with no other effect. This is admittedly an extreme example, so I thought it was worth bringing up; the counter-card might have other useful effects. Either way, this essentially turns the metagame into Rock (dominant deck), Paper (deck with counter-card), Scissors (everything else). This may be preferable to a metagame with only one dominant strategy, but it's not much better, and it mostly shifts the focus from the actual play of the game to the metagame: you may as well just show your opponent your deck and determine a winner that way.

The lesson here is that a game balance problem in one part of a game can propagate to and manifest in other areas, so the problems you see during playtesting are not always the exact things that need to be fixed. When you identify an imbalance, before slapping a fix on it, ask yourself why this imbalance is really happening, what is actually causing it… and then, what is causing that, and what is causing that, and so on, as deep as you can go.

I'm going to try to leave you each week with some things you can do right now to improve the balance of a game you're working on, and then some "homework" that you can do to improve your skills.

## Game Balance Made Easy (For Lazy People)

If you're having trouble balancing a game, the easiest way to fix it is to get your players to do this for you. One way to do this is auction mechanics. Let me give an example of how this works.

Suppose you're a designer at Blizzard working on Warcraft IV, and you have an Orcs-vs-Humans two-player game that you want to balance, but you think the Orcs are a little more powerful than the Humans (but not much). You decide the best way to balance this is to reduce the starting resources of the Orcs. How much less? Well, if the Humans start with, say, 100 Gold… maybe the Orcs start with a little less, but you have no idea how much less. Here's a solution: make players bid their starting Gold on the right to play the Orcs at the start of the game. Whoever bids the most plays the Orcs and loses their bid; the other player starts with the full 100 Gold and plays the weaker Humans. Eventually, players will reach a consensus and start bidding about the same amount of Gold, and this will make things balanced.

I say this is lazy design because there is a correct answer here, but instead of taking the trouble to figure it out, you instead shift that burden to the players and make them balance the game for you. There is nothing wrong with auctions as a game mechanic, mind you (they are often very compelling and exciting), but they can be used as a crutch to cover up a game imbalance. Note that this can actually be a great tool in playtesting: let your testers come to a consensus of how much something is worth, then just cost it accordingly in the final version (without including the auction). Feel free to add an auction in a case like this, if it's important to the gameplay.

Here's another way to get players to balance your game for you: in a multiplayer free-for-all game, include mechanics that let the players easily gang up on the leader. That way, if one player finds a game imbalance, the other players can cooperate to bring them down. Again, there is nothing inherently wrong with giving players the ability to form alliances against each other… but doing it for the sole purpose of letting players deal with your poor design and balancing skills should not be the first and only solution. Kill-the-leader mechanics serve as a strong negative feedback loop, and this brings other gameplay problems with it. The game tends to take longer, and negative feedback has other consequences: early-game skill is not as much a factor as late-game skill. Players may "sandbag" (play suboptimally on purpose) in order to not attract too much attention, and some players may feel that the outcome of the game is decided more by their ability to not be noticed than by their actual game skill. Players who do well (even without rules exploits) may feel like the other players are punishing them for being good players, and you need to be careful of that.

Okay, is there anything you can do right now to improve the balance of a game you're working on? I would say: of course, that's what game balance is all about, but at this point there's not a lot to do, so instead I'm going to start by saying what not to do. Examine your game to see if you are using your players as a game balance crutch (through auctions, kill-the-leader mechanics, or similar). Try removing that crutch and seeing what happens. You might find out that these mechanics are covering up game imbalances that will become more apparent when they're removed. When you find the actual imbalances that used to be obscured, you can fix them and make the game stronger. (You can always add your auctions or kill-the-leader mechanics back in later, if they are important to the gameplay.)

## Homework

I'll go out on a limb and guess that if you're reading this, you are probably playing at least one game in your spare time. If you work in the game industry as a designer, you may be playing a game at your day job for research. Maybe you have occasion to watch other people play, either while playtesting your own game or on television (such as watching a game show or a professional sports match).

Since we just talked about vocabulary this week (symmetry, determinism, solvability, perfect information, and the metagame), as you play (or watch) these games this week, don't just play/watch for fun. Instead, think about the actions in the game and ask yourself if you think the game is balanced or not. Why do you think that? If you feel it's not, where are the imbalances? What are the root causes of those imbalances, and how would you change them if you wanted to fix them? Write down your thoughts if it helps.

The purpose of this is not to actually improve the game you're examining, but to give you some practice in thinking critically about game balance. It's emotionally easier to find problems in other people's games than your own (even if the actual process is the same), so start by looking at the balance or imbalance in other people's games first.

there’s a 100-to-1 linear relationship between Gold and Dexterity. its only purpose is to act as a continual slow drain on your resources. Add +1 to one value. because you can’t really know how to balance a game or how to choose the right numbers unless you first know what kinds of numbers you’re dealing with. but there are some cases where it makes sense to have two different values that just happen to have a one-to-one conversion. randomly chosen each time). Sometimes. Even within a game. While these are clearly two separate values that serve very different purposes within the game. it’s equivalent to adding +1 to the other. suppose I tell you that a sword costs 250 Gold. maybe 250 Gold is a lot at the start of the game but it’s pocket change at the end. I tell you that the player only gets 1 Gold at most from winning each combat. They only have meaning in relation to each other. In World of Warcraft. something that they find while adventuring. Exponential and Triangular Relationships . Each character also has Gold. If a healing spell always costs 5 MP and heals exactly 50 HP. you can treat the two values as identical. we’re going to be examining relationships between numbers. and has no other value (and cannot be sold or exchanged for anything else). and so on). a balance change is as simple as replacing one kind of number with another. As an example. This is going to be important later. If you can spend 100 Gold to gain +1 Dexterity. you might just make a single value. Note that we are so far ignoring cases where a relationship is partly random (maybe that healing spell heals somewhere between 25 and 75 HP. then there is a 1-to-10 linear relationship between MP and HP. and then it’s really expensive. That has no meaning. and each one only has meaning in relation to the other. so we’re conveniently leaving that out of the picture for now. And so on. where the conversion rate between two values is a constant. and it is versatile (you can use it to bribe guards. 
Now you have two numbers.Level 2: Numeric Relationships This Week’s Topic This week. That tells you nothing unless you know how much damage enemies can take before they keel over dead. Ultima III: Exodus has Food. Or. and then you know the sword is cheap. Numbers in games don’t exist in a vacuum. something that each character needed to not starve to death in a dungeon. 1000 Food costs 1000 Gold. Damage and Hit Points. A more general case of an identity relationship is the linear relationship. what kinds of ways can numbers be related to each other? Identity and Linear Relationships Probably the simplest type of relationship. Unlike food. Gold doesn’t degrade over time. and could only buy it from food vendors in towns. Randomness is something we’ll get into in a few weeks. For game balance purposes. Food and Gold have an identity relationship… although it is one-way in this case. You would think that in such a case. each unit of Food costs 1 Gold (10 Food costs 10 Gold. the relative value of something can change. purchase weapons or armor… or purchase Food). For example. Food decreases over time. suppose I tell you that the main character in a game does 5 damage when he attacks. buy hints. so understanding what kinds of numbers there are and getting an intuition for how they work is something we need to cover before anything else. In particular. is where two values change in exactly the same way. Or. 1 Gold used to be a tidy sum. until I tell you that the player routinely finds bags with thousands of Gold lying around the country side. since you can convert Gold to Food but not vice versa. With all that said. which math geeks would call an identity relationship. I’m going to talk about the different kinds of numbers you see in games and how to classify them. but today it takes tens or hundreds to buy the really epic loot. You never got food as an item drop.

Exponential and Triangular Relationships

Sometimes, a linear relationship doesn't work for your game. You may have a relationship where there are either increasing or diminishing returns, and you need a numeric relationship that increases or decreases its rate of exchange as you exchange more or less at a time.

For example, suppose a player can pay resources to gain additional actions in a turn-based strategy game. One extra action might be a small boost, but three or four extra actions might be like taking a whole extra turn: it might feel a lot more than 3 or 4 times as powerful as a single action. This would be increasing returns: each extra action is more valuable than the last. You would therefore want the cost of each extra action to increase.

The simplest way to do this is an exponential relationship: when you add to one value, multiply the other one. An example is doubling: for each +1 you give to one value, double the other one. You have to be careful when using exponential relationships, because the numbers get really big, really fast when you do this. For example, nearly every card in any Collectible Card Game that I've played that has the word "double" on it somewhere (as in, one card doubles some value on another card) ends up being too powerful. I know offhand of one exception, and that was an all-or-nothing gamble where it doubled your attack strength but then made you lose at the end of the turn if you hadn't won already! The lesson here is to be very, very careful when using exponentials.

Because exponential numbers get prohibitively large very quickly, you may instead want something that increases, but not as fast as an exponential. A common pattern in game design is the triangular relationship. If you're unfamiliar with the term, you have probably at least seen this series: 1, 3, 6, 10, 15, 21, 28, … That is the classic triangular pattern (so called because several ways to visualize it involve triangles). An interesting thing to notice about triangular numbers is what happens when you look at the difference between each successive pair of numbers. The difference between the first two numbers (1 and 3) is 2. The difference between the next two numbers (3 and 6) is 3. The next difference (between 6 and 10) is 4, and so on. The successive differences are linear: counting up from zero, they follow the pattern 1, 2, 3, 4… Triangular numbers usually make a pretty good first guess for increasing costs. In our earlier example, maybe the first extra action costs 1 resource, the next costs 2 (for a running total of 3), the next costs 3 (for a total of 6), and so on. This gives you a relationship where buying 1, 2, 3, 4 or 5 of something costs 1, 3, 6, 10 or 15, respectively.

What if you want a decreasing cost, where something starts out expensive and gets cheaper as you buy more? This is actually a pretty common thing in game balance. For example, maybe you have a game where players have incentive to spend all of their in-game money every turn to keep pace with their opponents, and hoarding cash has a real opportunity cost (that is, they miss out on opportunities they would have had if they'd spent it instead). In such cases, buying a lot of something all at once is actually not as good as buying one at a time, so it makes sense to give players a discount for "buying in bulk," as it were. Here we have a decreasing return, where each extra item purchased is not as useful as the last. In that case, figure out how much the first one should cost, then make each one after that cost 1 less. For example, suppose you decide the first Widget should cost 7 Gold. Then try making the second cost 6 Gold (for a total of 13), the third 5 Gold (total of 18), and so on.

Note, though, that if you have a math formula, the game balance will break at the mathematical extremes. In our Widget example above, you will eventually reach a point where each successive item costs zero (or even negative), which gets kind of ridiculous. The design solution is to set hard limits on the formula, so that you don't ever reach those extremes: maybe the players are simply prevented from buying more than 3 or 4 Widgets at a time.
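Both cost patterns above are easy to compute directly. A small sketch (the function names and the hard limit of 4 are my own choices for illustration; the Widget prices are the ones from the example):

```python
def triangular(n: int) -> int:
    """Total cost of n purchases when the k-th purchase costs k resources,
    producing the series 1, 3, 6, 10, 15, 21, 28, ..."""
    return n * (n + 1) // 2

def widget_total_cost(n: int, first_cost: int = 7, limit: int = 4) -> int:
    """Decreasing-return pricing from the Widget example: the first Widget
    costs 7 Gold and each one after costs 1 Gold less. The hard limit keeps
    the formula away from zero or negative prices."""
    if n > limit:
        raise ValueError("hard limit: cannot buy that many Widgets at once")
    return sum(first_cost - k for k in range(n))
```

The hard limit in `widget_total_cost` is the "design solution" mentioned above: the formula itself would happily keep going into nonsense territory, so the game simply refuses to evaluate it there.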

Other Numeric Relationships

While linear and triangular relationships are among the most common in games, they are not the only ones available; in fact, there are an infinite number of potential numeric relationships. If none of the typical relationships work for your game, come up with your own custom relationship. Maybe you have certain cost peaks, where several specific quantities are particularly cheap (or expensive). You might have oscillations, where certain thresholds cost more than others because those have in-game significance. For example, if everything in your game has 5 hit points, there is actually a huge difference between doing 4 or 5 damage, so that 5th point of damage will probably cost a lot more than you would otherwise expect. You can create any ratio between two values that you want… but do so with some understanding of what effect it will have on play!

Relationships Within Systems

Individual values in a game usually exist within larger systems. By analyzing all of the different numbers and the relationships between them in a game's systems, we can gain a lot of insight into how the game is balanced. Let us take a simple example: the first Dragon Warrior game for the NES.

In the game's combat system, you have four main stats: Hit Points (HP), Magic Points (MP), Attack and Defense. You lose if your HP is ever reduced to zero. You are exploring game areas, and every few steps you get attacked by an enemy. After you defeat a monster, you get two things: Gold and Experience (XP). Collect enough XP and you'll level up, which increases all of your stats (HP, MP, Attack and Defense). These interact with the economic and leveling systems in the game. Let's examine the leveling system first.

How are all of these numbers related? Random encounters are related to HP: each encounter reduces HP (you can also say it the other way: by walking around getting into fights, you can essentially convert HP into encounters). This is an inverse relationship, as more encounters means less HP. This is a game of attrition.

There's a direct relationship between HP and Defense: the more Defense you have, the less damage you take, which means your HP lasts longer. Effectively, increasing your Defense is equivalent to giving yourself a pile of extra HP. We see the same relationship between HP and Attack: the higher your Attack stat, the faster you can defeat an enemy, and if you defeat an enemy faster, that means it has less opportunity to damage you, so you take less damage. You can survive more fights with higher Attack, which means it preserves your HP.

MP is an interesting case, because you can use it for a lot of things. There are healing spells that directly convert MP into HP. There are attack spells that do damage (hopefully more than you'd do with a standard attack); these finish combats earlier, so they again act to preserve your HP. There are buff/debuff spells that likewise reduce the damage you take in a combat. There are teleport spells that take you across long distances, so that you don't have to get in fights along the way, which again means they preserve your HP. So even though MP is versatile, virtually all of the uses for it involve converting it (directly or indirectly) into HP.

If you draw this all out on paper, you'll see that everything — Attack, Defense, MP, Monster Encounters — is linked directly to HP. Ironically, the designers put the HP stat in the middle of everything! This is a common technique: making a single resource central to all of the others, and it is best to make this central resource either the win or loss condition for the game. As the loss condition, HP fills that role here.

Now, there's one additional wrinkle: the combat system interacts with two other systems in the game through the monster encounters. Thus, this creates a feedback loop: defeating enemies causes you to gain a level, which increases your stats, which lets you defeat more enemies. If there weren't some kind of counteracting force in the game, this would be a positive feedback loop that would cause the player to gain high levels of power very fast.
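Since every use of MP ultimately converts into HP preserved, two spells can be compared by putting both in HP terms. This is a rough sketch under simplified combat assumptions of my own (a single enemy dealing fixed damage each round; none of the names or numbers come from Dragon Warrior):

```python
from math import ceil

def hp_saved_by_attack_spell(spell_damage: int, basic_attack: int,
                             enemy_hp: int, enemy_damage: int) -> int:
    """HP the player avoids losing because an attack spell ends the
    combat in fewer rounds than basic attacks alone would."""
    rounds_without = ceil(enemy_hp / basic_attack)    # attacking normally
    remaining = max(enemy_hp - spell_damage, 0)       # enemy HP after the spell
    rounds_with = 1 + ceil(remaining / basic_attack)  # 1 round to cast, then attack
    rounds_saved = max(rounds_without - rounds_with, 0)
    return rounds_saved * enemy_damage                # damage never taken
```

If a healing spell with the same MP cost restores more HP than the attack spell saves in a given fight, the healing spell is the better buy in that situation; the comparison shifts from fight to fight, which is part of what makes balancing against a central resource interesting.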

That counteraction comes in the form of an increasing XP-to-Level relationship, so it takes progressively more and more XP to gain a level, in order to make the player really work for it. Another counteracting force is that of player time: while the player could maximize their level by just staying in the early areas of the game beating on the weakest enemies, the gain is so slow that they are incentivized to take some risks so they can level a little faster.

Examining the economic system, Gold is used for a few things. Its primary use is to buy equipment which permanently increases the player's Attack or Defense, thus effectively converting Gold into extra permanent HP. Gold can also be used to buy consumable items, most of which mimic the effects of certain spells; thus you can (on a limited basis, since you only have a few inventory slots) convert Gold to temporary MP. Here we see another feedback loop: defeating monsters earns Gold, which the player uses to increase their stats, which lets them defeat even more monsters. However, what prevents this from being a positive feedback loop is that it's limited by progression: you have a limited selection of equipment to buy, and the more expensive stuff requires that you travel to areas that you are just not strong enough to reach at the start of the game. Once you buy the most expensive equipment in the game, extra Gold doesn't do you much good.

Another loop that is linked to the economic system is that of progression itself. Many areas in the game are behind locked doors, and in order to open them you need to use your Gold to purchase magic keys. You defeat monsters, get Gold, use it to purchase Keys, and use those Keys to open new areas which have stronger monsters (which then let you get even more Gold and XP). Of course, this loop is itself limited by the player's stats: unlocking a new area with monsters that are too strong to handle does not help the player much.

How would a designer balance things within all these systems? By relating everything back to the central value of HP. For example, say you have a healing spell and a damage spell, and you want to know which is better. Calculate the amount of HP that the player would no longer lose as a result of using the damage spell and ending the combat earlier, and compare that to the amount of HP actually restored by the healing spell. Or, say you want to know which is better, a particular sword or a particular piece of armor: figure out how much extra HP each would save you, and compare.

Of course, this does not mean that everything in the game must be exactly equal to be balanced. For example, you may want spells that are learned later in the game to be more cost-effective, so that the player has reason to use them. You may also want the more expensive equipment to be less cost-effective, in order to make the player really work for it. However, at any given time in the game, you probably want the choices made available at that time to be at least somewhat balanced with each other: if the player reaches a new town with several new pieces of equipment, you would expect those to be roughly equivalent in terms of their HP-to-cost ratios.

Another Example

You might wonder: if this kind of analysis works for a stat-driven game like an RPG, is it useful for any other kind of game? The answer is yes. Let's examine an action title, the original Super Mario Bros. (made popular from the arcade and NES versions). What kinds of resources do we have in Mario? There are lives, coins, enemies, and time (from a countdown timer). There's actually a numeric score, too. And then there are objects within the game — coin blocks and so on — which can sometimes work for or against you depending on the situation. Let us proceed to analyze the relationships.

• Coins: there is a 100-to-1 relationship between Coins and Lives, since collecting 100 coins awards an extra life. There is a 1-to-200 relationship between Coins and Score, since collecting a coin gives 200 points. There is also a relationship between Coin Blocks and Coins, in that each block gives you some number of coins.

• Time: there is a 100-to-1 relationship between Time and Score, since you get a time bonus at the end of each level. There is also an inverse relationship between Time and Lives, since running out of time costs you a life.

• Enemies: there is a relationship between Enemies and Score, since killing enemies gives you from 100 to 1000 points (depending on the enemy). There is an inverse relationship between Enemies and Lives, since sometimes an enemy will cost you a life. (In a few select levels there is potentially a positive relationship between Enemies and Lives, as stomping enough enemies in a combo will give extra lives, but that is a special case.)

• Lives: there is this strange relationship between Lives and everything else, because losing a Life resets a bunch of things that give scoring opportunities: losing a Life resets the Coins, the amount of time, and the Enemies on a level.

• Relationship between Lives and Score: there is no direct link between Lives and Score. Unlike other arcade games of the time, you cannot earn extra Lives by getting a sufficiently high Score. However, since everything is tied to Score, and losing a Life resets things that give scoring opportunities, indirectly you can convert a Life to Score. Note that since Coins give you extra Lives, and losing a Life resets Coins, any level with more than 100 Coins would provide a positive feedback loop where you could die intentionally, get more than 100 Coins, and repeat to gain infinite lives. If you have a level that contains 200 Coins, then the 100-Coins-to-1-Life relationship combines with the 1-Life-to-200-Coins relationship in that level, to create a doubling effect where you convert 1 Life to 2 Lives in a single iteration. The original Super Mario Bros. did not have any levels like this, but Super Mario 3 did.

Looking at these relationships, we see that Score is actually the central resource in Super Mario Bros., since everything is tied to Score. This makes sense in the context of early arcade games, since the win condition is not "beat the game," but rather "get the highest score." Interestingly, the games that followed the original eliminated Score entirely, and everything was later related to Lives instead. The Mario series survived this quite well. When you're designing a game, note that you can change your resources around, and even eliminate a resource or change the central resource to something else; we just saw one example of this in the Mario games, with Lives and Coins.

How would you balance these resources with one another? There are a few ways. You can figure out how many enemies you kill and their relative risks (that is, which enemies are harder to kill and which are more likely to kill you), compare that with how many coins you find in a typical level and how much time you typically complete the level with, and also work out how much each of these activities (coin collecting, enemy stomping, time completion) contributes to the final score. Then, you can either change the amount of score granted to the player from each of these things (making a global change throughout the game), or you can vary the number of coins and enemies, or the length of a level (making a local change within individual levels). Any of these techniques could be used to adjust a player's expected total score.

Interactions Between Relationships

When you form chains or loops of resources and relationships between them, the relationships stack with each other. They can either combine to become more intense, or they can cancel each other out (completely or partially). Here's another example, from the PS2 game Baldur's Gate: Dark Alliance. In this action-RPG, you get XP from defeating enemies, which in turn causes you to level up. The XP-to-Level relationship is triangular: going from Level 1 to Level 2 requires 1000 XP, Level 2 to Level 3 costs 2000 XP, rising to Level 4 costs 3000 XP, and so on. Each time you level up, you get a number of upgrade points to spend on special abilities. These also follow a triangular progression: at Level 2 you get 1 upgrade point, at Level 3 you get 2 points, and so on.
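The Coins-to-Lives loop described above can be checked mechanically. A minimal sketch (the conversion rates are the ones from this analysis; the loop test is my own simplification, which ignores Score and assumes the player collects every coin on a run, so it flags only levels where one run nets strictly more than the one life spent replaying):

```python
COINS_PER_LIFE = 100   # collecting 100 coins awards an extra life
POINTS_PER_COIN = 200  # each coin collected is worth 200 points

def lives_per_run(coins_in_level: int) -> int:
    """Lives earned from one full run through a level."""
    return coins_in_level // COINS_PER_LIFE

def infinite_lives_loop(coins_in_level: int) -> bool:
    """True if dying intentionally to replay the level is profitable:
    one run must return more than the single life spent to repeat it."""
    return lives_per_run(coins_in_level) > 1
```

By this check, a 200-coin level converts 1 Life into 2 Lives per iteration, exactly the doubling effect described above.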

The next level gives you 3 points, then the next gives you 4 points, and so on. These relationships chain together: XP gives you Levels, and Levels give you Upgrade Points, so the two triangular relationships actually cancel with each other to form a linear relationship of 1000 XP to 1 Upgrade Point. While the awarding of these upgrade points is staggered based on levels, on average you are earning them at a constant XP rate. Since XP is the actual resource the player is earning, it is the XP-to-Points ratio we care about.

How does Time fit into this (as in, the amount of time the player spends on the game)? If the player were fighting the same enemies over and over for the same XP rewards, there would be a triangular increase in the amount of time it takes to earn a level (and a constant amount of time to earn each Upgrade Point, on average). However, as with most RPGs, there is a system of increasing XP rewards as the player fights stronger monsters, and this kind of system has some interesting effects:

• If the XP rewards increase at exactly the same rate as the triangular level costs, the player will level up at a more or less constant rate. The player gets an increasing amount of XP per unit time, and since the XP-to-Point ratio is linear, they are actually getting an increasing rate of Upgrade Point gain!

• If the XP rewards increase more slowly than the rate of level-ups, the player will level faster in the early game and slower in the late game (which is usually what you want, as it gives the player frequent rewards early on and starts spacing them out once they've committed to continued play).

• If the XP rewards increase faster than the triangular rate of the levels themselves, the player will actually level up faster as the game progresses.

In this case, the increasing XP curve doesn't increase as fast as the triangular progression of level-ups, which means that it doesn't completely cancel out the triangular effect, but it does partly reduce it — in other words, you level up slightly faster in the early game and slower in the late game, but the play time between level gains doesn't increase as fast as a triangular relationship.

Note another useful property this leveling system has: it provides a negative feedback loop that keeps the player in a narrow range of levels during each point in the game. Consider two situations:

• Over-leveling: the player has done a lot of level-grinding and is now too powerful for the enemies in their current region. However, the XP gains aren't that good if their level is already high, so they are unlikely to gain much in the way of additional levels by defeating weaker enemies.

• Under-leveling: suppose instead the opposite case, where the player has progressed quickly through the game and is now at a lower level than the enemies in the current region. Here the XP gains will be relatively high (compared to the player's level), and the player will only need to defeat a few enemies to level up quickly; they'll then be able to defeat the nearby enemies faster.

In either case, the game's system pushes the player's level towards a narrow range in the middle of the extremes. The maximum level a player can reach is effectively limited by the XP-reward curve. It is much easier to balance a combat system to provide an appropriate level of challenge when you know what level the player will be at during every step of the way!

How would you balance the XP system? Suppose you decide to have the player gain levels faster in the early game and slower in the late game, but you never want them to go longer than an hour between levels. Simple: figure out what level they will be at in the late game, scale the XP gains to take about an hour per level-up at that point (so they don't have to stick around too long), and then work your way backwards from there. By changing the rate of XP gain (that is, exactly how fast the XP rewards increase for defeating enemies), you can change both the rate of leveling up and the rate of Upgrade Point gains.
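The cancellation of the two triangular progressions can be verified directly. A small sketch using the numbers quoted above (1000/2000/3000 XP per level and 1/2/3 upgrade points per level; the function names are mine):

```python
def xp_to_reach_level(level: int) -> int:
    """Cumulative XP to reach a level: 1000 XP for Level 2, +2000 more
    for Level 3, +3000 more for Level 4, and so on (triangular)."""
    return 1000 * (level - 1) * level // 2

def points_at_level(level: int) -> int:
    """Cumulative upgrade points: 1 at Level 2, +2 at Level 3, +3 at
    Level 4, and so on (also triangular)."""
    return (level - 1) * level // 2
```

Dividing one by the other gives a constant 1000 XP per Upgrade Point at every level: the two triangular relationships cancel into a linear one, just as described.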

How Relationships Interact

How do you know how two numeric relationships will stack together? Here's a quick-reference guide:

• Two linear relationships combine: multiply them together. If you can turn 1 of Resource A into 2 of Resource B, and 1 Resource B into 5 Resource C, then there is a 1-to-10 conversion between A and C (2×5).

• A linear relationship combines with an increasing (triangular or exponential) relationship: the increasing relationship just gets multiplied by a bigger number, but the nature of the curve stays the same.

• Two increasing relationships combine: you end up with an increasing relationship that's even faster than either of the two individually.

• A linear relationship counteracts an increasing relationship: if the linear conversion is large, it may dominate early on, but eventually the increasing relationship will outpace it. Exactly where the two curves meet and the game shifts from one to the other depends on the exact numbers.

• Two increasing relationships counteract one another: depends on the exact relationships. Two identical relationships (such as two pure triangulars) will cancel out to form a linear or identity relationship. In general, an exponential relationship will dominate a triangular one (how fast this happens depends on the exact numbers used).

If You're Working On a Game Now…

Are you designing your own game right now? Try this: make a list of every resource or number in your game on a piece of paper. Put a box around each, and spread the boxes out. "Time" (as in, the amount of time the player spends actually playing the game) should be one of your resources. Then, draw arrows between each set of boxes that has a direct relationship in your game, and label the arrow with the kind of relationship (linear, triangular, exponential, etc.).

Use this diagram to identify a few areas of interest in the balance of your game:

• Do you see any loops where a resource can be converted to something else, then maybe something else, and then back to the original? If you get back more of the original than you started with by doing this, you may have just identified a positive feedback loop in your game.

• Do you see a central resource that everything else seems tied to? If so, is that central resource either the win or loss condition, or does it seem kind of arbitrary? If not, does it make sense to create a new central resource, perhaps by adding new relationships between resources?

If your game is a single-player game with some kind of progression system, you can use your diagram to see if the rewards and power gains the player gets from playing are expected to increase, decrease, or remain constant over time. You can also use the diagram to predict changes to gameplay: if you change the nature of a relationship, you might be able to make a pretty good guess at what other relationships will also change as a result, and what effect that might have on the game's systems overall.
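Two of the checks suggested by the diagram exercise, chaining linear conversions and sniffing out positive feedback loops, both reduce to multiplying rates along a path. A minimal sketch (the helper names are my own):

```python
def chain_rate(rates):
    """Combine a chain of linear conversions by multiplying their rates:
    1 A -> 2 B and 1 B -> 5 C gives 1 A -> 10 C."""
    product = 1.0
    for rate in rates:
        product *= rate
    return product

def positive_feedback_loop(loop_rates):
    """A loop of conversions that returns more of the starting resource
    than you put in is a positive feedback loop worth a closer look."""
    return chain_rate(loop_rates) > 1.0
```

Walking each loop in your resource diagram through `positive_feedback_loop` is a quick first pass before playtesting confirms (or refutes) the problem.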

Homework

Here's your game balance challenge for this week. First, choose any single-player game that you've played and are familiar with, that has progression mechanics. Examples of games with progression are action-adventure games (Zelda), RPGs (Final Fantasy), action-RPGs (Diablo), or MMORPGs (World of Warcraft). In these games, there is some kind of progression where the player gains new abilities and/or improves their stats over time. As the player progresses, enemies get stronger; again, this could just mean they have higher stats, or they might also gain new abilities that require better strategy and tactics to defeat. I'll recommend that you choose something relatively simple, such as an NES-era game or earlier. Even simple games can have pretty involved systems, as you've seen from the earlier examples here, and tweaking these can provide an interesting strategic shift for the players. You're going to analyze the numbers in this game.

Start by asking yourself this question: overall, what was the difficulty curve of the game like? Did it start off easy and get slowly, progressively harder? Or did you notice one or more of these undesirable patterns:

• A series of levels that seemed to go by very slowly, because the player was underpowered at the time and did not gain enough power fast enough to compensate, so you had to grind for a long time in one spot.

• A sudden spike in difficulty with one dungeon that had much more challenging enemies than those that came immediately before or after.

• A dungeon that was much easier than was probably intended, allowing you to blast through it quickly since you were much more powerful than the inhabitants by the time you actually reached it.

• Perhaps you got a certain weapon, ally, or special ability that was really powerful, and made you effectively unbeatable from that point on until the end of the game.

• The hardest point in the game was not at the end, but somewhere in the middle.

So far, all you're doing is using your memory and intuition, and it probably takes you all of a few seconds to remember the standout moments of epic win and horrible grind in your chosen game. It's useful to build intuition, but it is even better to make your intuition stronger by backing it up with math. So, let's start analyzing.

Next, seek a strategy guide or FAQ that gives all of the numbers for the game. A web search may turn up surprisingly detailed walkthroughs that show you every number and every resource in the game. Using the FAQ as your guide, make a list on paper of all of the resources in the game and exactly how they are all related. Also show all relationships between the resources (draw arrows between them, and label the arrows with the relationship type).

From this diagram, once you've written down your intuitive guesses at the points where the game becomes unbalanced, you may be able to identify exactly what happened. For example, maybe the game felt too easy after you found a really powerful weapon. In this case you'd look at the combat system: look at how much damage you do versus how much enemies can take, as separate curves throughout the game, and identify the sudden spike in power when you get that weapon. As another example, maybe you seemed to level up a lot in one particular dungeon, gaining a lot of power in a short time. Here you might start by looking at the leveling system: perhaps there is a certain range of levels where the XP requirements to gain a level are much lower than the rest of the progression curve. You might also look at the combat reward system: maybe you just gain a lot more XP than expected from the enemies in that dungeon. You may be able to graphically see the relationship of your power level versus that of the enemies over time.

Lastly, if you do identify unbalanced areas of the game from this perspective, you should be able to use your numbers and curves to immediately suggest a change. Not only will you know exactly which resource needs to be changed, but also by how much. This exercise will probably take you a few hours, as researching a game and analyzing the numbers is not a trivial task (even for a simple game). However, after doing this, you will be much more comfortable with identifying resources and relationships in games, and with using your understanding of a game's systems to improve the balance of those systems.

you also often see a class of units that all behave similarly. there is some kind of resource that is used to buy stuff: Gil in Final Fantasy. a better deck isn’t just “more expensive”). like splash damage is strong against enemies that come clustered together (intransitive). doing more damage) than their lower-level counterparts. • Professional sports in the real world do this with monetary costs: a player who is better at the game commands a higher salary. Here are some examples: • RPGs often have currency costs to upgrade your equipment and buy consumable items. some things are just flat out better than others in terms of their in-game effects. • Shooters with progression mechanics like BioShock and Borderlands include similar mechanics. but it also performs its function of restoring your needs much more effectively. fliers beat footmen” comparison. because we really get to dive deep into nuts-and-bolts game balance in a very tangible way. • Retro arcade games generally have a transitive scoring mechanism. Mana in Magic: the . We’ll be talking about something that I’ve been doing for the past ten years. and you also spend ADAM to buy new special abilities. where everything is better than something else and there is no single “best” move. Some unit types might be strong or weak against others inherently (in an intransitive fashion). A really good bed in The Sims costs more than a cheap bed. In transitive games. • Sim games (like The Sims and Sim City) have costs for the various objects you can buy. How do we know how much to cost things? That is a big problem. but they cost more to buy. intransitive is like Rock-Paper-Scissors. the more points you get for defeating it. • Turn-based and real-time strategy games may have a combination of transitive and intransitive mechanics. The more dangerous or difficult an enemy. but the individual cards themselves generally have some kind of cost and they are all balanced according to that cost. 
although until now I’ve never really written down this process or tried to communicate it to anyone. and we balance that by giving them different costs.g. and that is what we’ll be discussing this week. what kinds of games do we see that have transitive mechanics? The answer is. but in most of these games the individual towers are upgradeable to stronger versions of themselves and the stronger versions cost more (transitive). In BioShock. I’m going to talk about how to balance transitive mechanics within games. there are costs to using vending machines to get consumable items. Leveling is also transitive: a higher-level character is better than a lower-level character in nearly every RPG I can think of. most of them. heavy infantry. like the typical “footmen beat archers. so that more expensive cards are more useful or powerful. You might notice something in common with most of these examples: in nearly all cases.Level 3: Transitive Mechanics and Cost Curves This Week’s Topic This week is one of the most exciting for me. archers beat fliers. thank goodness – that is. • Collectible-card games are another example where there may be intransitive mechanics (and there are almost always intransitive elements to the metagame. • Tower Defense games are often intransitive in that certain tower types are strong against certain kinds of attacks. However. for example. but with stronger and more expensive versions of weaker ones… such as light infantry vs. so that the better things cost more and the weaker things cost less in the game. and often these are transitive. As a reminder. Examples of Transitive Mechanics Just to contextualize this. higher-level abilities are just better (e.

this is the theory. Personally. that’s a limitation (you can’t just buy one for everyone in your party). it doesn’t matter: a negative benefit is the same as a cost. there’s one additional concept I want to introduce. If costs are everything bad. You might be wondering how we would relate two totally different things (like a Gold cost and the number of Attack Points you get from equipping a sword). But for example. Add up the costs for an object. That way I only have to add numbers and never subtract them. and all in-game effects can be put in terms of one or the other. Adding is easier. So. We want the costs and benefits to be equal. Undercosted Let’s assume for now that we can somehow relate everything back to a single resource cost so they can be directly compared. and our goal is to equalize everything. How do you know whether to call something a “cost” or a “benefit”? For our purposes. but if most swords in the game prevent you from casting all spells. Maybe it offers some combination of increases to your various stats. However. Maybe it does a lot of damage. go ahead. I’ll start with a simple statement: in a transitive mechanic. We will get to that in a moment. because everything has a monetary cost. the end result is the same. When we think of costs we’re usually thinking in terms of resource costs. Add up the benefits. Overpowered vs. another thing we said last week is that a central resource should be the win or loss condition for the game. this is an extension of that concept. that is part of a cost as well because it’s less powerful in some situations. Overcosted vs. it is common to make the central resource something artificially created for that purpose (some kind of “currency” in the game) rather than a win or loss condition. and vice versa. If the sword can only be equipped by certain character classes. And let’s say that we have something that provides too many benefits for . and as you might expect. 
For example, if the sword is only half as effective against demons, or if it disintegrates after 50 encounters, or if it does 10% of damage dealt back to the person wielding it, or if it prevents the wielder from using magic spells… I would call all of these things “costs,” because they are drawbacks or limitations to using the object that we’re trying to balance. If the sword can only be equipped by certain character classes, that’s a limitation too (you can’t just buy one for everyone in your party), and it is part of the cost as well, because the restriction makes the sword less useful in some situations.

Benefits can be just as varied. Maybe the sword does a lot of damage. Maybe it offers some combination of increases to your various stats. Maybe it lets you use a neat special ability.

Some things are a combination of the two. What if the sword does 2x damage against dragons? This is clearly a benefit (it’s better than normal damage sometimes), but it’s also a limitation on that benefit (it doesn’t do double damage all the time, only in specific situations). Or maybe a sword prevents you from casting Level 1 spells (obviously a cost), but if most swords in the game prevent you from casting all spells, this is a less limiting limitation that provides a kind of net benefit.
So a “cost” does include resource costs (Gold in a typical fantasy RPG, or ADAM in BioShock), but it also includes any drawback or limitation attached to the object. Add up the costs; add up the benefits. The goal is to get those two numbers to be equal.

Overpowered vs. Underpowered vs. Overcosted vs. Undercosted

Let’s assume for now that we can somehow relate everything back to a single resource cost, so that objects can be directly compared. If the costs are greater than the benefits, the object is too weak: remove costs or add benefits. If the costs are less than the benefits, it’s too good: add more costs or remove some benefits. Now, suppose we have something that provides too many benefits for its costs.

We have special terms for this: we might call the object undercosted if it is too cheap, or overcosted if it is too expensive. There is a more general term for an object that is simply too good (where either the cost or the benefits could be adjusted): we say it is above the curve. Likewise, an object that is too weak is below the curve. I define these terms because it is sometimes important to make the distinction between something that is undercosted and something that’s overpowered: in both cases the object is too good, but the remedy is different.

How do we know whether to reduce the benefits, increase the costs, or both? In most cases, we can do either one, and neither answer is usually obvious. It is up to the designer what is more important: having the object stay at its current cost, or having it retain its current benefits. Sometimes it’s more important to keep an object within a specific cost range, because you know that’s what the player can afford when they arrive in the town that sells it. Sometimes you just have this really cool effect that you want to introduce to the game, and you don’t want to mess with it, so the only thing you can really do is adjust the cost. Figure out what you want to stay the same… and then change the other thing.

Sometimes, usually when you’re operating at the extreme edges of a system, you don’t get a choice. For example, if you have an object that’s already free, reducing the cost further is impossible, so you have no choice: you must increase the benefits. We have a special term for this: we say the object is underpowered, meaning that it is specifically the level of benefits (not the cost) that must be adjusted. Likewise, some effects are just too powerful to exist in the game at any cost: if an object has an automatic “I win / you lose” effect, it would have to have such a high cost that it would be essentially unobtainable. In such cases we say it is overpowered, that is, the level of benefits must be reduced (and a simple cost increase is not enough to solve the problem). Occasionally you may also run into some really unique effects that can’t easily be added to, removed, or modified, and it is possible for such an effect to be just too weak at any cost as well.
Cost Curves

Let’s return to the earlier question of how to relate things as different as Gold, Attack Points, Magic Points, or any other kinds of stats or abilities we attach to an object. How do we compare them directly? The answer is to put everything in terms of the resource cost. How do you know how much each resource is worth? That is what we call a cost curve, and the reason it’s called a “cost curve” and not a “cost table” or “cost chart” or “cost double-entry accounting ledger” is that you need to figure out a relationship between increasing resource costs and increasing benefits. What do curves have to do with anything? We’ll see as we talk about our next topic.

Creating a Cost Curve

The first step is to figure out how all game effects (positive and negative) relate to your central resource cost. This means you have to take every possible effect in the game, whether it be a cost or a benefit, and find the relative values of all of these things. Yes, it is a lot of work up front. On the bright side, once you have this information about your game, creating new content that is balanced is pretty easy: just put everything into your formula, and you can pretty much guarantee that if the numbers add up, it’s balanced. For example, if we know that each point of extra Attack provides a linear benefit and that +1 Attack is worth 25 Gold, then it’s not hard to say that a sword that gives +10 Attack should cost 250 Gold.
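This bookkeeping is simple enough to sketch in code. The following is my own illustration (the function and item names are hypothetical, not from any real game):

```python
# Sketch of the cost/benefit bookkeeping described above (hypothetical names).
# Every effect is a positive number on either the cost side or the benefit
# side, so we only ever add, never subtract.

def evaluate(costs, benefits):
    """Compare summed costs against summed benefits for one game object."""
    total_cost = sum(costs.values())
    total_benefit = sum(benefits.values())
    if total_benefit > total_cost:
        return "above the curve"   # too good: add costs or remove benefits
    if total_benefit < total_cost:
        return "below the curve"   # too weak: remove costs or add benefits
    return "balanced"

# The sword example: +1 Attack is worth 25 Gold, so +10 Attack at 250 Gold
# should come out exactly balanced.
print(evaluate({"gold": 250}, {"attack_bonus": 10 * 25}))  # -> balanced
```

The point of the sketch is only that both sides reduce to one number in the same unit before being compared.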
More generally, to evaluate any object: add up all the costs (after putting them in terms of Gold), add up all the benefits (converting them to Gold as well), and compare the two totals.

Defining the relationship between costs and benefits

How do increasing resource costs relate to increasing benefits, as a numeric relationship? The two might scale linearly: +1 cost means +1 benefit. In other games, you might see an increasing curve (such as a triangular or exponential curve), where each additional benefit costs more than the last, so that something twice as good actually costs considerably more than twice as much. Increasing curves are common when hoarding resources has an opportunity cost: while you save up, everyone else is buying stuff and advancing their positions, and if you don’t make purchases to keep up with them, you could fall hopelessly behind. This can be particularly true in games where purchases are limited: wait too long to buy your favorite building in Puerto Rico and someone else might buy it first; wait too long to build new settlements in Settlers of Catan and you may find that other people have built in the best locations. If the designer wants resource-hoarding to be a viable strategy anyway, they must account for this opportunity cost by making something that costs twice as much be more than twice as good.

Some games have costs on a decreasing curve instead, where something twice as good costs less than twice as much. You see this a lot in RPGs, where the amount of currency you receive from exploration or combat encounters increases over time: even if a new set of armor costs you twice as much as your old one, if you’re getting more than twice as much Gold per encounter as you used to earlier, it would actually take you less time to earn the gold to upgrade.

You might also have a custom curve with sudden jumps or changes at certain thresholds, one that doesn’t follow a simple, single formula or relationship. For example, in a typical Collectible Card Game such as Magic: the Gathering, your primary resource is Mana, and you are generally limited to playing one Mana-generating card per turn. If a third of your deck is cards that generate Mana, you’ll get (on average) one Mana-generating card every three card draws. Since your opening hand is 7 cards and you typically draw one card per turn, this means a player would typically gain one Mana per turn for the first four turns, and then one additional Mana every three turns thereafter. We might therefore expect to see a shift in the cost curve at or around five mana, where suddenly each additional point of Mana is worth a lot more (since getting your fifth mana on the table is so much harder than getting the first four), which would explain why some of the more expensive cards have crazy-huge gameplay effects.
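The Mana arithmetic above can be checked with a quick sketch (my own, not from the original text), under the simplifying assumptions that exactly one in three cards is a Mana source and that the player never misses a Mana play:

```python
# Expected Mana development in a CCG where one third of the deck generates
# Mana (my own sketch of the arithmetic described above).

def mana_on_turn(turn):
    cards_seen = 7 + turn           # opening hand of 7, plus one draw per turn
    mana_sources = cards_seen // 3  # roughly one in three cards is a Mana source
    return min(turn, mana_sources)  # but only one may be played per turn

for t in range(1, 10):
    print(t, mana_on_turn(t))
# Mana grows by about one per turn early on, then by about one every
# three turns, so the fifth Mana arrives noticeably later than the fourth.
```

This is an expected-value sketch, not a real draw simulation, but it shows the same knee in the curve the article describes.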
What are the consequences of each kind of curve? If a game has an increasing cost curve, where higher costs give progressively smaller gains, it puts a lot of focus on the early game: cheap cards are almost as good as the more expensive ones. It can also create more interesting choices. If all stat gains cost the same amount, it’s usually an obvious decision to dump all of your gold into increasing your one or two most important stats while ignoring the rest; but if each additional point in a stat costs progressively more, players might consider exploring other options. In some turn-based strategy games, players gain new resources at a constant rate throughout the game, so bringing out a lot of forces early on provides an advantage over waiting until later to bring out only slightly better stuff.

If instead you feature a decreasing cost curve, where the cheap stuff is really weak and the expensive stuff is really powerful, this puts the emphasis on the late game, where the really huge things dominate. The designer might also want incremental gains for other design reasons, such as to guide the play of the game into definite early-game, mid-game and late-game phases.

None of these curves is necessarily “right” or “wrong” in a universal sense; it all depends on your design goals, in particular your desired game length, number of turns, and overall flow of the gameplay. Any kind of cost curve is potentially balanced, but different kinds of curves have different effects. At any rate, this is one of your most important tasks when balancing transitive systems: figuring out the exact nature of the cost curve.
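To make the shapes concrete, here is a small sketch (my own, with made-up numbers) of the total cost to reach each benefit level under the three kinds of curves discussed above:

```python
# Total cost to reach benefit level n under three example cost curves
# (illustrative numbers only, not from any real game).

def linear(n):       # +1 cost always buys +1 benefit
    return n

def increasing(n):   # triangular: each +1 benefit costs 1 more than the last
    return n * (n + 1) // 2

def decreasing(n):   # each +1 benefit costs less than the last (floor of 1)
    return sum(max(1, 5 - i) for i in range(n))

for n in range(1, 7):
    print(n, linear(n), increasing(n), decreasing(n))
```

Under the increasing curve, doubling the benefit far more than doubles the cost; under the decreasing curve, the expensive end of the range is a relative bargain.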

Defining basic costs and benefits

The next step in creating a cost curve is to make a complete list of all the costs and benefits in your game, starting with the common ones that are used a lot. From there, identify those objects that only do one thing and nothing else, and try to figure out how much that one thing costs. Once you’ve figured out how much some of the basic costs and benefits are worth, start combining them. Maybe you know how much it costs to have a spell that grants a damage bonus, and also how much it costs to have a spell that grants a defense bonus. What about a spell that gives both bonuses at the same time? In some games, the combined cost is exactly the sum of the separate costs. In other games, the combined cost is less than the sum of the separate costs, since both bonuses are not always useful in combination, or might be situational. In still other games, the cost for a combined effect is more than the sum of its separate costs, since getting multiple bonuses for a single action gives you a lot of power. For your game, get a feel for how different effects combine and how that influences their relative costs.

Another thing you’ll eventually need to examine are benefits or costs that have limitations stacked on them. If a benefit only works half of the time because of a coin-flip whenever you try to use it, is that really half of the cost compared to if it worked all the time, or is it more or less than half? If a benefit requires you to meet conditions that have additional opportunity costs (“you can only use this ability if you have no Rogues in your party”), what is that tradeoff worth in terms of how much it offsets the benefit?

At some point you will also start identifying non-resource costs (drawbacks and limitations) to determine how much they are worth. Approach these the same way: isolate one or more objects where you know the numeric costs and benefits of everything except one thing, and then use basic arithmetic (or algebra, if you prefer) to figure out the missing number. From there, continue identifying how much new things cost, one at a time.
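That “everything known except one thing” step is just arithmetic. A sketch with hypothetical numbers:

```python
# Isolating one unknown value (hypothetical numbers, my own sketch).
# If an object is assumed balanced, its total cost equals its total benefit,
# so one missing benefit is whatever is left over after the known ones.

def missing_benefit(total_cost, known_benefits):
    return total_cost - sum(known_benefits)

# A hypothetical item with a total cost of 6 whose known benefits sum to 5:
# its one uncosted special ability must be worth 1 for the item to be on-curve.
print(missing_benefit(6, [4, 1]))  # -> 1
```

Each newly solved value becomes a “known” for the next object, which is exactly how the analysis below proceeds.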
An Example: Cost Curves in Action

To see how this works in practice, I’m going to use some analysis to derive part of the cost curve for Magic 2011, the recently released set for Magic: the Gathering. The reason I’m choosing this game is that CCGs are among the most complicated games to balance in these terms – a typical base or expansion set may have hundreds of cards that need to be individually balanced – so if we can analyze Magic, then we can use this approach for just about anything else. As you’ll see, this kind of analysis also forces you to pin down the exact mathematical nature of your cost curve; if you were unsure about it, something like this will probably help you figure that out. Note that by necessity, we’re going into spoiler territory here, so if you haven’t seen the set and are waiting for the official release, consider this your spoiler warning.

For those of you who have never played Magic before, you won’t need to understand much of the rules in order to go through this analysis. For example, you don’t need to know (or care) what the Flying ability is or what it does: if I tell you that Flying gives a benefit equivalent to 1 mana, all you need to know is that if you add Flying to a creature, the mana cost should increase by 1. If you see any jargon that you don’t recognize, assume you don’t need to know it; for those few parts of the game you do need to know, I’ll explain as we go.

One thing you should understand about Mana costs is that there are five colors of Mana: White (W), Black (B), Green (G), Red (R), and Blue (U). There’s also a sixth “type” called colorless, which can be paid with mana of any color you want. For convenience, a cost of “G4” means five mana: one of which must be Green, while the other four can be anything (Green or otherwise).

We’ll examine Creature cards specifically, because they are the type of card that is most easily standardized and directly compared: all Creatures have a Mana cost (this is the game’s primary resource), Power and Toughness, and usually some kind of special ability. Other card types tend to only have special, unique effects that are not easily compared.
Let us start by figuring out the basic cost curve. To do this, we first examine the most basic creatures: those with no special abilities at all – just a Mana cost, Power and Toughness. Of the 116 creatures in the set, 11 of them fall into this category (I’ll ignore artifact creatures for now, since those have extra metagame considerations).

We would expect that colored Mana has a higher cost than colorless, since it is more restrictive: we can’t use different colors interchangeably. Here are the creatures with no special abilities:

• W, 2/1 (that is, a cost of one White mana, power of 2, toughness of 1)
• W1, 2/2
• W4, 3/5
• U1, 1/3
• U4, 2/5
• B2, 3/2
• B3, 4/2
• R1, 2/1
• R3, 3/3
• G1, 2/2
• G4, 5/4

We can see some patterns here just by staying within colors:

• W, 2/1 → W1, 2/2: adding 1 colorless mana is equivalent to adding +1 Toughness.
• B2, 3/2 → B3, 4/2: adding 1 colorless mana is equivalent to adding +1 Power.
• W, 2/1 → W4, 3/5: 5 additional power/toughness for 4 additional colorless mana.
• G1, 2/2 → G4, 5/4: 5 additional power/toughness for 3 additional colorless mana.

We might guess, then, that 1 colorless (cost) = 1 Power (benefit) = 1 Toughness (benefit). We can also examine similar creatures across colors to take a guess at the value of colored mana. Looking at the smallest creatures, we immediately run into a problem with three of them (I’m leaving the names off, since names aren’t relevant when it comes to balance):

• W, 2/1
• G1, 2/2
• R1, 2/1

Apparently, all colors are not created equal: you can get a 2/1 creature for either W (one mana) or R1 (two mana), so an equivalent creature is cheaper in White than Red. Likewise, R1 gets you a 2/1 creature, but the equivalent-cost G1 gets you a 2/2, so you get more creature for Green than Red. From these comparisons, we might guess that Red and Blue seem to have an inherent -1 Power or -1 Toughness “cost” compared to White, Black and Green. I would guess the designers did this on purpose to give some colors an advantage with creatures, to compensate for them having fewer capabilities in other areas: Red and Blue have lots of cool spell toys, so their creatures might reasonably be made weaker as a result, while Green, for example, is a color that’s notorious for having really big creatures and not much else.

This complicates our analysis: it means we can’t assume every creature is balanced on the same curve. We can still proceed, but only if we assume that the game designers made some deliberate adjustments (or some balance mistakes). Such is the difficulty of deriving the cost curve of an existing game: if the balance isn’t perfect, your math may be slightly off unless you make some allowances.

Since we have a definite linear relationship between colorless mana and increased power/toughness, we will choose colorless mana to be our primary resource, with each point of colorless representing a numeric cost of 1. We know that each point of Power and Toughness then provides a corresponding benefit of 1.

Our most basic card, W for a 2/1, shows a total of 3 benefits (2 power, 1 toughness). We might infer that W must have a cost of 3. Or, using some knowledge of the game, we might instead guess that W has a cost of 2, and that all cards have an automatic cost of 1 just for existing – the card takes up a slot in your hand and your deck, even if its mana cost is zero, so it should at least do something useful to justify its existence. Under that reading, a mana cost of G2 provides a cost of 5 (1 as a baseline, 2 for the colored G mana, and 2 for the colorless mana).

Is the cost curve linear, with +1 benefit for each additional mana? It seems to be up to a point, but there appears to be a jump around 4 or 5 mana:

• W, 2/1 → W4, 3/5: 5 additional power/toughness for 4 additional colorless mana.
• G1, 2/2 → G4, 5/4: 5 additional power/toughness for 3 additional colorless mana.

As predicted earlier, there may be an additional cost bump at 5 mana, presumably to compensate for the difficulty of getting that fifth mana on the table, since getting your fifth mana out is harder than the first four.

From all of this work, we can take our first guess at a cost curve. Our costs are:

• Baseline cost = 1 (start with this, just for existing)
• Each colorless mana = 1
• Each colored mana = 2
• Total mana cost of 5 or more = +1 (or +2 for Green creatures)

Our benefits are:

• +1 Power or +1 Toughness = 1
• Being a Red or Blue creature = 1 (apparently this is some kind of metagame privilege)

Our cost curve, so far, looks like this: a cost of 0 still provides a benefit of 1; increased total mana cost provides a linear benefit, up to 4 mana; and the fifth point of mana provides a double benefit (triple for Green). We don’t have quite enough data to know if this is accurate, and there may be other valid sets of cost and benefit numbers that would also fit our observations. But if these numbers are accurate, we could already design some new cards:

• How much would a 4/3 Blue creature cost? The benefit is 1 (Blue) + 4 (Power) + 3 (Toughness) = 8. Our baseline cost is 1, and our first colored mana (U) is 2. If we add four colorless mana, that costs an extra 4… but that also makes for a total mana cost of 5, which adds an extra +1 to the cost, for a total of 8. So we would expect the cost to be U4.
• What would a 4/1 Green creature cost? The benefit is 5 (4 Power + 1 Toughness). A cost of G2 gives 1 (baseline) + 2 (G) + 2 (colorless) = 5, staying safely under the 5-mana threshold, so G2 seems right.
• What if I proposed this card: W3 for a 1/4 creature? Is that balanced? We can add it up: the cost is 1 (baseline) + 2 (W) + 3 (colorless) = 6. The benefit is 1 (power) + 4 (toughness) = 5. So this creature is exactly 1 below the curve, and could be balanced by either dropping the cost to W2 or increasing its stats to 2/4 or 1/5.
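This first-guess curve can be written out as code. What follows is my own encoding of the numbers above (it only handles a single colored mana symbol, since we have not yet derived what a second one costs):

```python
# First-guess cost curve for Magic 2011 creatures, as derived above
# (my own encoding; handles at most one colored mana symbol).

def cost(colored, colorless, color):
    total_mana = colored + colorless
    c = 1                              # baseline: the card exists
    c += colorless                     # each colorless mana = 1
    c += 2 * colored                   # each colored mana = 2
    if total_mana >= 5:                # bump at five total mana
        c += 2 if color == "green" else 1
    return c

def benefit(power, toughness, color):
    b = power + toughness              # 1 per point of Power or Toughness
    if color in ("red", "blue"):       # apparent metagame compensation
        b += 1
    return b

# W, 2/1: cost 3 vs. benefit 3 -> on the curve.
print(cost(1, 0, "white"), benefit(2, 1, "white"))
# A 4/3 Blue creature: benefit 8, and U4 also costs 8 -> U4 is the right price.
print(cost(1, 4, "blue"), benefit(4, 3, "blue"))
# The proposed W3, 1/4: cost 6 vs. benefit 5 -> one below the curve.
print(cost(1, 3, "white"), benefit(1, 4, "white"))
```

Writing the curve down this way makes the later refinements (a second colored mana, higher thresholds) easy to test against the whole card list.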

So you can see how a small amount of information lets us do a lot. But we are also limited: we don’t know what happens when we have several colored mana in a single cost, we don’t know what happens when we go above 5 (or below 1) total mana, and we don’t know how to cost any special abilities. We could take a random guess based on our intuition of the game, but first let’s take a look at some more creatures. In particular, there are 18 creature cards in this set that only have standard special abilities on them:

• W1, 2/1, Flying
• W2, 2/2, Flying
• W3, 3/2, Flying
• WW, 2/2, First Strike, Protection from Black
• WW2, 2/3, Flying, First Strike
• WW3, 4/4, Flying, Vigilance
• WW3, 5/5, Flying, First Strike, Lifelink, Protection from Demons and Dragons
• U3, 2/4, Flying
• B1, 2/1, Lifelink
• B2, 2/2, Swampwalk
• BB, 2/2, First Strike, Protection from White
• R3, 4/2, Haste
• G, 0/3, Defender, Reach
• G3, 2/4, Reach
• GG, 3/2, Trample
• GG3, 3/5, Deathtouch
• GG4, 6/4, Trample
• GG5, 7/7, Trample

How do we proceed here? The easiest targets are those with only a single ability, like all the White cards with just Flying. It’s pretty clear from looking at those that Flying has the same benefit as +1 power or +1 toughness, which in our math is a benefit of 1. We can also make some direct comparisons to the earlier list of creatures without abilities to derive the benefits of several other special abilities; comparing the B2, 2/2, Swampwalk to the vanilla B2, 3/2, for example, Swampwalk stands in for one point of Power. Swampwalk and Haste (whatever those are) also have a benefit of 1, and we can guess from the B1, 2/1, Lifelink card and our existing math that Lifelink is also a benefit of 1.

We run into something curious when we examine some Red and Blue creatures at 4 total mana. Compare the following:

• W3, 3/2, Flying
• U3, 2/4, Flying (identical total cost to the W3, but +1 benefit… in Blue?)
• R3, 3/3 (identical total cost and benefit to the W3… but Red?)

Flying may be cheaper for Blue than it is for White… but given that the Blue bonus would seem to have a cost of zero here, we might instead guess that the U3 creature is slightly above the curve. It appears that perhaps Red and Blue get their high-cost-mana bonus at a threshold of 4 mana rather than 5. We find another strange comparison in Green:

• G3, 2/4, Reach (cost of 6, benefit of 6 + Reach?)
• G4, 5/4 (cost of 8, benefit of 9?)

At first glance, both of these would appear to be above the curve by 1. Alternatively, since the extra bonus seems to be consistent, this may have been intentional. We might guess that Green gets a high-cost bonus not just at 5 total mana, but also at 4 total mana, assuming that Reach (like the other abilities we’ve seen) has a benefit of 1. (In reality, if you know the game, Reach gives part of the bonus of Flying but not the other part, so it should probably give about half the benefit of Flying. Unfortunately, Magic does not offer half-mana costs in standard play, so the poor G3 is probably destined to be either slightly above or below the curve.)

Let’s assume, for the sake of argument, that the benefit of Reach is 1 (or that the original designers intended this to be the benefit and balanced the cards accordingly, at least). Then we can examine this card to learn about the Defender special ability:

• G, 0/3, Defender, Reach

The cost is 1 (baseline) + 2 (G mana) = 3. The benefit is 3 (toughness) + 1 (Reach) + ? (Defender). From this, it would appear Defender would have to have a benefit of negative 1 for the card to be balanced. What’s going on? If you’ve played Magic, this makes sense. Defender may sound like a special ability, but it’s actually a limitation: it means the card is not allowed to attack. We could therefore consider it as an additional cost of 1 (rather than a benefit of -1), and the math works out.

We’ve learned a lot, but there are still some things out of our immediate grasp right now. We’d love to know what happens when you have a second colored mana (does it also have a +2 cost like the first one?), and we’d also like to know what happens when you get up to 6 or 7 total mana (are there additional “high cost” bonus adjustments?).
While we have plenty of cards with two colored mana in their cost, and a couple of high-cost Green creatures, all of these also have at least one other special ability that we haven’t costed yet. We can’t derive the costs and benefits for something when there are multiple unknown values; even if we figured out the right total level of benefits for our GG4 creature, for example, we wouldn’t know how much of that benefit was due to the second Green mana cost, how much came from being 6 mana total, and how much came from its Trample ability. Does this mean we’re stuck? Thankfully, we have a few ways to proceed.

One trick is to find two cards that are the same, except for one thing. Those cards may have several things we don’t know, but if we can isolate just a single difference then we can learn something. For example, look at these two cards:

• GG4, 6/4, Trample
• GG5, 7/7, Trample

We don’t know the cost of GG4 or GG5, and we don’t know the benefit of Trample, but we can see that adding the one colorless mana that takes us from 6 to 7 gives us a power+toughness benefit of 4. A total cost of 7 must be pretty hard to get to! We can also examine these two cards that have the same mana cost:

• WW3, 5/5, Flying, First Strike, Lifelink, Protection from Demons and Dragons
• WW3, 4/4, Flying, Vigilance

From here we might guess that Vigilance is worth +1 power, +1 toughness, First Strike, Lifelink, and the Protection ability, making Vigilance a really freaking awesome special ability that has a benefit of at least 4. Or, if we know the game and realize Vigilance just isn’t that great, we can see that the 5/5 creature is significantly above the curve relative to the 4/4.

We still don’t know how much two colored mana costs, so let’s use another trick: making an educated guess, then trying it out through trial and error. As an example, let’s take this creature:

• GG, 3/2, Trample

We know the power and toughness benefits total 5, and since most other single-word abilities (Flying, Haste, Swampwalk, Lifelink) have a benefit of 1, we might guess that Trample also has a benefit of 1, giving a total benefit of 6. If that’s true, we know that the cost is 1 (baseline) + 2 (first G), so the second G must cost 3. Intuitively, this might make sense: having two colored mana places more restrictions on your deck than just having one.

We can look at this another way, comparing two similar creatures:

• G1, 2/2
• GG, 3/2, Trample

The cost difference between G1 and GG is the difference between a cost of 1 (colorless) and the cost of the second G. The benefit difference is 1 (for the extra power) + 1 (for Trample, we guess). This means the second G has a cost of 2 more than a colorless mana, which is a cost of 3.

We’re still not sure, though. Maybe the GG creature is above the curve, or maybe Green has yet another creature bonus we haven’t encountered yet. Let’s look at the double-colored-mana White creatures to see if the pattern holds:

• WW, 2/2, First Strike, Protection from Black
• WW2, 2/3, Flying, First Strike
• WW3, 4/4, Flying, Vigilance

Assuming that Protection from Black, First Strike, and Vigilance each have a +1 benefit (similar to other special abilities), most of these seem on the curve. WW is an expected cost of 6, and 2/2, First Strike, Protection from Black seems like a benefit of 6. WW3 is a cost of 10 (remember the +1 for being a total of five mana), and 4/4, Flying, Vigilance is also probably 10. The math doesn’t work as well with WW2 (cost of 8): the benefits of 2/3, Flying and First Strike only add up to 7, so this card might be under the curve by 1.

Having confirmed that the second colored mana is probably a cost of +3, we can head back to Green to figure out this Trample ability. GG, 3/2, Trample indeed gives us a benefit of 1 for Trample, as we guessed earlier.
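Both tricks in this section boil down to cancellation: shared unknowns drop out when two cards are subtracted. A sketch (mine) of the GG4/GG5 comparison:

```python
# Comparing two cards that differ in only one respect (my own sketch).
# Shared unknowns (the GG cost, the Trample benefit) cancel out.

gg4 = {"mana": 6, "power": 6, "toughness": 4, "abilities": frozenset({"Trample"})}
gg5 = {"mana": 7, "power": 7, "toughness": 7, "abilities": frozenset({"Trample"})}

assert gg4["abilities"] == gg5["abilities"]  # same abilities, so they cancel

mana_diff = gg5["mana"] - gg4["mana"]
stat_diff = (gg5["power"] + gg5["toughness"]) - (gg4["power"] + gg4["toughness"])
print(mana_diff, stat_diff)  # -> 1 4: the seventh total mana buys 4 stat points
```

The same subtraction works for any pair of cards with a single isolated difference.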
Now that we know Trample and the second colored mana, we can examine our GG4 and GG5 creatures again to figure out exactly what’s going on at the level of six or seven total mana. Let’s first look at GG4, 6/4, Trample. This has a total benefit of 11. The parts we know of the cost are: 1 (baseline) + 2 (first G) + 3 (second G) + 4 (colorless) + 1 (above 4 mana) + 1 (above 5 mana) = 12, so not only does the sixth mana apparently provide no extra benefit, but the card is already below the curve. (Either that, or Trample is worth more when you have a really high power/toughness; we haven’t considered combinations of abilities yet.) Let’s compare to GG5, 7/7, Trample. This has a benefit of 15. Known costs are 1 (baseline) + 2 (first G) + 3 (second G) + 5 (colorless) + 1 (above 4 mana) + 1 (above 5 mana) = 13, so going from five to seven mana total has an apparent additional benefit of +2. We might then guess that the benefit is +1 for 6 mana and another +1 for 7 mana, and that the GG4 is just a little below the curve. Lastly, there is the Deathtouch ability, which we can figure out from the creature that is GG3, 3/5, Deathtouch. The cost is 1 (baseline) + 2 (first G) + 3 (second G) + 3 (colorless) + 1 (above 4 mana) + 1 (above 5 mana) = 11. The benefit is 8 (power and toughness) + Deathtouch, which implies Deathtouch has a benefit of 3. This seems high when all of the other abilities are only costed at 1, but if you’ve played Magic you know that Deathtouch really is a powerful ability, so perhaps the high number makes sense in this case. From here, there are an awful lot of things we can do to make new creatures. Just by going through this analysis, we’ve already identified several creatures that seem above or below the curve.

(Granted, this is an oversimplification. Some cards are legacy from earlier sets and may not be balanced along the current curve. And every card has keywords that don’t do anything on their own but that other cards can affect, so there is a metagame benefit to having certain keywords. For example, if a card is a Goblin, and there’s another card that gives all Goblins a combat bonus, that makes the Goblin keyword useful… so in some decks that card might be worth using even if it is otherwise below the curve. Keep in mind that this means some cards may be underpowered normally but overpowered in the right deck, which is where metagame balance comes into play. We’re concerning ourselves here only with transitive balance, not metagame balance, although we must understand that the two do affect each other.)

From this point, we can examine the vast majority of other cards in the set, because nearly all of them are just a combination of cost, power, toughness, maybe some basic special abilities we’ve identified already, and maybe one other custom special ability. Since we know all of these things except the custom abilities, we can look at almost any card and evaluate the benefit of its ability (or at least, the benefit assigned to it by the original designer). While we may not know which cards with these custom abilities are above or below the curve, we can at least get a feel for what kinds of abilities are marginally useful versus those that are really useful. We can also put numbers to them, and compare the values of each ability to see if they feel right.

Name That Cost!

Let’s take an example: W1, 2/2, and it gains +1 power and +1 toughness whenever you gain life. How much is that ability worth? Well, the cost is 4 and the power/toughness benefit is 4, so that means this ability is free – either it’s nearly worthless, or the card is above the curve.
Since there’s no intrinsic way to gain life in the game without using cards that specifically allow it, and since gaining life tends to be a weak effect on its own (since it doesn’t bring you closer to winning), we might guess this is a pretty minor effect, and perhaps the card was specifically designed to be slightly above the curve in order to give a metagame advantage to the otherwise underpowered mechanic of life-gaining. Here’s another: W4, 2/3, when it enters play you gain 3 life. Cost is 8; power/toughness benefit is 5. That means the life-gain benefit is apparently worth 3 (+1 cost per point of life). Another: UU1, 2/2, when it enters play return target creature to its owner’s hand. The cost here is 7; known benefits are 5 (4 for power/toughness, 1 for being Blue), so the return effect has a benefit of 2. And another: U1, 1/1, tap to force target enemy creature to attack this turn if able. Cost is 4, known benefit is 3 (again, 2 for power/toughness, 1 for Blue), so the special ability is costed as a relatively minor benefit of 1. Here’s one with a drawback: U2, 2/3, Flying, can only block creatures with Flying. Benefit is 5 (power/toughness) + 1 (blue) + 1 (Flying) = 7. Mana cost is 1 (baseline) + 2 (U) + 2 (colorless) = 5, suggesting that the blocking limitation is a +2 cost. Intuitively, that seems wrong, when Defender (complete inability to block) is only +1 cost, suggesting that this card is probably a little above the curve. Another drawback: B4, 4/5, enters play tapped. Benefit is 9. Mana cost is 1 (baseline) + 2 (B) + 4 (colorless) + 1 (above 5 mana) = 8, so the additional drawback must have a cost of 1. Here’s a powerful ability: BB1, 1/1, tap to destroy target tapped creature. Mana cost is 7. Power/toughness benefit is 2, so the special ability appears to cost 5. That seems extremely high; on the other hand, it is a very powerful ability, it combos well with a lot of other cards, so it might be justified. 
Or we might argue it’s strong (maybe a benefit of 3 or 4) but not quite that good, or maybe that it’s even stronger (benefit of 6 or 7) based on seeing it in play and comparing to other strong abilities we identify in the set, but this at least gives us a number for comparison. So, you can see here that the vast majority of cards can be analyzed this way, and we could use this

Still, even if you're a player and not a game designer, you can use this technique to help you identify which cards you're likely to see at the tournament/competitive level. Not all of the cards fit on the curve, but if you play the game for awhile you'll have an intuitive sense of which cards are balanced and which feel too good or too weak. By using those "feels balanced" creatures as your baseline, you could then propose a cost curve and set of numeric costs and benefits, and then verify that those creatures are in fact on the curve (and that anything you've identified as intuitively too strong or too weak is correctly shown by your math as above or below the curve). Using what you do know, you can then take pretty good guesses at what you don't know, to identify other cards (those you don't have an opinion on yet) as being potentially too good or too weak: a sort of design "reverse engineering" to figure out how the game is balanced. This is not necessarily an easy task, as it can be quite tedious at times, but it is at least relatively straightforward.

Cost curves for new games

So far, we've looked at how to derive a cost curve for an existing game. If you're making a new game, creating a cost curve is much harder. Since the game doesn't exist yet, you haven't played it in its final form yet, which means you don't have as much intuition for what the curve is or what kinds of effects are really powerful or really weak. Another thing that makes it harder to create a cost curve for a new game is that you have the absolute freedom to balance the numbers however you want. With an existing game you have to keep all the numbers in line with everything that you've already released, so you don't have many degrees of freedom: you might have a few options on how to structure your cost curve. This means you have to plan on doing a lot of heavy playtesting for balance purposes (after the core mechanics are fairly solidified), and you need to make sure the project is scheduled accordingly.

Of course, maybe you don't have a big playtest budget. Maybe your publisher is holding a gun to your head, telling you to ship now, and you're not in a position to playtest thoroughly. If you're in this situation, sometimes you have to take a guess, and you might have to err on one side or the other; worst case, you've got something that might be a little above or a little below the curve.

Rules of Thumb

How do you know if your numbers are right? A lot of it comes down to figuring out what works for your particular game, through a combination of your designer intuition and playtesting. Whatever the case, I can offer a couple of basic pieces of advice.

First, in general, it's better to make an object too weak than to make it too strong. If it's too weak, the worst thing that happens is no one uses it. A sufficiently underpowered object is ruined on its own, but all of the other objects in the game can still be viable – this isn't optimal, but it's not game-breaking. However, if one object is way too strong, it will always get used; a sufficiently overpowered object ruins the balance of the entire game, effectively preventing everything else that's actually on the curve from being used, since the "balanced" objects are too weak by comparison.

Second, a limited or restricted benefit is never a cost. If you have a sword that does extra damage to Snakes, and there are only a few Snakes in the game in isolated locations, that is a very small benefit, but it is certainly not a drawback: its benefit is always at least a little bit greater than zero.

Lastly, if you give the player a choice between two benefits, the cost of the choice must be at least the cost of the more expensive of the two benefits. If one option is clearly better, the player takes the better (more expensive) benefit every time, so it should be costed at least as much as what the player will choose. Better still, if you give players a choice, try to make those choices give about the same benefit: if it is a choice between two equally good things, that choice is a lot more interesting than choosing between an obviously strong and an obviously weak effect.

Remember, too, that when making a new game there are no constraints: you may have thousands of valid ways to design your cost curve – far more than you'll have time to playtest – while with an existing game, only a handful of options will actually make any sense in the context of everything you've already done.

There's another nasty problem when designing cost curves for new games: changes to the cost curve are expensive in terms of design time. As an example, let's say you're making a 200-card set for a CCG, and one of the new mechanics you're introducing is the ability to draw extra cards. Suppose you decide that drawing an extra card is a benefit of 2 at the beginning, but after some playtesting it becomes clear that it should actually be a benefit of 3, and 20 cards in the set use this mechanic in some way or other. You now have to change all twenty cards that use that mechanic, and you can see where multiple continuing changes to the cost curve mean redoing the entire set several times over. If you have infinite time to playtest, you can just make these changes meticulously and one at a time until your balance is perfect. In the real world, however, you'll need to grit your teeth, do the math where you can, take your best initial guess… and then get something into your playtesters' hands as early as you can, so you have as much time as possible to learn about how to balance the systems in your game. Keep in mind that you will get the math wrong, because no one ever gets game balance right on the first try.

With a new game, this is an unsolved problem. The most balanced CCG that I've ever worked on was a game where the cost curve was generated after three sets had already been released; it was the newer sets, released after we derived the cost curve, that were really good in terms of balance (and they were also efficient in terms of development time, because the basic "using the math and nothing else" cards didn't even need playtesting). Since then, I've tried to develop new games with a cost curve in mind, and I still don't have a good answer for how to do this in any kind of reasonable way.

There's one other unsolved problem, which I call the "escalation of power" problem, that is specific to persistent games that build on themselves over time – CCGs, MMOs, Facebook games, expansion sets for strategy games, sequels where you can import previous characters, and so on: anything where your game has new stuff added to it over time, rather than just being a single standalone product. The problem is, while your goal is to get everything as close to the cost curve as possible, you are simply not going to be perfect. Every single object in your game will not be perfectly balanced along the curve. Some will be a little above, others will be a little below; you have to accept right now that a few things will be a little better than they're supposed to… even if the difference is just a microscopic rounding error. Over time, with a sufficiently large and skilled player base, the things that give an edge (no matter how slight that edge) will rise to the top and become more common in use. And players will adapt to an environment where the best-of-the-best is what is seen in competitive play, and players become accustomed to that as the "standard" cost curve.

Now, the game designer faces a problem. If you use the "old" cost curve and produce a new set of objects that is (miraculously) perfectly balanced, no one will use it, because none of it is as good as the best (above-the-curve) stuff from previous sets. In order to make your new set viable, you have to create a new cost curve that's balanced with respect to the best objects and strategies in previous sets. This means, over time, the power level of the cost curve increases. It might increase quickly or slowly depending on how good a balancing job you do, but you will see some non-zero level of "power inflation" over time.

This isn't necessarily a bad thing, in the sense that it basically forces players to keep buying new stuff from you to stay current: eventually their old strategies, the ones that used to be dominant, will fall behind the power curve and they'll need to get the new stuff just to remain competitive. And if players keep buying from us on a regular basis, that's a good thing. However, there's a thin line here, because when players perceive that we are purposefully increasing the power level of the game just to force them to buy new stuff, we're essentially giving an ultimatum, "buy or leave," and doing that is dangerous because a lot of players will choose the "leave" option: that gives them an opportunity to exit our game and find something else to do. Knowing this, we have to walk that line carefully.

The escalation-of-power problem is not an excuse for lazy design, though. While we know the cost curve will increase over time, we want that to be a slow and gradual process so that older players don't feel overwhelmed, and of course we want the new stuff we offer them to be compelling in its own right (because it's fun to play with, not just because it's uber-powerful).

If You're Working On a Game Now…

If you are designing a game right now, and that game has any transitive mechanics that involve a single resource cost, see if you can derive the cost curve. Keep in mind that your game already has a cost curve, whether you are aware of it or not. As you do this, think of it as an opportunity to learn more about the balance of your game.

Homework

I'll give you three choices for your "homework" this week. In each case, there are two purposes here. First, you will get to practice the skill of deriving a cost curve for an existing game. Second, you'll get practice applying that curve to identify objects (whether those be cards, weapons, or whatever) that are too strong or too weak compared to the others.

Option 1: More Magic 2011

If you were intrigued by the analysis presented here on this blog, find a spoiler list for Magic 2011 online (you shouldn't have to look that hard), and starting with the math we've identified here, continue it: build as much of the rest of the cost curve as you can. Remember to only try to figure out the math for something when you know all but one of the costs or benefits. Then, identify the cards that you think are above or below the curve. You may also find some interesting reading in Mark Rosewater's archive of design articles for Magic, although finding the relevant general design stuff in the sea of articles on the minutiae of specific cards and sets can be a challenge (and it's a big archive!).

For your reference, here's the math we have currently (note that you may decide to change some of this as you evaluate other cards):

• Mana cost: 1 (baseline), 2 for the first colored mana and 3 for the second colored mana, 1 for each colorless mana.
• High cost bonus: +1 cost if the card requires 4 or more mana (Red, Blue and Green creatures only), +1 cost if the card requires 5 or more mana (White, Black and Green creatures only – yes, Green gets both bonuses), and an additional +1 cost for each total mana required above 5.
• Special costs: +1 cost for the Defender special ability.
• Benefits: 1 per point of power and toughness, 1 for Red and Blue creatures.
• Special benefits: +1 benefit for Flying, First Strike, Haste, Lifelink, Protection from White, Protection from Black, Reach, Swampwalk, Trample, Vigilance; +3 benefit for Deathtouch.

Option 2: D&D

If CCGs aren't your thing, maybe you like RPGs. Take a look at whatever Dungeons & Dragons Players Handbook edition you've got lying around, and flip to the section that gives a list of equipment. Try to figure out the cost curve for weapons, particularly all the basic weapons in the game, along with their Gold costs. Here you'll have to do some light probability that we haven't talked about yet, to figure out the average damage of each weapon (hint: if you roll an n-sided die, the average value of that die is (n+1)/2; if you're rolling multiple dice, compute the average for each individual die and then add them all together; and yes, that means the "average" may be a fraction). You probably didn't need me to tell you that, but I'm saying it anyway, so nyeeeah. Then, relate the average weapon damage to the Gold cost. Note that depending on the edition, some weapons may have "special abilities" like longer range, or doing extra damage against certain enemy types, so start with the simple melee weapons, and once you've got a basic cost curve, then try to derive the more complicated ones.

If you find that this doesn't take you very long and you want an additional challenge, do the cost curve for armors in the game as well, and see if you can find a relation between damage and AC.

Option 3: Halo 3

If neither of the other options appeals to you, take a look at the FPS genre, in particular Halo 3. This is a little different because there isn't really an economic system in the game, so there's no single resource used to purchase anything. However, there is a variety of weapons, and each weapon has a lot of stats: effective range, fire rate, damage, and occasionally a special ability such as area-effect damage or dual-wield capability. For this exercise, use damage per second ("dps") as your primary resource. To compute dps, take the amount of damage and multiply by fire rate (in shots-per-second), and that is dps. You'll have to find a FAQ (or experiment by playing the game or carefully analyzing gameplay videos on YouTube) to determine the dps for each weapon. Then, relate everything else to dps to try to figure out the tradeoffs between dps and accuracy, range, and each special ability. (For some things like "accuracy" that can't be easily quantified, you may have to fudge things a bit by just making up some numbers.) Which weapons feel above or below the curve based on your cost curve? How much dps would you add or remove from each weapon to balance it? And of course, is this consistent with your intuition (either from playing the game, analyzing it, or reading comments in player forums)?

Level 4: Probability and Randomness

Readings/Playings

Read this article on Gamasutra by designer Tyler Sigman. I affectionately refer to it as the "Orc Nostril Hair" article, but it provides a pretty good primer for probabilities in games.

This Week's Topic

Up until now, nearly everything we've talked about was deterministic, and last week we really went deep into transitive mechanics and took them about as far as I can go with them. But so far we've ignored a huge aspect of many games, the non-deterministic aspects: in other words, randomness. Understanding the nature of randomness is important for game designers because we make systems in order to craft certain player experiences, so we need to know how those systems work. If a system includes a random input, we need to understand the nature of that randomness and how to modify it to get the results that we want.

Dice

Let's start with something simple: a die-roll. When most people think of dice, they are thinking of six-sided dice, also known as d6s. But most gamers have encountered plenty of other dice: four-sided (d4), eight-sided (d8), d12, d20… and if you're really geeky, you might have a d30 or d100 lying around. In case you haven't seen this terminology before, "d" with a number after it means a die with that many sides; if there is a number before the "d", that is the number of dice rolled – so for example, in Monopoly you roll 2d6.

Now, when I say "dice" here, that is shorthand. We have plenty of other random-number generators that aren't globs of plastic but that still serve the same function of generating a random number from 1 to n. A standard coin can be thought of as a d2. I've seen two designs for a d7, one which looks like a die and the other which looks more like a seven-sided wooden pencil. A four-sided dreidel (also known as a teetotum) is equivalent to a d4. The spinner that comes with Chutes & Ladders that goes from 1 to 6 is equivalent to a d6. A random-number generator in a computer might create a random number from 1 to 19 if the designer wants it to, even though there's no 19-sided die inside there (I'll actually talk a bit more about numbers from computers next week). While all of these things look different, really they are all equivalent: you have an equal chance of choosing one of several outcomes.

So, if you want to know the average value of a roll (also known as the "expected value" by probability geeks), just add up all the sides and divide by the number of sides. The average roll for a standard d6 is 1+2+3+4+5+6 = 21, divided by the number of sides (6), which means the average is 21/6 = 3.5. (This is a special case: it works because we assume all results are equally likely.)

What if you have custom dice? For example, I've seen one game with a d6 with special labels: 1, 1, 1, 2, 2, 3, so it behaves sort of like this weird d3 where you're more likely to get a 1 than a 2, and more likely to get a 2 than a 3. What's the average roll for this die? 1+1+1+2+2+3 = 10, divided by 6, equals 5/3 or about 1.66. So if you've got that custom die and you want players to roll three of them and add the results, you know they'll roll an average of about a total of 5, and you can balance the game on that assumption.

Dice and Independence

As I said before, we're going on the assumption that each roll is equally likely. Dice have some interesting properties that we need to be aware of. The first is that each side is equally likely to be rolled (I'm assuming you're using a fair die and not a rigged one). Each die roll is also what we call independent, meaning that previous rolls do not influence later rolls; this is true no matter how many dice are rolled. If you roll dice enough times you definitely will see "streaks" of numbers, like a run of high or low numbers or something, and we'll talk later about why that is; but it doesn't mean the dice are "hot" or "cold".
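The die-averaging arithmetic above can be double-checked in a few lines of Python; the face lists mirror the dice described in the text, while the helper itself is just a sketch:

```python
# Average a die by summing its faces and dividing by the face count
# (valid because each face is equally likely); for multiple dice,
# multiply the per-die average by the number of dice.

def die_average(faces):
    return sum(faces) / len(faces)

d6 = [1, 2, 3, 4, 5, 6]
weird = [1, 1, 1, 2, 2, 3]  # the custom die from the text

print(die_average(d6))         # 3.5
print(die_average(weird))      # 1.666... (5/3, the "about 1.66")
print(3 * die_average(weird))  # the average total of three such dice: 5
```

Rolling three of the custom dice really does average out to a total of 5, as claimed.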

but because that’s just what tends to happen in large sets of rolls. It is not more likely because the die is “running hot”. the 5d2 roll will tend to create more rolls of 7 and 8 than any other results. Divide the first number by the second and you’ve got your probability. more faces on a die is also more random since you have a wider spread. The more you roll a die. “fresh” d6. so if you roll a 6 twice. Didn’t I just say that dice don’t run hot or cold? And now I’m saying if you roll a lot of them.5. 4. assume that we’re swapping out dice each time. doing the math for a random die roll is pretty straightforward. so we’re due for something else”. It is not less likely because “we already got two 6s. Same range. If you’re only making a single roll or a small number of rolls. or 50%. For example. but that takes more computation than I want to get into today (I’ll get into it later on). . but you might also get a bunch of low numbers later. each roll is equally likely. Then. and even the same average value (7. if you roll it twenty times and get 6 on each time. independent of the others. 3. the die is a piece of plastic. the more you’ll tend towards the average. the probability of rolling another 6 is… exactly 1/6. You want to roll 4 or more on 1d6. Rolling 5d2 also generates a number between 5 and 10. If it helps. the odds of getting a 6 on the twenty-first roll are actually pretty good… because it probably means you have a rigged die!) But assuming a fair die. mostly you do that by computing something called “standard deviation” and the larger that is the more random it is. And the answer is. Making Dice More or Less Random Now. if you roll a standard d6 and get two 6s in a row. the game will feel more random if you use more sides of the dice. but the nature of the randomness is different. at least in terms of finding the average roll. This is not because previous numbers “force” the die to roll what hasn’t been rolled before. First. or the more dice you roll. 
a way of saying that 1d6+4 is “more random” than 5d2 in that it gives a more even spread. Not because the die is being influenced by previous rolls (seriously. Wait a minute. There are 6 total possible results (1. It’s because a small streak of 6s (or 20s or whatever) ends up not being much influence when you roll another ten thousand times and get mostly average rolls… so you might get a bunch of high numbers now. remove that “hot” die from play and replace with a new. (Of course. 5. 5. or 6). If you roll a single die. the more things will tend towards the average. they tend towards an average? What’s going on? Let me back up. 3 of the results (4. While I’m on the subject. There are also ways to quantify “how random” something is. but I need to be clear about that before we move on. or 6) are a success. rolling 1d6+4 (that is. Examples Here’s a very simple example. or 8. each roll is equally likely. I haven’t come up 2 in awhile”). Computing Probability through Counting You might be wondering: how can we figure out the exact probability of getting a specific roll? This is actually pretty important in a lot of games. count the number of ways to roll dice that get the result you actually want. fewer dice rolled = more random. The more you roll. All I’m asking you to know is that in general. rolling a standard 6-sided die and adding 4 to the result) generates a number between 5 and 10. over time. collectively. So your probability is 3 divided by 6. count the total number of ways to roll dice (no matter what the result). But the single d6 roll has an equal chance of rolling a 5. multiply by 100 if you want the percentage. is doesn’t exactly have a brain to be thinking “gosh. Your small streak in a large ocean of die-rolls will be mostly drowned out.it doesn’t mean the dice are “hot” or “cold”. So. you’ll roll each side about as often as the others. we count two things. or 10. 2. I apologize. Of those.5 in both cases). That means if you roll a lot. 
let’s talk about how you can get different numbers from different dice. For those of you that knew this already. because if you’re making a die roll in the first place there is probably some kind of optimal result. or 0. and over time it all tends to go towards the mean.

Here's a slightly more complicated example. You want to roll an even number on 2d6. There are 36 total results (6 for each die, and since neither die is influenced by the other, you multiply 6 results by 6 to get 36). The tricky thing with questions like this is that it's easy to double-count. For example, there are actually two ways to roll the number 3 on 2d6: 1+2 and 2+1. Those look the same, but the difference is which number appears on the first die, and which appears on the second die. If it helps, think of each die as having a different color, so maybe you have a red die and a blue die in this case. And then you can count the ways to roll an even number: 2 (1+1), 4 (1+3), 4 (2+2), 4 (3+1), 6 (1+5), 6 (2+4), 6 (3+3), 6 (4+2), 6 (5+1), 8 (2+6), 8 (3+5), 8 (4+4), 8 (5+3), 8 (6+2), 10 (4+6), 10 (5+5), 10 (6+4), 12 (6+6). It turns out there are exactly 18 ways to do this out of 36, also 0.5 or 50%. Perhaps unexpected, but kind of neat.

Monte Carlo Simulations

What if you have too many dice to count this way? For example, say you want to know the odds of getting a combined total of 15 or more on a roll of 8d6. There are a LOT of different individual results of eight dice, so just counting by hand takes too long. Even if we find some tricks to group different sets of rolls together, it still takes a really long time. In this case the easiest way to do it is to stop doing math and start using a computer, and there are two ways to do this.

The first way gets you an exact answer but takes a little bit of programming or scripting. You basically have the computer run through every possibility inside a for loop, evaluating and counting up all the iterations total and also all the iterations that are a success, and then have it spit out the answers at the end. Your code might look something like this:

int wincount = 0, totalcount = 0;
for (int i=1; i<=6; i++) {
  for (int j=1; j<=6; j++) {
    for (int k=1; k<=6; k++) {
      // ... insert more nested loops here, one per die, eight in all ...
      if (i+j+k+... >= 15) {
        wincount++;
      }
      totalcount++;
    }
  }
}
float probability = (float)wincount / totalcount; // cast to avoid integer division

If you don't know programming but you just need a ballpark answer and not an exact one, you can simulate it in Excel by having it roll 8d6 a few thousand times and take the results. To roll 1d6 in Excel, use this formula:

=FLOOR(RAND()*6,1)+1

When you don't know the answer so you just try it a lot, there's a name for that: Monte Carlo simulation, and it's a great thing to fall back on when you're trying to do probability calculations and you find yourself in over your head. The great thing about this is, we don't have to know the math behind why it works, and yet we know the answer will be "pretty good" because, like we learned before, the more times you do something, the more it tends towards the average.
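Both approaches to the 8d6 question can be sketched in Python; the exact answer below comes from enumeration (the text itself doesn't state it), and the function names are mine:

```python
# Exact enumeration (the nested loops from the text, collapsed via
# itertools) versus a Monte Carlo estimate of P(8d6 totals 15+).
import random
from itertools import product

def exact_8d6_at_least(target):
    wins = total = 0
    for roll in product(range(1, 7), repeat=8):  # all 6**8 = 1,679,616 outcomes
        total += 1
        if sum(roll) >= target:
            wins += 1
    return wins / total

def monte_carlo_8d6_at_least(target, trials=100_000):
    hits = sum(1 for _ in range(trials)
               if sum(random.randint(1, 6) for _ in range(8)) >= target)
    return hits / trials

exact = exact_8d6_at_least(15)
estimate = monte_carlo_8d6_at_least(15)
print(exact)     # about 0.998 - rolling 15+ on 8d6 is actually very likely
print(estimate)  # close to the exact answer, but not identical
```

Notice how the simulation lands near the exact figure without any combinatorics, which is exactly the appeal of Monte Carlo.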

Combining Independent Trials

If you're asking for several repeated but independent trials, figure out the individual probabilities and multiply. When you have separate, independent probabilities and you want to know the probability that all of them will happen, you take each of the individual probabilities and multiply them together. Another way to think about this: if you use the word "and" to describe several conditions (as in, "what is the probability that some random event happens and that some other independent random event happens?"), figure out the individual probabilities and multiply.

Do not ever add independent probabilities together. This is a common mistake. To see why it doesn't work, imagine a 50/50 coin flip, and you're wondering what is the probability that you'll get Heads twice in a row. The probability of each is 50%, so if you add those together you'd expect a 100% chance of getting Heads, but we know that's not true, because you could get Tails twice. If instead you multiply, you get 50%*50% = 25%, which is the correct probability of getting Heads twice.

How do you tell the difference between something that's dependent and something that's independent? Basically, if you can isolate each individual die-roll (or series of rolls) as a separate event, then it is independent. For example, rolling a total of 15 on 8d6 is not something that can be split up into several independent rolls: since you're summing the dice together, what you get on one die affects the required results of the others, because it's only all of them added together that gives you a single result.

Here's an example of independent rolls: say you have a dice game where you roll a series of d6s. On the first roll, you have to get 2 or higher to stay in the game. On the second roll you have to get 3 or higher. The third roll requires 4 or higher, the fourth roll is 5 or higher, and the fifth roll requires a 6. If you make all five rolls successfully, you win. In this series of 5 rolls, what are the chances you make all of them? The rolls are independent: yes, if you fail one roll it affects the outcome of the entire dice game, and no matter what you do, if you roll really well on your second roll, that doesn't make you any more or less likely to succeed on future rolls; but each individual roll is not influenced by the others, so the result of one roll doesn't affect other rolls. Because of this, we can consider the probability of each roll separately, and each of those is a separate computation. As we said before, these are independent trials, so we just compute the odds of each roll and then multiply them together. The first roll succeeds 5/6 times. The second roll, 4/6. The third, 3/6. The fourth, 2/6. And the fifth roll, 1/6. Multiplying these together, we get about 1.5%… So, winning this game is pretty rare, so you'd want a pretty big jackpot if you were putting that in your game.

Negation

Here's another useful trick: sometimes it's hard to compute a probability directly, but you can figure out the chance that the thing won't happen much easier. Here's an example: suppose we make another game where you roll 6d6, and you win if you roll at least one 6. Directly, this gets messy: you might roll a single 6, which means one of the dice is showing 6 and the others are all showing 1-5, and there are 6 different ways to choose which die is showing 6. And then you might roll two 6s, or three, or more, and each of those is a separate computation, and it gets out of hand pretty quickly. However, there's another way to look at this, by turning it around: you lose if none of the dice are showing 6. Here we have six independent trials, each of which has a probability of 5/6 (the die can roll anything except 6). Multiply those together and you get about 33%. So you have about a 1 in 3 chance of losing.
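The two computations above can be verified in a couple of lines: the five-roll gauntlet multiplies the chance of each independent roll, and the 6d6 game computes the chance of losing and then negates it.

```python
# Multiply independent probabilities; never add them.
import math

gauntlet = math.prod([5/6, 4/6, 3/6, 2/6, 1/6])
print(gauntlet)  # about 0.0154, the "about 1.5%" from the text

lose = (5/6) ** 6  # all six dice avoid showing a 6
win = 1 - lose
print(lose)  # about 0.335 - roughly a 1 in 3 chance of losing
print(win)   # about 0.665 - roughly the "2 in 3" chance of winning
```

The negation trick is the second half: computing `lose` directly is one multiplication, while computing `win` directly would mean summing over every count of 6s.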

Turning it around again: the most obvious lesson here is that if you take a probability and negate it, just subtract from 100%. If you have a 67% (or 2 in 3) chance of winning, the odds of not winning are 100% minus 67%, or 33%. And vice versa. So if you can't figure out one thing, but it's easy to figure out the opposite, figure out that opposite and then subtract from 100%.

Combining Conditions Within a Single Independent Trial

A little while ago, I said you should never add probabilities together when you're doing independent trials. Are there any cases where you can add probabilities together? Yes, in one special situation: when you are trying to find the probability for several non-overlapping success criteria in a single trial, figure out the individual probabilities and add them together. For example, the probability of rolling 4, 5 or 6 on 1d6 is equal to the probability of rolling 4, plus the probability of rolling 5, plus the probability of rolling 6. Another way of thinking about this: when you use the word "or" in your probability (as in, "what is the probability that you will get a certain result or a different result from a single random event?"), add.

One important trait here is that if you add up all possible outcomes for a game, the combined probabilities should add up to exactly 100%. If they don't, you've done your math wrong. For example, if you were analyzing the probability of getting all of the hands in Poker and you added them all up, you should get exactly 100% (or at least really close; if you're using a calculator you might get a very slight rounding error, but if you're doing exact numbers by hand it should be exact). If you don't, it means there are probably some hands that you haven't considered, or that you got the probabilities of some hands wrong, so you need to go back and check your figures. This is a good reality check to make sure you didn't miss anything.

Uneven Probabilities

So far, we've been assuming that every single side on a die comes up equally often, because that's how dice are supposed to work. But occasionally you end up with a situation where there are different outcomes that have different chances of coming up. For example, there's this spinner in one of the Nuclear War card game expansions that modifies the result of a missile launch: most of the time it just does normal damage plus or minus a few, but occasionally it does double or triple damage, or blows up on the launchpad and damages you, or whatever. Unlike the spinner in Chutes & Ladders or A Game of Life, the Nuclear War spinner has results that are not equally probable. Some results have very large sections where the spinner can land, so they happen more often, while other results are tiny slivers that you only land on rarely.

Now, at first glance this is sort of like that 1, 2, 2, 3 die we were talking about earlier, which was sort of like a weighted 1d3. So all we have to do is divide all these sections evenly, find the smallest unit that everything is a multiple of, and then make this into a d522 roll (or whatever) with multiple sides of the die showing the same thing for the more common results. And that's one way to do it, and that would technically work, but there's an easier way.

Let's go back to our original single standard d6 roll. For a normal die, we said to add up all of the sides and then divide by the number of sides… but what are we really doing there? We could say this another way. For a 6-sided die, each side has exactly 1/6 chance of being rolled. So we multiply each side's result by the probability of that result (1/6 for each side in this case), then add all of these together. Doing this, we get (1*1/6) + (2*1/6) + (3*1/6) + (4*1/6) + (5*1/6) + (6*1/6), which gives us the same result (3.5) as we got before. And really, that's what we're doing the whole time: multiplying each outcome by the probability of that outcome, then adding them all up.

Can we do this with the Nuclear War spinner? Sure we can. All we have to do is figure out the probability of each spin, multiply by the result, and add these all together; this will give us the average result.
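To make the "multiply each outcome by its probability, then add" recipe concrete, here's a small Python sketch. The d6 numbers come straight from the text; the spinner's sections and payoffs are made-up placeholder values of mine, not the real Nuclear War spinner.

```python
from fractions import Fraction

def expected_value(outcomes):
    """Sum of (result * probability) over all (result, probability) pairs.
    Also sanity-checks that the probabilities add up to exactly 100%."""
    total_prob = sum(p for _, p in outcomes)
    assert total_prob == 1, "probabilities must add up to 100%"
    return sum(result * p for result, p in outcomes)

# A standard d6: each side has probability 1/6.
d6 = [(side, Fraction(1, 6)) for side in range(1, 7)]
print(expected_value(d6))  # 7/2, i.e. 3.5

# A hypothetical uneven spinner (invented numbers): normal damage
# most of the time, rarer double/triple damage, and a tiny sliver
# where the missile blows up on the launchpad.
spinner = [
    (10, Fraction(70, 100)),   # normal damage
    (20, Fraction(20, 100)),   # double damage
    (30, Fraction(5, 100)),    # triple damage
    (-10, Fraction(5, 100)),   # blows up on the launchpad
]
print(expected_value(spinner))  # 12
```

Using exact fractions instead of floats means the "do my probabilities sum to 100%?" reality check from the text can be an exact equality test rather than an approximate one.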

Another Example

This technique of computing expected value by multiplying each result by its individual probability also works if the results are equally probable but weighted differently, like if you're rolling dice but you win more on some rolls than others. As an example, here's a game you might be able to find in some casinos: you place a wager and roll 2d6. If you roll the lowest three numbers (2, 3 or 4) or the highest four numbers (9, 10, 11 or 12), you win an amount equal to your wager. The extreme ends are special: if you roll 2 or 12, you win double your wager. If you roll anything else (5, 6, 7 or 8), you lose your wager. This is a pretty simple game. But what is the chance of winning? We can start by figuring out how many times you win:

• There are 36 ways to roll 2d6, total.
• There is 1 way to roll two.
• There are 2 ways to roll three, and 2 ways to roll eleven.
• There are 3 ways to roll four, and 3 more ways to roll ten.
• There are 4 ways to roll nine.
• There is 1 way to roll twelve.

Adding these all up, there are 16 winning rolls out of 36, but on two of those (the 2 and the 12) you win twice as much, so that's like winning twice! So if you play this game 36 times with a wager of $1 each time, and roll each possible roll exactly once, you'll win $18 total (you actually win 16 times, but two of those times it counts as two wins). Since you play 36 times and win $18, does that mean these are actually even odds?

Not so fast. If you count up the number of times you lose, there are 20 ways to lose, not 18. You win 16 times out of 36… slightly less than 50%. So if you play 36 times for $1 each, you'll win a total of $18 from the times when you win… but you'll lose $20 from the twenty times you lose! As a result, you come out very slightly behind: you lose $2 net, on average, for every 36 plays (you could also say that on average, you lose 1/18 of a dollar per play). You can see how easy it is to make one misstep and get the wrong probability here!

Permutations

So far, all of our die rolls assume that order doesn't matter. Rolling a 2+4 is the same as rolling a 4+2. In most cases, we just manually count the number of different ways to do something, but sometimes that's impractical and we'd like a math formula. Here's an example problem from a dice game called Farkle. You start each round by rolling 6d6. If you're lucky enough to roll one of each result, 1-2-3-4-5-6 (a "straight"), you get a huge score bonus. What's the probability that will happen? There are a lot of different ways to have one of each!

The answer is to look at it this way: one of the dice (and only one) has to be showing 1. How many ways are there to do that? Six; there are 6 dice, and any of them can show the 1. Choose it and put it aside. Now, one of the remaining dice has to show 2. There's five ways to do that. Choose that and put it aside. Continuing along these lines, any of the four remaining dice can show 3, any of the three dice remaining after that can show 4, either of the two remaining after that can show 5, and at the end you're left with a single die that must show 6 (no choice involved in that last one). To figure out how many ways there are to roll a straight, we multiply all the different, independent choices: 6x5x4x3x2x1 = 720. That seems like a lot of ways to roll a straight, but to get the probability, we have to divide 720 by the number of ways to roll 6d6, total. How many ways can we do that? Each die can show 6 sides, so we multiply 6x6x6x6x6x6 = 46656 (a much larger number!). Dividing 720/46656 gives us a probability of about 1.5%. If you were designing this game, that's good to know, so you can design the scoring system accordingly. We can see why Farkle gives you such a high score bonus for rolling a straight: it only happens very rarely!
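Both of those counts are easy to double-check by brute force. This quick Python sketch (my own sanity check, not part of either game) enumerates every possible roll:

```python
from itertools import product

# The 2d6 casino game: play all 36 rolls once each at $1 a wager,
# and tally the net result.
net = 0
for a, b in product(range(1, 7), repeat=2):
    total = a + b
    if total in (2, 12):
        net += 2            # extreme ends pay double
    elif total in (3, 4, 9, 10, 11):
        net += 1            # normal win
    else:
        net -= 1            # 5, 6, 7 or 8 loses the wager
print(net)  # -2 dollars over 36 plays

# Farkle straights: brute-force all 6^6 = 46656 rolls of 6d6 and
# count the ones that show one of each face.
straights = sum(1 for roll in product(range(1, 7), repeat=6)
                if set(roll) == {1, 2, 3, 4, 5, 6})
print(straights)  # 720, i.e. 720/46656, about 1.5%
```

The brute-force count and the 6x5x4x3x2x1 argument agree, which is a nice way to catch a counting misstep before it sneaks into your design math.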

This result is interesting for another reason. It shows just how infrequently we actually roll exactly according to probability in the short term. In a small series of die-rolls, we expect there to be some unevenness. If we rolled a few thousand dice, we would see about as many of each of the six numbers on our rolls. But rolling just six dice, we almost never roll exactly one of each! We can see from this, incidentally, another reason why expecting dice to roll what hasn't been rolled yet "because we haven't rolled 6 in awhile, so we're about due" is a fool's game.

Your Random Number Generator Is Broken…

This brings us to a common misunderstanding of probability: the assumption that everything is split evenly in the short term. If you've ever worked on an online game with some kind of random-number generator before, you've probably heard this one: a player writes tech support to tell you that your random number generator is clearly broken and not random (which it isn't), and they know this because they just killed 4 monsters in a row and got 4 of the exact same drop, and those drops are only supposed to happen 10% of the time. This is what the player is trying to tell you: "Dude, you do the math. 1/10 * 1/10 * 1/10 * 1/10 is 1 in 10,000, which is pretty infrequent, so this should almost never happen. Clearly your die-roller is busted."

Is there a problem? It depends. How many players are on your server? Let's say you've got a reasonably popular game, and you get 100,000 daily players. Maybe all of them are out killing monsters, multiple times per day, but let's be conservative and say that half of them are just there to trade stuff in the auction house or chat on the RP servers or whatever, so only half of them actually go out monster-hunting. How many of those kill four monsters in a row? Maybe all of them, multiple times per day. What's the chance this will happen to someone? On a scale like that, you'd expect it to happen several times a day, at least!

Incidentally, this is why it seems like every few weeks, at least, someone wins the lottery, even though that someone is never you or anyone you know. If enough people play each week, the odds are you'll have at least one obnoxiously lucky sap somewhere… but that if you play the lottery yourself, you've got worse odds of winning than your odds of being hired at Infinity Ward.

Cards and Dependence

Now that we've talked about independent events like die-rolling, we have a lot of powerful tools to analyze the randomness of many games. Things get a little more complicated when we talk about drawing cards from a deck, because each card you draw influences what's left in the deck. If you have a standard 52-card deck and draw, say, the 10 of Hearts, and you want to know the probability that the next card is also a heart, the odds have changed because you've already removed a heart from the deck. Each card that you remove changes the probability of the next card in the deck. Since each card draw is influenced by the card draws that came before, we call this dependent probability.

Note that when I say "cards" here, I am talking about any game mechanic where you have a set of objects and you draw one of them without replacing it, so in this case "deck of cards" is mechanically equivalent to a bag of tiles where you draw a tile and don't replace it, or an urn where you're drawing colored balls from (I've never actually seen a game that involves drawing balls from an urn, but probability professors seem to have a love of them for some reason).

Properties of Dependence

Just to be clear, with cards I'm assuming that you are drawing cards, looking at them, and removing them from the deck. Each of these is an important property. If I had a deck with, say, six cards numbered 1 through 6, and I shuffled and drew a card and then reshuffled all six cards between card draws, that is equivalent to a d6 die roll; no result influences the future ones. It's only if I draw cards and don't replace them that pulling a 1 on my first draw makes it more likely I'll draw a 6 next time (and it will get more and more likely until I finally draw it, or until I reshuffle).
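Going back to that "broken" random number generator for a second, the back-of-envelope math is short enough to script. The 100,000 daily players and the 10% drop rate come from the story above; the "each hunter tries exactly one four-kill streak per day" is my own deliberately conservative assumption (the real number of chances is much higher):

```python
# Chance that one particular player's next four kills all give
# the same 10% drop:
p_streak = 0.1 ** 4            # 1 in 10,000

# Hypothetical scale: 100,000 daily players, half of whom hunt
# monsters, each with (at least) one four-kill streak per day.
hunters = 100_000 // 2
p_nobody = (1 - p_streak) ** hunters   # nobody sees the streak today
p_somebody = 1 - p_nobody
print(round(p_somebody, 3))  # 0.993
```

So even under these lowball assumptions, a "1 in 10,000" event is nearly guaranteed to hit somebody on the server every single day.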

Calculating probabilities for dependent events follows the same principles as for independent ones, except it's a little trickier, because the probabilities are changing whenever you reveal a card. So you have to do a lot of multiplying different things together, rather than multiplying the same thing against itself, if you want to repeat a challenge. Really, all this means is we have to put together everything we've done already, in combination.

Example

You shuffle a standard 52-card deck, and draw two cards. What's the probability that you've drawn a pair? There are a few ways to compute this, but probably the easiest is to say this: what's the probability that the first card you draw makes you totally ineligible to draw a pair? Zero, so we have a 100% chance that we can still get a pair after drawing the first card. No matter what we draw for our first card, we're still in the running to draw a pair, so the first card doesn't really matter, as long as the second card matches it. What's the probability that the second card matches? There are 51 cards remaining in the deck, and 3 of them match (normally it'd be 4 out of 52, but you already removed a "matching" card on your first draw!), so the probability ends up being exactly 1/17. (So, the next time that guy sitting across the table from you in Texas Hold 'Em says "wow, another pocket pair? Must be my lucky day," you know there's a pretty good chance he's bluffing.)

What if we add two jokers so it's now a 54-card deck, and we still want to know the chance of drawing a pair? Occasionally your first card will be a Joker, and there will only be one matching card in the rest of the deck, rather than 3. How do we figure this out? By splitting up the possibilities and working out each probability. Your first card is either going to be a Joker, or Something Else. Probability of a Joker is 2/54; probability of Something Else is 52/54.

If the first card is a Joker (2/54), then the probability of a match on the second card is 1/53. Multiplying these together (we can do that since they're separate events and we want both to happen), we have 1/1431: less than a tenth of a percent. If the first card is Something Else (52/54), the probability of a match on the second card is up to 3/53. Multiplying these together, we have 78/1431 (a little more than 5%). What do we do with these two results? Since they do not overlap, and we want to know the probability of either of them, we add! 79/1431 (still around 5.5%) is the final answer.

If we really wanted to be careful, we could calculate the probability of all other possible results (drawing a Joker and not matching, or drawing Something Else and not matching), add those together with the probability of winning, and we should get exactly 100%. I won't do the math here for you, but feel free to do it yourself to confirm.

The fact that we are looking at the cards is also important. If I pull a card from the deck but don't look at it, I have no additional information, so the probabilities haven't really changed. This is something that may sound counterintuitive: how does just flipping a card over magically change the probabilities? But it does, because you can only calculate the probability of unknown stuff based on what you do know. For example, if you shuffle a standard deck, reveal 51 cards, and none of them is the Queen of Clubs, you know with 100% certainty that this is what the missing card is. If instead you shuffle a standard deck and take 51 cards away without revealing them, the probability that the last card is the Queen of Clubs is still 1/52. For each additional card you reveal, you get more information.

The Monty Hall Problem

This brings us to a pretty famous problem that tends to really confuse people, called the Monty Hall problem. It's called that because there used to be this game show called Let's Make a Deal, with your host, Monty Hall.
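By the way, the two-card pair probabilities (1/17 without Jokers, 79/1431 with them) are easy to double-check by having Python enumerate every possible two-card hand. A sketch of my own, using exact fractions:

```python
from fractions import Fraction
from itertools import combinations

# A 52-card deck as (rank, suit) pairs: 13 ranks, 4 suits.
deck = [(r, s) for r in range(13) for s in range(4)]
hands = list(combinations(deck, 2))            # all 1326 two-card hands
pairs = [h for h in hands if h[0][0] == h[1][0]]
print(Fraction(len(pairs), len(hands)))        # 1/17

# Same thing with two Jokers added; a Joker's "rank" matches only
# the other Joker.
deck54 = deck + [("JOKER", 0), ("JOKER", 1)]
hands54 = list(combinations(deck54, 2))        # all 1431 two-card hands
pairs54 = [h for h in hands54 if h[0][0] == h[1][0]]
print(Fraction(len(pairs54), len(hands54)))    # 79/1431
```

Enumerating unordered hands sidesteps the "cards can arrive in either order" trap entirely, which is exactly the trap the Poker homework below warns about.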

If you've never seen the show, it was sort of like this inverse The Price Is Right. In The Price Is Right, the host (used to be Bob Barker; now it's… Drew Carey? Anyway…) is your friend. He wants to give away cash and fabulous prizes, and he tries to give you every opportunity to win, as long as you're good at guessing how much their sponsored items actually cost. Maybe I'm being overly harsh, but when your chance of being selected as a contestant seems directly proportional to whether you're wearing a ridiculous-looking costume, I tend to draw these kinds of conclusions.

Monty Hall wasn't like that. He was like Bob Barker's evil twin. His goal was to make you look like an idiot on national television. If you were on the show, you were playing a game against him: he was the enemy, and the odds were stacked in his favor. One of the biggest memes from the show was that you'd be given a choice of three doors, and they would actually call them Door Number 1, Door Number 2 and Door Number 3. They'd give you a door of your choice… for free! Behind one door, you're told, is a fabulous prize like a Brand New Car. Behind the other doors, there's no prize at all, no nothing; they'd have something silly-looking back there like a goat, or a giant tube of toothpaste, or something… something that was clearly not a Brand New Car. Except the goal is to humiliate you. After all, if you've got a car and then you give it away for a goat, you're going to look pretty dumb, which is exactly what he wants.

So here's what he'd do to change the game a little. You'd pick your door, and Monty would get ready to reveal if you won or not… but wait. You chose Door Number 3? Well, before we do that, let's reveal Door Number 1 to show you that there was no prize there. And now, being the generous guy he is, he gives you the chance to trade your Door Number 3 for whatever's behind Door Number 2 instead, because that's the kind of evil guy he is. And here's where we get into probability: does switching doors increase your chance of winning, or decrease it, or is it the same? What do you think?

The real answer is that switching increases your chance of winning from 1/3 to 2/3. This is counterintuitive. If you haven't seen this problem before, you're probably thinking: wait, just by revealing a door we've magically changed the odds? But as we saw with our card example earlier, that is exactly what revealed information does. Your odds of winning with your first pick are obviously 1/3, and I think everyone here would agree to that. When that new door is revealed, it doesn't change the odds of your first pick at all (it's still 1/3), but that means the other door now has a 2/3 chance of being the right one.

Let's look at it another way. You choose a door, and I offer to swap you for both of the other doors. Of course you'd want to switch! Sure, there's only one prize and two doors you didn't choose, so at least one of those other two doors is worthless, but together they have a 2/3 chance of hiding the prize. Since Monty knows where the prize is, he reveals one of them to not be a prize… but he can always do that, no matter what, so that doesn't really change anything. If you picked the door with the prize behind it, which does happen 1/3 of the time, switching loses. But if you pick a door with no prize behind it, which happens the other 2/3 of the time, the door Monty didn't reveal must be hiding the car, and switching wins.

If you're still wondering about this and need more convincing, clicking here will take you to a wonderful little Flash app that lets you explore this problem. You can actually play, starting with something like 10 doors and eventually working your way down to 3. Oh, and there's also a simulator where you can give it any number of doors from 3 to 50 and just play on your own, or have it actually run a few thousand simulations and give you how many times you would have won if you stayed versus when you switched.
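You can also check the Monty Hall answer with a quick simulation. Here's a sketch of my own in Python (the door numbering is arbitrary, and it assumes the classic rules where Monty always offers the switch):

```python
import random

def monty_trial(switch):
    """One round: prize behind a random door; you pick door 0 (any
    fixed choice works by symmetry); Monty opens a non-prize door
    you didn't pick; you either stay or switch to the last door."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = 0
    opened = next(d for d in doors if d != pick and d != prize)
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

random.seed(1)  # fixed seed so reruns give the same estimate
trials = 100_000
stay = sum(monty_trial(switch=False) for _ in range(trials)) / trials
swap = sum(monty_trial(switch=True) for _ in range(trials)) / trials
print(stay, swap)  # roughly 0.333 and 0.667
```

Over a hundred thousand trials, staying hovers around 1/3 and switching around 2/3, exactly as the argument above predicts.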

It’s more a brain teaser. so whether you switch or stay your overall odds are 1/3 throughout the whole game… no better than if you just guessed and he showed you the door. half of the time he’ll offer to switch. Turn and River in Texas Hold ‘Em). and he’s offering you the chance because he knows you guessed right at the beginning and you’ll take the bait and fall into his trap? Or maybe he’s being uncharacteristically nice. we already know that the 1/3 of the time when he gives you your goat and you leave didn’t happen. Remember. and goading you into doing something in your own best interest. Of the remaining 2/3 of the time (you pick wrong initially). you’ll guess wrong initially but be given the choice to switch. and at least one of them is a girl. A third of the time you’ll guess right initially. is that most of the formats involve slowly revealing cards in between rounds of betting (like the Flop. so that would skew the odds a bit where if you know one of their kids is already a girl. If he offers an exchange. that the odds are slightly higher they’ll have more . Incidentally. otherwise he has a 50/50 chance of giving you your goat or giving you the chance to switch. and the other 1/3 means we guessed wrong. Add the two non-overlapping win states together and you get 1/3. this is no longer a game of math and now a game of psychology. Monty manages to offer a choice (sometimes) while still keeping the overall probability of winning at 1/3. because it means our chance of winning has now changed. and there’s no mathematical advantage to keeping or switching.half he’ll just show you your Brand New Goat and boot you off the stage. The Sibling Problem And that brings us to another famous problem that tends to throw people. and 50% of that time you’ll win (1/3 x 1/2 = 1/6). Let’s analyze this new game. 
because he hasn’t given away a car in awhile and his producers are telling him the audience is getting bored and he’d better give away a big prize soon so their ratings don’t drop? In this way. Suppose he follows this algorithm: always let you switch if you picked the door with the car. a third of the time you’ll just lose outright. and 50% of that time you’ll win (also 1/6). this is one of the same reasons Poker can be so interesting. The question is this: I have a friend with two kids. the Siblings problem. This is about the only thing I’m writing about today that isn’t directly related to games (although I guess that just means I should challenge you to come up a game mechanic that uses this). 1/3 of the time you picked wrong and he offers the switch. and 1/3 of the time you picked right and he offers the switch. Did Monty offer you a choice because he thinks you’re a sucker who doesn’t know that switching is the “right” choice. without any of this switching business at all! So the point of offering to switch doors is not done for the purpose of changing the odds. Now what are your chances of winning? 1/3 of the time. but simply because drawing out the decision makes for more exciting television viewing. so basically 1/3 of the time you get your goat and leave. but a fun one. Like in Poker. and that you’ll stubbornly hold onto the door you picked because psychologically it’s worse to have a car and then lose it? Or does he think you’re smart and that you’ll switch. there’s a 50/50 chance of having a boy or a girl. And a third of the time. you pick the prize right away and he offers you to switch. so if we’re given a choice at all it means our probability of winning is now 50/50. because you start off with a certain probability of winning and that probability is changing in between each betting round as more cards are revealed. That is useful information. half the time he won’t. Half of 2/3 is 1/3. 
where Monty can choose whether or not to give you the chance to switch. Of the 2/3 of the time where we’re given a choice. What is the probability that the other one is also a girl? Assume that in the normal human population. and in order to solve it you really have to be able to understand conditional probability like we’ve been talking about. and assume that this is universally true for any child (in reality some men actually do produce more X or Y sperm. 1/3 means we guessed right.

Intuitively, since we're dealing with a core 1/2 chance, we would expect the answer to be something like 1/2 or 1/4, or some other nice, round number that's divisible by 2. The actual answer is 1/3. Wait, what?

The trick here is that the information we were given narrows down the possibilities. Let's say the parents are Sesame Street fans, and so no matter what the sex, they name their kids A and B. Under normal conditions, there are four possibilities that are equally likely: A and B are both boys, A and B are both girls, A is a boy and B is a girl, or A is a girl and B is a boy. Since we know at least one of them is a girl, we can eliminate the possibility that A and B are both boys, so we have three (still equally likely) scenarios remaining. Since they're equally likely and there are three of them, we know each one has a probability of 1/3. Only one of those three scenarios involves two girls, so the answer is 1/3.

The Sibling Problem, Redux

It gets weirder. Suppose instead I tell you my friend has two children, and one is a girl who was born on a Tuesday. Assume that under normal conditions, a child is equally likely to be born on any of the seven days of the week. What's the probability the other child is also a girl? You'd think the answer would still be 1/3; after all, what does Tuesday have to do with anything? But again, intuition fails. The actual answer is 13/27, which isn't just unintuitive, it's plain old weird-looking, and apparently designed for no other reason than to make your brain hurt.

What's going on here? Tuesday actually changes the odds, again because we don't know which child it was, or if both children were born on Tuesday. By the same logic as earlier, we count all valid combinations of children where at least one is a Tuesday girl. Again assuming the children are named A and B, the combinations are:

• A is a Tuesday girl, B is a boy (there are 7 possibilities here, one for each day of the week that B could be born on).
• B is a Tuesday girl, A is a boy (again, 7 possibilities).
• A is a Tuesday girl, B is a girl born on a different day of the week (6 possibilities).
• B is a Tuesday girl, A is a non-Tuesday girl (again, 6 possibilities).
• A and B are both girls born on Tuesday (1 possibility, but we have to take care not to double-count this).

Adding it up, there are 27 different, equally likely combinations of children and days with at least one Tuesday girl. Of those, 13 possibilities involve two girls, so the answer is 13/27. Again, this is totally counterintuitive. If you're still scratching your head, ludologist Jesper Juul has a nice explanation of this problem on his website.

If You're Working on a Game Now…

If a game you're designing has any randomness, this is a great excuse to analyze it. Choose a random element you want to analyze. For that element, first ask yourself what kind of probability you're expecting to see, and what makes sense to you in the context of the game. For example, if you're making an RPG and looking at the probability that the player will hit a monster in combat, ask yourself what to-hit percentage feels right to you. Usually in console RPGs, misses by the player are very frustrating, so you wouldn't usually want them to miss a lot… maybe 10% of the time or less? If you're an RPG designer you probably know better than I, but you should have some basic idea of what feels right.

Then, ask yourself if this is something that's dependent (like cards) or independent (like dice). Break down all possible results, and the probabilities of each. Make sure your probabilities sum to 100%. And lastly, compare the actual numbers to the numbers you were expecting.

Is this particular random die-roll or card-draw acting how you want it to, or do you see signs that you need to adjust the numbers? And of course, if you do find something to adjust, you can use these same calculations to figure out exactly how much to adjust it!

Homework

Your "homework" this week is meant to help you practice your probability skills. I have two dice games and a card game for you to analyze using probability, and then a weird mechanic from a game I once worked on that provides a chance to try out a Monte Carlo simulation. Try and solve them yourself; I'll post all answers here next week.

Game #1: Dragon Die

This is a dice game that I invented with some co-workers one day (thanks Jeb Havens and Jesse King!) specifically to mess with people's heads on probability. It's a simple casino game called Dragon Die, and it's a dice gambling contest between you and the House. You are given a standard 1d6, and you roll it. You're trying to roll higher than the House. The House is given a non-standard 1d6: it's similar to yours, but instead of a 1 it has a Dragon on it (so the House die is Dragon-2-3-4-5-6). If the House rolls a Dragon, then the House automatically wins and you automatically lose. Otherwise, the winner is whoever rolls highest. If you both roll the same number, it's a push, and you both re-roll.

Obviously, the odds are slightly against the player here, because the House has this Dragon advantage. But how much of an advantage is it? You're going to calculate it. But first, exercise your intuition. Suppose I said this game was offered with a 2 to 1 payout. That is, if you bet $1 and win, you keep your $1 and get $2 extra in winnings; if you lose, you lose your standard bet. Would you play? That is, intuitively, do you think the odds are better or worse than 2 to 1? Said another way, for every 3 games you play, do you expect to win more than once, or less than once, or exactly once, on average?

Once you've used your intuition, do the math. There are only 36 possibilities for both dice, so you should have no problem counting them all up. If you're not sure about this "2 to 1" business, think of it this way: suppose you played the game 36 times (wagering $1 each time), rolling each possible combination exactly once. A win nets you $2 up, a loss causes you to lose $1, and a push is no change. Count up your total winnings and losses and figure out if you come out ahead or behind. And then ask yourself how close your intuition was. And then realize how evil I am. And yes, if you're wondering, the actual dice-roll mechanics here are something I'm intentionally obfuscating, but I'm sure you'll all see through that once you sit down and look at it.

Game #2: Chuck-a-Luck

There is a gambling dice game called Chuck-a-Luck (also known as Birdcage, because sometimes instead of rolling dice, they're placed in a wire cage that somewhat resembles a Bingo cage). This is a simple game that works like this: place your bet (say, $1) on any number from 1 to 6. You then roll 3d6. For each die that your number shows up on, you get $1 in winnings (and you get to keep your original bet). If no dice show your number, the house takes your $1 and you get nothing. So if you place on 1 and you roll triple 1s, you actually win $3 total.

Intuitively, it seems like this is an even-odds game. Each die is individually a 1/6 chance of winning, so adding all three should give you a 3/6 chance of winning. But of course, if you calculate that way, you're adding when these are separate die-rolls, and you're only allowed to add if you're talking about separate win conditions from the same die. You need to be multiplying something. When you count out all possible results (you'll probably find it easier to do this in Excel than by hand, since there are 216 results), it still looks at first like an even-odds game.

Game #3: 5-Card Stud Poker
When you've warmed up with the previous two exercises, let's try our hand at dependent probability by looking at a card game. In particular, let's assume Poker with a 52-card deck. Let's also assume a variant like 5-Card Stud, where each player is dealt 5 cards and that's all they get: no common cards, no ability to discard and draw, just a straight-up you get 5 cards and that's what you get. A "Royal Flush" is the 10-J-Q-K-A in the same suit, and there are four suits, so there are four ways to get a Royal Flush. Calculate the probability that you'll get one. This should be simple… but as you'll see, there are a few traps you can fall into. One thing I'll warn you about here: remember that you can draw those five cards in any order. So you might draw an Ace first, or a Ten, or whatever — which means there are actually a lot more than 4 ways to get dealt a Royal Flush, if you consider the cards to be dealt sequentially!

Game #4: IMF Lottery
This fourth question is one that can't easily be solved through the methods we've talked about today. In a game I worked on that I've mentioned before, called Chron X, there was this really interesting card called IMF Lottery. Here's how it worked: you'd put it into play. The card didn't start with any tokens, but if it stayed around, then at the start of each of your turns it gained a token. At the end of your turn, the game would roll a percentile, and there was a 10% chance the card would leave play; when it left, a random player would gain 5 of each resource type for every token on the card. So on the turn you put it into play, there is a 10% chance it'll leave and no one gets anything; if it stayed around, the payout would be 5 on the next turn, then 10, then the next turn it would be 15, then 20, and so on. The question is: what is the expected value of the number of total resources you'll get from this card when it finally leaves play?

Normally, we'd approach this by finding the probability of each outcome, multiplying each probability by its outcome, and adding all of these up (i.e., expected value). So there is a 10% chance you get 0 (0.1*0 = 0). If that doesn't happen (90% chance), then there is a further 10% chance (actually 9% at this point, since it's 10% of 90%) that on the very next turn it'll leave play and someone will get 5 resources. There's a 9% chance you get 5 resources (that's 9%*5 = 0.45 resources). If it leaves play on the turn after that (10% of the remaining 81%, so an 8.1% chance), someone gets 10 resources: there's an 8.1% chance you get 10 resources (8.1%*10 = 0.81 resources total). And so on, for an infinite number of turns. But you can quickly see a problem: there is always going to be a chance that the card will not leave play, so this could conceivably stay in play without leaving forever, and there's no actual way to write out every single possibility. The techniques we learned today don't give us a way to deal with infinite recursion, so we'll have to fake it — but you can simulate it pretty easily. So this is a way to practice your Monte Carlo technique, either with programming or with some fudging around in Excel.

If you know enough programming or scripting to feel comfortable doing this, write a program to simulate this card. You should have a while loop that initializes a variable to zero, rolls a random number, and 10% of the time exits the loop; otherwise it adds 5 to the variable and iterates. When it finally breaks out of the loop, have it increment the total number of trials by 1, and the total number of resources by whatever the variable ended up as; then re-initialize the variable and try again. Run this a few thousand times. At the end, divide the total number of resources by the total number of trials, and that's your Monte Carlo expected value. Run the program a few times to see if the numbers you're getting are about the same; if there's still a lot of variation in your final numbers, double the number of trials and try again, increasing the number of iterations in the outer loop until you start getting some consistency.

If you don't know programming (or even if you do), this is an excuse to exercise your Excel skills; you can never have enough Excel skills as a game designer. Here you'll want to make good use of the IF and RAND statements. RAND takes no values; it just returns a random decimal number between 0 and 1. Usually we combine it with FLOOR and some plusses or minuses to simulate a die roll, as I mentioned earlier. In this case, though, we just have a 10% check for the card leaving play, so we can just check if RAND is less than 0.1 and not mess with this other stuff. IF takes in three values, in order: a condition that's either true or false, then a value to return if it's true, and then a value to return if it's false. So the following statement will return 5 ten percent of the time, and 0 the other ninety percent of the time: =IF(RAND()<0.1,5,0)

There are a lot of ways to set this up, but if I were doing it, I'd use a formula like this for the cell that represents the first turn; let's say this is cell A1: =IF(RAND()<0.1,0,-1)

Here I'm using negative one as shorthand for "this card hasn't left play and given out any resources yet." So if the first turn ended and the card left play right away, A1 would be 0 (number of resources); otherwise A1 would be -1 (hasn't left play yet). For the next cell, representing the second turn: =IF(A1>-1,A1,IF(RAND()<0.1,5,-1))

So if the card already left play on the first turn, A1 would be zero or above, and this cell would just copy that value. Otherwise, this cell proceeds to roll randomly again: 10% of the time it returns 5 (for the 5 resources), and the rest of the time it is still -1. Continuing this formula for additional cells simulates additional turns (with the payout rising by 5 each turn), and whatever cell is at the end gives you a final result (or -1 if it never left play after all of the turns you're simulating). We might not be able to do an infinite test in Excel (there are only so many cells that fit in a spreadsheet), but we can at least cover the majority of cases. Take this row of cells, which represents a single play of the card, and copy and paste it for a few hundred (or a few thousand) rows. Then have a single cell where you take the average of the results of all the rows (Excel helpfully provides the AVERAGE() function for this). In Windows, you can hit F9 to reroll all your random numbers; do that a few times and see if the values you get are similar to each other, and you can be pretty sure that whatever you come up with is going to be about right.
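For those without Excel handy, the same row-of-cells logic can be sketched in Python (my translation, not part of the original post; the fixed turn cutoff plays the role of the spreadsheet's limited number of columns, and -1 is the same "hasn't left play yet" sentinel):

```python
import random

TURNS = 200  # "columns" in the spreadsheet row; 0.9**200 is effectively zero

def simulate_row():
    """Mirror the Excel row: -1 means 'still in play', otherwise the payout."""
    value = -1
    for turn in range(1, TURNS + 1):
        if value > -1:
            break                   # card already left; later cells just copy
        if random.random() < 0.1:   # 10% chance of leaving play this turn
            value = 5 * (turn - 1)  # 0 on the first turn, then 5, 10, 15...
    return value                    # -1 if it never left within TURNS turns

random.seed(0)
rows = [simulate_row() for _ in range(10_000)]
resolved = [r for r in rows if r >= 0]
print("unresolved rows:", len(rows) - len(resolved))
print("average payout of resolved rows:", sum(resolved) / len(resolved))
```

Note one small wrinkle the spreadsheet version shares: if any rows never resolve, a plain AVERAGE over the final cells would mix -1s into the result, so it's worth filtering them out as done here.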
Unsolved Problems
If you happen to have a Ph.D. in Probability already and the problems above are too easy for you, here are two problems that I've wondered about for years, but I don't have the math skills to solve them. If you happen to know how to do these, post as a comment; I'd love to know how.

Unsolved #1: IMF Lottery
The first unsolved problem is the previous homework, as I mentioned earlier. I can do a Monte Carlo simulation (either in C++ or Excel) pretty easily and be confident of the answer for how many resources you get, but I don't actually know how to come up with a definitive, provable answer mathematically (since this is an infinite series). If you know how, post your math… after doing your own Monte Carlo simulation to verify the answer, of course.

Unsolved #2: Streaks of Face Cards
This problem is one I was posed by a fellow gamer over 10 years ago. They witnessed a curious thing while playing Blackjack in Vegas: out of an eight-deck shoe, they saw ten face cards in a row (a face card is 10, J, Q or K, so there are 16 of them in a standard 52-card deck, which means there are 128 of them in a 416-card shoe). What is the probability that there is at least one run of ten or more face cards, somewhere, in an eight-deck shoe? Assume a random, fair shuffle. (Or, if you prefer, what are the odds that there are no runs of ten or more face cards, anywhere in the sequence?)

You can simplify this. There's a string of 416 bits. Each bit is 0 or 1. There are 128 ones and 288 zeros scattered randomly throughout. How many ways are there to randomly interleave 128 ones and 288 zeros, and how many of those ways involve at least one clump of ten or more 1s? Every time I've sat down to solve this problem, it seems like it should be really easy and obvious at first, but then once I get into the details it suddenly falls apart and becomes impossible. This is a case where I just don't have a technique for counting all of the numbers. I could certainly brute-force it with a computer algorithm, but again that is way beyond the scope of this blog post, and it's the mathematical technique that I'd find more interesting to know. So before you spout out a solution, really sit down to think about it and examine it, and work out the actual numbers yourself — because every person I've ever talked to about this (and this includes a few grad students in the field) has had that same "it's obvious… no, wait, it's not" reaction.
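The brute-force computer check mentioned above is easy to approximate, even if it doesn't produce the exact combinatorial count the problem asks for. Here is a Monte Carlo sketch in Python (mine, not the author's) that shuffles the 416-bit string and estimates how often a clump of ten or more 1s appears:

```python
import random

def has_run_of_ones(bits, length=10):
    """True if the bit sequence contains a clump of `length` or more 1s."""
    run = 0
    for b in bits:
        run = run + 1 if b else 0
        if run >= length:
            return True
    return False

def estimate(trials=20_000):
    """Estimate P(at least one run of 10+ face cards in an 8-deck shoe)."""
    shoe = [1] * 128 + [0] * 288   # 128 face cards, 288 other cards
    hits = 0
    for _ in range(trials):
        random.shuffle(shoe)       # a random, fair shuffle
        hits += has_run_of_ones(shoe)
    return hits / trials

random.seed(0)
print("P(at least one run of 10+ face cards) ~", estimate())
```

This only gives a statistical estimate with some noise around it — the exact fraction of the interleavings is the part that needs the missing mathematical technique.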

Level 5: Probability and Randomness Gone Horribly Wrong

Answers to Last Week's Questions
If you want to check your answers from last week:

Dragon Die
First off, note that the so-called "Dragon Die" is really just a "1d6+1" in disguise. If you think of the Dragon as a "7" (since it always wins), the faces are 2-3-4-5-6-7, so really this is just asking how a +1 bonus to a 1d6 die roll affects your chance of rolling higher. It turns out the answer is: a lot more than most people think! If you write out all 36 possibilities of 2-7 versus 1-6, you find there are 21 ways to lose to the House (7-1, 7-2, 7-3, 7-4, 7-5, 7-6, 6-1, 6-2, 6-3, 6-4, 6-5, 5-1, 5-2, 5-3, 5-4, 4-1, 4-2, 4-3, 3-1, 3-2, 2-1), 5 ways to draw (6-6, 5-5, 4-4, 3-3, 2-2), and 10 ways to win (5-6, 4-5, 4-6, 3-4, 3-5, 3-6, 2-3, 2-4, 2-5, 2-6). We ignore draws, since a draw is just a re-roll, which we would keep repeating until we got a win or loss event; so only 31 results end in some kind of resolution. Of those, 21 are a loss and 10 are a win, so this game only gives a 10/31 chance of winning. In other words, you win slightly less than 1 time in 3.

Chuck-a-luck
Since there are 216 ways to get a result on 3d6, and since all 216 die roll results are equally likely, we simply add all results together and divide by 216 to get the expected win or loss percentage. (To get expected value, we multiply each of the 216 results by its likelihood; it is easier to do this in Excel than by hand.) Since all numbers are equally likely from 1 to 6, the odds are the same no matter which of the six numbers you choose to pay out. You will find that if you choose the number 1 to pay out, there are 75 ways to roll a single win (1-X-X, where "X" is any of the five losing results, so there are 25 ways to do this; then X-1-X and X-X-1 for wins with the other two dice, counting up to 75 total). There are likewise 15 ways to roll a double win (1-1-X, X-1-1, 1-X-1; five each of three different ways = 15), and only one way to roll a triple win (1-1-1). If you count it up: (75 ways to win * $1 winnings) + (15 ways for double win * $2) + (1 way for triple win * $3) = $108 in winnings. Since we play 216 times and 108 is half of 216, at first this appears to be even-odds. Not so fast! We still must count up all the results when we lose. Out of the 216 ways to roll the dice, there are only 91 ways to win, but 125 ways to lose (the reason for the difference is that while doubles and triples are more valuable when they come up on your number, there are a lot more ways to roll doubles and triples on a number that isn't yours). Each of those 125 losses nets you a $1 loss, and we lose more than 108 times. Adding everything up, we get an expected value of negative $17 out of 216 plays, or an expected loss of about 7.9 cents each time you play this game with a $1 bet. That may not sound like much (7.9 cents is literally pocket change), but keep in mind this is per dollar, so the House advantage of this game is actually 7.9% — one of the worst odds in the entire casino!

Royal Flush
In this game, you're drawing 5 cards from a 52-card deck, sequentially. The cards must be 10-J-Q-K-A, but they can be in any order, so there are 20 cards total on your first draw that make you potentially eligible for a royal flush (20/52); the first card can be any of those 5 cards, of any suit. For the second card, it must match the suit of the first card you drew, so there are only 4 cards out of the remaining 51 in the deck that you can draw (4/51). For the third card, there are only 3 cards out of the remaining 50 that you need for that royal flush (3/50). The fourth card has 2 cards out of 49, and the final card must be 1 card out of 48.

Multiplying all of these together, we get 480 / 311,875,200. If you want a decimal, this divides to 0.0000015 (or, if you multiply by 100 to get a percentage, 0.00015% — i.e., a little more than one ten-thousandth of one percent), or 1 in 649,740. This means seeing a "natural" Royal Flush from 5 cards is a once in a lifetime event, if that.

IMF Lottery
If you check the comments from last week, several folks found solutions for this that did not require a Monte Carlo simulation. The answer is 45 resources: since the card has a 10% chance of leaving play each turn, it will stay in play for an average of 10 turns. For most of us, this actually ends up being rather intuitive… but, as we've seen from probability, most things are not intuitive. So the fact that this one problem ends up with an answer that's intuitive is itself counterintuitive.
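All four answers are easy to double-check mechanically. This quick Python check is mine, not from the original post (the Dragon Die enumeration assumes the 2-3-4-5-6-Dragon reading described above):

```python
from fractions import Fraction
from itertools import product

# Dragon Die: player's d6 vs. the House's 2-3-4-5-6-Dragon (call the Dragon 7).
pairs = list(product(range(1, 7), [2, 3, 4, 5, 6, 7]))
wins = sum(p > h for p, h in pairs)
draws = sum(p == h for p, h in pairs)
losses = sum(p < h for p, h in pairs)
print("Dragon Die (win/draw/loss):", wins, draws, losses)  # 10 5 21

# Chuck-a-Luck: bet $1 on the number 1 and roll 3d6; $1 per matching die.
net = sum(r.count(1) or -1 for r in product(range(1, 7), repeat=3))
print(f"Chuck-a-Luck net over all 216 rolls: ${net}")      # $-17

# Royal Flush, dealt sequentially: 20/52 * 4/51 * 3/50 * 2/49 * 1/48.
p_rf = Fraction(20 * 4 * 3 * 2 * 1, 52 * 51 * 50 * 49 * 48)
print("Royal Flush probability:", p_rf)                    # 1/649740

# IMF Lottery: the series 0.1 * 0.9**(n-1) * 5*(n-1) converges to 45.
ev = sum(0.1 * 0.9 ** (n - 1) * 5 * (n - 1) for n in range(1, 2000))
print("IMF Lottery expected value:", round(ev, 6))         # 45.0
```

The last line also shows why the "infinite series" turns out to be tame: the terms shrink geometrically, so truncating after a couple thousand turns loses essentially nothing.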
This Week's Topic
Last week, I took a brief time-out from the rest of the class, where we've been talking about how to balance a game, just so we could compute some basic odds. This week I'm going to blow that all up by showing you two places where the true odds can go horribly wrong: human psychology, and computers.

Human Psychology
When I say psychology, I mean something I touched on briefly last week: most people are really terrible at having an intuition for true odds. So even if we actually make the random elements of our games perfectly fair, an awful lot of players will perceive the game as being unfair. Therefore, as game designers, we must be careful to understand not just true probability, but also how players perceive the probability in our games and how that differs from reality, so that we can take that into account when designing the play we want them to experience.

Computers
Most computers are deterministic machines, following deterministic algorithms to convert some ones and zeros into other ones and zeros. Yet somehow, we must get a nondeterministic value (a "random number") from a deterministic system, which as we'll see is not always trivial. This is done through some mathematical sleight-of-hand to produce what we call pseudorandom numbers: numbers that sorta kinda look random, even though in reality they are just ones and zeros. Understanding the difference between random and pseudorandom has important implications for video game designers, and even board game designers if they ever plan to make a computer version of their game (which happens often with "hit" board games), or if they plan to include electronic components in their board games that have any kind of randomness.
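As a concrete (and hypothetical — it's my illustration, not from the original post) taste of that sleight-of-hand, here is a minimal linear congruential generator in Python. Real engines use stronger generators such as the Mersenne Twister, but even those are, at bottom, the same idea: pure deterministic arithmetic that merely looks random.

```python
def lcg(seed):
    """A tiny linear congruential generator. The multiplier and increment
    are the well-known Numerical Recipes constants; the state is 32 bits."""
    state = seed % 2**32
    while True:
        state = (1664525 * state + 1013904223) % 2**32
        yield state / 2**32  # scale the integer state to a decimal in [0, 1)

gen = lcg(seed=42)
print([round(next(gen), 4) for _ in range(5)])

# Deterministic: the same seed always reproduces the exact same "random" stream.
print([round(x, 4) for x, _ in zip(lcg(seed=42), range(5))])
```

The two printed lists are identical — which is exactly the property that makes pseudorandomness both useful (replays, debugging) and exploitable if players can learn the seed.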

But First… Luck Versus Skill
Before we get into psychology and computers, there's this implicit assumption that we've mostly ignored for the past week that's worth discussing and challenging. The assumption is that adding some randomness to a game can be good, but too much randomness is a bad thing… and maybe we have some sense that all games fall along some kind of continuum between two extremes of "100% skill-based" (like Chess or Go) and "100% luck-based" (like Chutes & Ladders or Candyland). We might go so far as to think of a corresponding split between casual and hardcore: the more luck in a game, the more casual the audience; the more a game's outcome relies on skill, the more we think of the game as hardcore. If these are the only games we look at, that seems reasonable. This is not always the case, though. Each hand of Poker is highly random, yet we still think of it as a game where skill dominates. Meanwhile, the game of Blackjack is also random, but aside from counting cards, we don't normally think of it as a game that requires a lot of skill; we see that more as a game of chance than Poker. And yet Tic-Tac-Toe has no randomness at all.

Then we get into physical contests like professional sports. We think of these as skill-based games. Yet enthusiasts track all kinds of statistics on players and games, we talk about percentage chances of a player making a goal or missing a shot or whatever, and sports gamblers make cash bets on the outcomes of games, as if these were not games of skill but games of chance. What's going on here? There are a few explanations.

Poker vs. Blackjack
Why the difference in how we perceive Poker and Blackjack? The difference is in when the player makes their bet. In Poker, a successful player computes the odds to come up with a probability calculation that they have the winning hand, and they factor that into their bet along with their perceived reactions of their opponents. As more cards are revealed, the player adjusts their strategy; the player's understanding of the odds and their ability to react to changes has a direct relation to their performance in the game. In Blackjack, by contrast, you place a bet at the beginning of the hand before you know what cards you're dealt, and you generally don't have the option to "raise" or "fold" as you see more cards revealed. Blackjack does have some skill, but it's a very different kind of skill than Poker. Knowing when to hit or stand or split or double down based on your total, the dealer's showing card, and (if you're counting cards) the remaining percentage of high cards in the deck… these things are skill in the same sense as skill at Pac-Man. You are memorizing and following a deterministic pattern. You simply place your bet according to an algorithm and expect that over time you do as well as you can, but you are not making any particularly interesting decisions. It's the same reason we don't think of the casino as having a lot of "skill" at Craps or Roulette, just because it wins more than it loses.

Professional Sports
What about the sports problem, where a clearly skill-based game seems like it involves random die-rolls? The reason for the paradox is that it all depends on your frame of reference. If you are actually a player on a sports team, the game is won or lost partly by your level of skill. If you are a spectator, by definition you have no control over the outcome of a game; as far as you're concerned, the outcome is a random event. This is why, on the one hand, sports athletes get paid based on how much they win (it's not a gamble for them), but sports fans can still gamble on the random (from their perspective) outcome of the game.

Action Games
Randomness works a little differently in action-based video games (like typical First-Person Shooter games), where the players are using skill in their movement and aiming to shoot their opponents and avoid getting shot. We see these as games of skill, and in fact these games seem largely incompatible with randomness. There is enough chaos in the system without random die-rolls: if I'm shooting at a moving target, I'll either hit or miss, based on a number of factors that are difficult to have complete control of. You're running, they're running, the game is going so fast. If you squeeze off a few shots and miss, you figure you must have just not been as accurate as you thought (or they dodged well). Or maybe they're standing still, you sneak up behind them, and you're sure you have the perfect shot and still miss.

Now, suppose instead the designer thought they'd be clever and add a small perturbation on bullet fire from a cheap pistol, to make it less accurate. How would the player react to that? Well, if the game is going fast, they might not notice. But suppose you line up someone in your sights, pull the trigger… and miss, because the game decided to randomly make the bullet fly too far to the left. You'll feel like the game just robbed you. That's not fun, and it doesn't make the game more interesting; it just makes you feel like you're being punished arbitrarily for being a good shot.

Shifting Between Luck and Skill
As we just saw, there is no single right answer here for all games. Recognize that a certain mix of luck and skill is best for your individual game, and part of your job as a game designer is to listen to your game to find out where it needs to be. Sometimes that means adding more randomness, sometimes it means removing it, and sometimes it means keeping it there but changing the nature of the randomness. The tools we gained from last week should give you enough skills to be able to assess the nature of randomness in your game, and make appropriate changes.

What is the "best" mix of luck and skill for any given game? That depends mostly on your target audience. Very young children may not be able to handle lots of choices and risk/reward or short/long term tradeoffs, but they can handle throwing a die or spinning a spinner and following directions. Casual gamers may see the game as a social experience more than a chance to exercise the strategic parts of their brains, so they actually prefer to think less and make fewer decisions so they can devote more of their limited brainpower to chatting with their friends. Competitive adults and core gamers often prefer games that skew more to the skill side of things so they can exercise their dominance, but (except in extreme cases) not so much that they feel like they can't ever win against a stronger opponent.

With slower, more strategic games, adding luck (or skill) is more straightforward. To increase the level of luck in the game:
• Change some player decisions into random events.
• Increase the impact that random events have on the game state.
• Increase the range of randomness, such as changing a 1d6 roll to a 1d20 roll.
• Reduce the number of random events in the game (this way the Law of Large Numbers doesn't kick in as much, thus the randomness is less likely to be evenly distributed).
• If you want to increase the level of skill in the game, do the reverse of any or all of the above.

With action games, adding more of a luck-factor to the game is tricky but still possible. Does that mean luck plays no role in action games? I think you can increase the luck factor to even the playing field, but you have to be very careful about how you do it. Here's a common example of how a lot of FPSs increase the amount of luck in the game: headshots. The idea here is that if you shoot someone in the head rather than the rest of the body, it's an instant kill. Now, you might be thinking… wait, isn't that a skill thing? You're being rewarded for accuracy, hitting a small target and getting bonus damage because you're just that good… right? In some cases that is true, but in a lot of games (especially older ones) that kind of accuracy just really isn't possible in most situations. The head has a tiny hit box, and the gun doesn't let you zoom in enough at a distance to be really sure if you aren't off by one or two pixels in any direction… so from a distance, a head shot isn't something that most players can plan on. Sometimes it'll happen anyway, though, just by accident if you're shooting in the right general direction… so sometimes, if it takes many shots to score a frag, the weaker player will occasionally just get an automatic kill by accident. This evens the playing field slightly, by creating these unlikely (but still possible to perform by accident) events: it makes it more likely that the weaker player will see some successes, which is usually what you want as a designer. Without headshots, the more skilled players will almost always win, because they are better at dodging, circle-strafing, and any other number of techniques that allow them to outmaneuver and outplay a weaker player.

And Now… Human Psychology
As we saw last week, human intuition is generally terrible when it comes to odds estimation.

Let's dig into these errors of thought, and their implications to gamers and game designers.

Selection Bias
When asked to do an intuitive odds estimation, where does our intuition come from? The first heuristic most people use is to check their memory recall: how easy is it to recall different events? The easier it is to remember many examples, the more we assume that event is likely or probable. This usually gives pretty good results; for example, if you're rolling a weighted die a few hundred times and seem to remember the number 4 coming up more often, you'll probably have a decent intuition for how often it actually comes up. As you might guess, though, this kind of intuition will fail whenever it's easier to recall a rare event than a common one.

Why would it be easier to recall rare events than common events? For one thing, rare events that are sufficiently powerful tend to stick in our minds (I bet you can remember exactly where you were when the planes struck the towers). In general, we tend to remember our epic wins much more easily than we remember our humiliating losses (another trick our brains play on us just to make life more bearable). We also sometimes see rare events happen more often than common ones due to media coverage. There are a few reasons for this, but one is that any time a plane crashes anywhere, it's international news; car crashes, by contrast, are so common that they're not reported… so it is much easier to remember a lot of plane crashes than a lot of car crashes. That's why many people are more afraid of dying in a plane crash than dying in an auto accident, even though an auto fatality is far more likely. Another example is the lottery: lotto winners are highly publicized while the millions of losers are not, leading us to assume that winning is more probable than it actually is. Many people also have a flawed understanding of how probability works, as we saw last week with the gambler's fallacy (expecting that previous independent events like die-rolls have the power to influence future ones). You might have noticed this with Dragon Die or Chuck-a-Luck: both games seem to give better odds of winning than they actually do.

What does any of this have to do with games? For one thing, we want a player to succeed a certain percentage of the time, and tune the difficulty of our games accordingly. People tend to assume they're above average at most things, so absent of actual hard statistics, players will tend to overestimate their own win percentage / skill in games where players can set their difficulty level or choose their opponents. This is dangerous: if a player chooses a difficulty that's too hard for them, they'll struggle a bit more and be more likely to give up in frustration. By being aware of this tendency, we can try (for example) to force players into a good match for their actual skills – through automated matchmaking, dynamic difficulty adjustment, or other tricks.

Self-Serving Bias
There's a certain point where an event is unlikely but still possible, where players will assume it is much more likely than it actually is. In Sid Meier's GDC keynote this year, he placed this from experience at somewhere around 3:1 or 4:1… that is, if the player had a 75 to 80% chance of winning or greater, his playtesters expected to win nearly all of the time – I'd guess around 95% of the time. So if the screen displayed a 75% or 80% chance and they did win exactly that percentage of the time, it would feel wrong to them, intuitively, like they were losing more than they should. And interestingly, players are perfectly okay winning a quarter of the time if they are at a 1:3 disadvantage: so while players are not okay with losing a quarter of the time when they have a 75% win advantage, they are fine with the same loss rate when the odds are against them.

Attribution Bias
In general, players are much more likely to accept a random reward than a random setback or punishment; they interpret these random events very differently. With a random reward, players have a tendency to internalize the event.

They believe that they earned the reward through superior decision-making and strategy in play. Sure, maybe it was a lucky die roll, but they were the ones who chose to make the choices that led to the die roll, and their calculated risk paid off – so clearly this was a good decision on their part. With a random setback, players tend to externalize the event: they blame the dice or cards; they say they were just unlucky. Basically, people handle losing very differently from winning. If it happens too much, they might go so far as to say that they don't like the game because it's unfair, and if they're emotionally invested enough in the game, they might even accuse other players of cheating! With video games, the logic and random-number generation is hidden, so we see some even stranger player behavior: some players will actually believe that the AI is peeking at the game data or altering the numbers behind their back and cheating on purpose – because, after all, it's the computer, so it can theoretically do that.

Anchoring
Another way people get odds wrong is this phenomenon called anchoring. The idea is, whatever the first number is that people see, they latch onto that and overvalue it. For example, if you go to a casino and look at any random slot machine, probably the biggest, most attention-grabbing thing on there is the number of coins you can win with a jackpot. Because people look at that and concentrate on it, it gives them the idea that their chance of winning is much bigger than it actually is.

Here's another example, also from Sid's keynote, to show you why as game designers we need to be keenly aware of this. Playtesters – the same ones that were perfectly happy losing a third of the time when they had a 2:1 advantage, just like they were supposed to – would feel the game was unfair if they lost a third of the time when they had a 20:10 advantage. Why? Because the first number they see is that 20, which seems like a big number, so they feel like they've got a lot of power there… and it feels a lot bigger than 10, so they feel like they should have this overwhelming advantage. (Naturally, the same thing happens in reverse if they have a 10:20 disadvantage.) It also means that a player who has, say, a small amount of base damage and then a bunch of bonuses may underestimate how much they do.

The Gambler's Fallacy
Now we return to the gambler's fallacy, in games and in life. Part of the problem is that people expect random numbers to look random: they don't expect unlikely events to happen multiple times in a row, even though by the laws of probability they should see such streaks sometimes, and long streaks make people nervous and make them question whether the numbers are actually random. Sid Meier mentioned a curious aspect of that during his keynote. Remember how players feel like a 3:1 advantage means they'll almost always win, but they're okay with losing a 2:1 contest about a third of the time? It turns out that if they lose two 2:1 contests in a row, this will feel wrong to a lot of people.

One statistic I found from the literature is that if you ask a person to generate a "random" list of coin flips from their head, they tend to not be very random. Specifically, if a person's previous item was Heads, they have a 60% chance of picking Tails for the next flip, and vice versa (this is assuming you're simply asking them to say "Heads" or "Tails" from their head when they are instructed to come up with a "random" result, not when they're actually flipping a real coin). In a string of merely 10 coin flips, it is actually pretty likely you'll get 4 Heads or 4 Tails in a row (since the probability of 4 in a row at any given spot is 1 in 8, and there are several chances for it to happen), but if you ask someone to give you ten "random" numbers that are either 0 or 1 from their head, they will probably not give you even 3 in a row.
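To see what that 60% switching tendency does to streaks, here is a little Python experiment of my own (not from the original post) comparing a fair coin against a simulated "human" who switches sides 60% of the time:

```python
import random

def fair_flips(n=10):
    """n genuinely fair coin flips (0 or 1)."""
    return [random.randint(0, 1) for _ in range(n)]

def humanlike_flips(n=10, p_switch=0.6):
    """A person 'randomly' saying Heads/Tails, switching 60% of the time."""
    seq = [random.randint(0, 1)]
    for _ in range(n - 1):
        prev = seq[-1]
        seq.append(1 - prev if random.random() < p_switch else prev)
    return seq

def max_run(seq):
    """Length of the longest run of identical values."""
    best = run = 1
    for a, b in zip(seq, seq[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

random.seed(0)
trials = 100_000
fair = sum(max_run(fair_flips()) >= 4 for _ in range(trials)) / trials
human = sum(max_run(humanlike_flips()) >= 4 for _ in range(trials)) / trials
print(f"P(run of 4+ in 10 flips): fair coin {fair:.3f}, 'human' flips {human:.3f}")
```

For a real coin, a streak of four or more in ten flips is close to a coin-flip itself; the switch-happy "human" sequence produces such streaks far less often — which is exactly why genuinely fair randomness tends to feel streaky and suspicious.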
This can lead players astray in less obvious ways. Suppose you design a video game, such as a high-stakes gambling game, that involves a series of fair coin flips as part of its core mechanics (maybe you use this to determine who goes first each turn, or something).

Probability tells you that 1 out of every 32 plays, the first six coin flips will be exactly the same result. If the coins come up in the player's favor, they won't complain… but when regression to the mean kicks in and they start losing half the time like they're supposed to, they'll start to feel like the game is cheating, and it will take them awhile to un-learn their (incorrect) first impression that they are more likely to win a flip. Worse, if the player sees six losses in a row, right from the beginning, they may perceive this event as so unlikely that the "random" number generator in the game must be busted somehow. If a player sees this as their first introduction to your game, you can bet that player is probably going to think your game is unfair. To see how much of a problem this can potentially become, suppose your game is a modest hit that sells 3.2 Million units. In that case, one hundred thousand players are going to experience a 6-long streak as their first experience on their first game, even if each game is truly an independent random event. That's a lot of players that are going to think your game is unfair!
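That arithmetic is trivial to verify mechanically (using the 3.2-million-unit figure from above):

```python
from itertools import product

# All 2**6 equally likely sequences of six fair coin flips:
streaks = [seq for seq in product("HT", repeat=6) if len(set(seq)) == 1]
p = len(streaks) / 2 ** 6
print(p)                    # 2/64 = 1/32 = 0.03125

units_sold = 3_200_000
print(int(units_sold * p))  # 100000 players whose first six flips all match
```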
If a player sees this as their first introduction to your game. so they looked at actual statistics from a bunch of games to see if previous baskets carried any predictive value for future performance. you can bet that player is probably going to think your game is unfair. Who says they’re completely independent events? Psychology plays a role in sports performance. their first reaction was that each shot is an independent event. People assume that long streaks do not appear random.2 Million units. making it more likely they’ll continue to perform well. they may perceive this event as so unlikely that the “random” number generator in the game must be busted somehow. If a player made several baskets in a row. The Hot-Hand Fallacy There’s a variant of the Gambler’s Fallacy that mostly applies to sports and other action games. even if each game is truly an independent random event. the first six coin flips will be exactly the same result. and that causes them to play better. The Hot-Hand fallacy is so called because in the sport of Basketball fans started getting this idea that if a player made two or three baskets in a row. Maybe the previous baskets are a sign that the player is hyper-focused on the game and in a really solid flow state. we know that a win streak is anomalous. suppose your game is a modest hit that sells 3. so you probably won’t lose (the worst you can do is draw). Not so fast. one hundred thousand players are going to experience a 6-long streak as their first experience on their first game. To see how much of a problem this can potentially become. That’s a lot of players that are going to think your game is unfair! The Gambler’s Fallacy is something we can exploit as gamers. said Basketball fans. Maybe the crowd’s cheering broke the player’s flow state. Maybe the player got tired. such as achievements.) When probability theorists looked at this. Since you know your opponent is unlikely to repeat their last throw. 
They expected that a player would be exactly as likely to make a basket. (We even see this in sports games like NBA Jam. Maybe the fans cheering them on gives them a little extra mental energy. In that case. trophies. or 4 of 7. because your opponent probably won’t do the same thing twice. on subsequent rounds you should throw whatever would have lost to your opponent’s last throw. they won’t complain… but when regression to the mean kicks in and they start losing half the time like they’re supposed to. or maybe the player gets overconfident and starts taking more unnecessary risks.
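The arithmetic behind the "1 in 32" and "one hundred thousand players" figures above is worth spelling out. A quick sanity check in Python (the 3.2 million sales number is the example from the text):

```python
# Probability the first six fair coin flips are all the same result:
# the first flip can be anything, and each of the next five must match it.
p_six_in_a_row = (1 / 2) ** 5        # = 1/32, about 3.1% of first games

units_sold = 3_200_000
first_impression_streaks = units_sold * p_six_in_a_row   # 100,000 players

# Only about half of those streaks are LOSING streaks, but even 50,000
# players whose first six flips all go against them is a lot of bad
# first impressions.
losing_streaks = first_impression_streaks / 2
```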

In that case, the player is probably erroneously thinking that their skills have greatly improved, because they know they can do better. Then, when the streak inevitably comes to an end and they start losing again, they'll feel frustrated. It's as if the designer has deliberately built a system that automatically punishes the player after every reward, and the whole thing is tainted in the player's mind.

Houston, We Have a Problem…
To sum up, here are the problems we face as designers when players encounter our random systems:
• Selection bias: improbable but memorable events are perceived as more likely than they actually are.
• Self-serving bias: an "unlikely loss" is interpreted as a "nearly impossible loss" when the odds are in the player's favor. However, an "unlikely win" is still correctly interpreted as an "unlikely but possible win" when the odds are against the player.
• Attribution bias: a positive random result is assumed to be due to player skill, while a negative random result is assumed to be bad luck (or worse, cheating).
• Anchoring: players overvalue the first or biggest number seen.
• Gambler's fallacy: assuming that a string of identical results reduces the chance that the string will continue.
• Hot-hand fallacy: assuming that a string of identical results increases the chance that the string will continue.

The lesson here is that even if you expose the actual probabilities of the game to your players, and your game produces fair random numbers, players will complain: according to them and their flawed understanding of probability, the game feels wrong. So, what do we do about this? We can complain to each other about how all our stupid players are bad at math, and hope the problem goes away (because the events are so unlikely that they probably won't happen again anyway). Or: is there anything we can do to take advantage of this knowledge, that will let us make better games?

When Designers Turn Evil
One way to react to this knowledge is to exploit it, in order to extract large sums of money from people. Game designers who turn to the Dark Side of the Force tend to go into the gambling industry, marketing and advertising, or political strategy. (I say this with apologies to any honest designers who happen to work in these industries.)

Gambling
Lotteries and casinos regularly take advantage of selection bias by publicizing their winners, making it seem to people like winning is more likely than it really is. Another thing they can do, if they're more dishonest, is to rig their machines to give a close-but-not-quite result more often than would be predicted by random chance: two Bars and then a blank coming up on a slot machine, say, or four out of five cards for a royal flush coming up in a video poker game. These give the players a false impression that they're closer to winning more often than they actually are, increasing their excitement and anticipation of hitting a jackpot and making it more likely they'll continue to play.

Marketing and Advertising
Marketers use the principle of anchoring all the time to change our expectations of price. For example, your local grocery store probably puts big discount stickers all over the place to call your attention to the lower prices on select items, and our brains will assume that the other items around them are also less expensive by comparison… even if they're actually not. Another example of anchoring is a car dealership that might put two nearly identical models next to each other, one with a really big sticker price and one with a smaller (but still big) price. Shoppers see the first big price and anchor to that; then they see the smaller price and feel like, by comparison, they're getting a really good deal… even though they're actually getting ripped off.

Political Strategy
There are a ton of tricks politicians can use to win votes. A common one these days is to play up people's fears of vastly unlikely (but well-publicized) events like terrorism or hurricanes, and make campaign promises that they'll protect you and keep your family safe.

Scam Artists
The really evil game designers use their knowledge of psychology to do things that are highly effective… and highly illegal. One scam I've heard of involves writing to a large number of people, offering your "investment advice" and telling them to watch a certain penny stock between one day and the next. Half of the letters predict the stock will go up; the other half say it'll go down. Odds are, whatever actually happens, for half of your marks you were right. You take that half where you guessed right, and make another prediction. Repeat that four or five times, and eventually you get down to a handful of people for whom you've predicted every single thing right. And those people figure there's no way that could just be random chance – you must have a working system – so they give you tons of money, and you skip town and move to Fiji or something. And then they hire a hit man to kill you in your sleep, because you deserve it.

What About Good Game Designers?
All of that is fine and good for those of us who want to be scam artists (or for those of us who don't want to get scammed). But what about those of us who still want to, you know, make games for entertainment? We must remember that we're crafting a player experience. If we want that experience to be a positive one, we need to take into account that our players will not intuitively understand the probabilities expressed in our game, and modify our designs accordingly.

Skewing the Odds
One way to do this is to tell our players one thing, and actually do something else. If we tell the player they have a 75% chance of winning, under the hood we can actually roll it as if it were a 95% chance. And if the player gets a failure, we can make the next failure less likely, cumulatively; this makes long streaks of losses unlikely.

Countering the Hot Hand
To counter the hot-hand problem (where a streak of wins makes it more likely that a player will screw up), one thing to do is to downplay the importance of "streaks" in our games, so that the players don't notice when they're on a streak in the first place (and therefore won't notice, or feel as bad, when the streak ends). If we do include a streak mechanism, one thing we can do is embed it in a positive feedback loop, giving the player a gameplay advantage to counteract the greater chance of a miss after a string of hits. For example, in Modern Warfare 2, players get certain bonuses if they continue a kill streak (killing multiple enemies without being killed themselves), including better weapons, air support, and even a nuclear strike. With each bonus, it's more likely their streak will continue, because they are now more powerful.

Random Events
We can also use random events, but with great care – especially those with major game-changing effects, and especially those that are not in the player's favor. In general, we can avoid hosing the player to a great degree from a single random event. Being very clear about why the bad event happened (and what could be done to prevent it in future plays) also helps to keep the player feeling in control; otherwise, the player may think that they did something wrong (and unsuccessfully try to figure out what), or they may feel annoyed that their strategy was torn down by a single bad die-roll and not want to play anymore.
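Both the "skewed odds" and the "softer failures after a failure" ideas fit in a few lines. A sketch only: the specific numbers here (the 75%-displayed-rolls-at-95% mapping and a +15% bonus per consecutive failure) are illustrative values of my own, not from any shipped game:

```python
import random

DISPLAYED_TO_ACTUAL = {0.75: 0.95}   # what we tell the player vs. what we roll
PITY_BONUS = 0.15                    # success chance added per failure in a row

class SkewedRoller:
    def __init__(self, rng=None):
        self.rng = rng or random.Random()
        self.failures_in_a_row = 0

    def roll(self, displayed_chance):
        # Secretly inflate the odds, then add a cumulative bonus after each
        # failure so that long losing streaks become very unlikely.
        actual = DISPLAYED_TO_ACTUAL.get(displayed_chance, displayed_chance)
        actual = min(1.0, actual + PITY_BONUS * self.failures_in_a_row)
        success = self.rng.random() < actual
        self.failures_in_a_row = 0 if success else self.failures_in_a_row + 1
        return success
```

With these illustrative numbers, a roll displayed as "75%" really succeeds at least 95% of the time, and can never fail twice in a row (after one failure, the real chance clamps to 100%).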

If we understand the nature of these flaws. but the actual results as well. and that’s to expose not just the stated probabilities to the player. this doesn’t sit well with everyone.streak (killing multiple enemies without being killed themselves). straight pieces until just after you need it. we teach our players through the game’s designed systems. are we not doing a disservice to our players? Are we not taking something that we already know is wrong and teaching it to players as if it were right? One objection might be that if players aren’t having fun (regardless of whether that comes from our poor design. Post-GDC. then the designer must design in favor of the player – especially if they are beholden to a developer and publisher that expect to see big sales. you could look over there and see if the game was really screwing you. and it could prove it with cold. A Question of Professional Ethics Now. we can change our game’s behavior to conform to player expectations. but over time it would balance out. or if it was just your imagination. or their poor understanding of math). based on their personal values and the specific games they are designing. hard facts displayed to the player in real time. for example. and something each individual designer must decide for themselves. we know that players have a flawed understanding of probability. occasionally you might get a little more of one piece than another on average over the course of a single level. if you ask players to estimate their win percentage of a game when it’s not tracked by the game. If we bend the rules of probability in our games to reinforce players’ flawed understanding of how probability works. and even a nuclear strike. and most of the time you’d get about as much of each brick as any other. air support. it’s more likely their streak will continue because they are now more powerful. or how players perceive it as working? 
One More Solution I can offer one other solution that works in some specific situations. With each bonus. So if you felt like you were getting screwed. The game was fair. including better weapons. Summary In short. versus giving them an accurate representation of reality? • What’s more important: how the game actually works. This will make the game feel more fun and more fair to the players. they’ll probably estimate higher than the real value. If their wins. Take a moment to think about these questions yourself: • Are the probabilities in our games something that we can frame as a question of professional ethics? • How do we balance the importance of giving the player enjoyment (especially when they are paying for it). they have a much more accurate view of their actual skill. is incredibly popular and profitable… even though it will mercilessly punish any player that dares harbor any flaws in how they think about probability. losses and percentage are displayed to them every time they go to start a game. right? My favorite version of Tetris to this day. And if you kept an eye on those over time. To this day. the arcade version. isn’t that dishonest? As game designers. This was. I think it is still an open question. For example. you could let the player know after each hand if they actually held the winning hand or not. How could this work in other games? In a Poker video game against AI opponents. You never seem to get one of those long. and it used the right half to keep track of how many of each type of brick fell down. one of the big takeaways from Sid Meier’s GDC keynote this year. What if you have a random number generator? See if this sounds familiar to you: you’re playing a game of Tetris and you become convinced at some point that the game is out to get you. had a neat solution to this: in a single-player game. But is this necessarily valid? Poker. you’d see that yes. your game took up the left half of the screen. essentially. and keep ongoing . 
there were at least a few people that said: wait.
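The arcade-Tetris idea amounts to a running tally displayed next to the playfield. A sketch (the class, the display format, and the seven-piece list are my assumptions; the piece names are the standard tetrominoes):

```python
import random
from collections import Counter

PIECES = ["I", "O", "T", "S", "Z", "J", "L"]   # the seven standard tetrominoes

class PieceTally:
    """What the 'right half of the screen' tracks: every piece ever dealt."""
    def __init__(self):
        self.counts = Counter()

    def record(self, piece):
        self.counts[piece] += 1

    def display(self):
        # One line per piece, so a suspicious player can check for themselves.
        return [f"{p}: {self.counts[p]}" for p in PIECES]

# Simulate a session: deal 700 pieces and let the tally accumulate.
tally = PieceTally()
rng = random.Random(7)
for _ in range(700):
    tally.record(rng.choice(PIECES))
```

The same pattern works for the Poker suggestion: record wins and losses per hand, and render the totals on the game-start screen.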

When Randomness Isn't Random
This brings us to the distinction between numbers that are random and numbers that are pseudorandom. If you think about it, most of the things we use for randomness, even in physical games, are not perfectly random.

Dice, for example. A lot of 6-sided dice have curved edges, and if those curves are slightly different from face to face, the die will be slightly more likely to keep rolling when it hits certain faces, making it slightly less likely to roll the numbers on those edges. 6-sided dice with recessed dots may be very slightly weighted towards one side or another, since they've actually got matter that's missing from them, so their center of gravity is a little off. 20-sided dice can be slightly oblong rather than perfectly round (due to how they're manufactured), and thus a little more or less likely to land on certain numbers. Balls with numbers painted on them in a hopper that's used for Bingo might be slightly more or less likely to come up, because the paint gives them slightly different weights. And all this is without considering dice that are deliberately loaded, or the person throwing the dice having practiced how to throw what they want!

This is why you'll notice that casino dice are a lot different from those white-colored, black-dotted d6s you have at home. In Las Vegas, they have to be very careful about these things. One slightly unfair die can cost the casino its gambling license (or cost them money, from sufficiently skilled players who know how to throw dice unfairly), which is why casinos want to be very sure that their dice are as close to perfectly random as possible; the industry is worth billions of dollars just from its honest, long-term House advantage.

Shuffling cards is another thing casinos have to be careful of. If the cards are shuffled manually, you can run into a few problems with randomness (aside from complaints of carpal tunnel from your dealers). All kinds of studies have shown that the way we shuffle a deck of cards in a typical shuffle is not random; the point is that all shuffles are not equally likely. Even an honest shuffle isn't perfectly random: depending on how you shuffle, either the top or the bottom card will probably stay in the same position after a single riffle-shuffle, so you have to shuffle a certain number of times before the deck is sufficiently randomized. In fact, if you shuffle a certain way for a specific number of times (I forget the exact number, but it's not very large), the deck will return almost exactly to its original state – card magicians use this to make it look like they're shuffling a deck when they're actually stacking it. Even without stacking the deck deliberately, a dealer who doesn't shuffle enough to sufficiently randomize the deck – because they're trying to deal more hands, say, so they get lazy with the shuffling – can be exploited by a player who knows there's a better-than-average chance of some cards following others in sequence. And that's not even counting dealers who collude with players to do this intentionally, giving themselves an advantage.

Casinos have largely moved to automated shufflers to deal with these problems, but those have problems of their own, and those problems can cost the casinos a lot of money if they're not careful. Mechanical shuffles are potentially less random than human shuffles, so a careful gambler can use a hidden camera to analyze the machine, figure out which cards are more likely to clump together from a fresh deck, and use that to gain an unfair advantage. These days, some of the latest automated shufflers don't riffle-shuffle at all: they actually stack the deck according to a randomized computer algorithm, so that the casino knows the deck shuffling is fair… but as we'll see shortly, even those algorithms can have problems.

What does this mean for the rest of us? If you're making a version of the board game RISK that has simulated die rolls in it, have the game keep track of how frequently each number or combination is rolled, and let the player access those statistics at any time. These kinds of things are surprisingly reassuring to a player, who can never know for sure if the randomness inside the computer is fair or not.
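The RISK suggestion above is a few lines of bookkeeping. A sketch (the class and method names are mine):

```python
import random
from collections import Counter

class TrackedDie:
    """A simulated die that logs every roll so players can audit it."""
    def __init__(self, sides=6, rng=None):
        self.sides = sides
        self.rng = rng or random.Random()
        self.freq = Counter()

    def roll(self):
        result = self.rng.randint(1, self.sides)
        self.freq[result] += 1
        return result

    def stats(self):
        # The player-facing statistics screen: count and share per face.
        total = sum(self.freq.values()) or 1
        return {face: (self.freq[face], self.freq[face] / total)
                for face in range(1, self.sides + 1)}

# Simulate a long session so the frequencies have something to show.
die = TrackedDie(rng=random.Random(3))
for _ in range(6000):
    die.roll()
```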

As for the physical stuff, there's not necessarily a lot we can do about it, at least not without going to great expense to get super-high-quality game components, which most of us aren't willing to do; paying through the nose for machine-shopped dice is a bit much for most of us that just want a casual game of Catan. So as gamers we have to accept that our games aren't always completely random and fair… but they're close enough, and since these small imperfections affect all players equally, we can usually disregard them. The point here is that even the events in physical games that we think of as "random" aren't always as random as we give them credit for.

Pseudorandom Numbers
Computers have similar problems. Remember, a computer doesn't have any kind of randomness inside it. It's all ones and zeros: high and low voltage going through wires, or being stored electromagnetically in memory or on a disk somewhere. It's completely deterministic. And unless you're willing to get some kind of special hardware that measures some kind of varying physical phenomenon – like a Geiger counter tracking the directions of released radiation, or something – your computer is pretty much stuck with this problem, for better or worse: you have to use a deterministic machine to play a non-deterministic game of chance.

This is where pseudorandom numbers come in. Pseudorandom literally means "fake random," and in this case it means the numbers are not actually random, they just look that way. We do this through a little bit of mathematics that I won't cover here (you can do a Google search for pseudorandom number algorithms on your own if you care). All you need to know is that there are some math functions that behave very erratically, without an apparent pattern, so you just take one of the results from such a function and call it a random number.

How do you know which result to take from the function? Well, you determine that randomly. Just kidding – you can't really do that. So instead, you have to tell the computer which one to take, and then it'll start with that one; the next time you need a random number, it'll take the next one in sequence, and so on. The number you tell the computer to start with is called a random number seed, and you only have to seed it once: once you give the computer one seed, it just starts picking random numbers with its formula, sequentially from there. But – and this is important – if you give it the same seed, you'll get exactly the same results! Because we told it where to start, this is no longer actually random, even though it might look that way to a casual player; it's deterministic.

Usually we get around this by picking a random number seed that's hard for a player to intentionally replicate, like the number of milliseconds that have elapsed since midnight or something. You have to choose carefully, though. If, for example, you pick a random number seed that's just the fractional milliseconds in the system clock (from 0 to 999), then really your game only has 1000 ways of "shuffling" – which is enough that, over repeated play, a player might see two games that seem suspiciously identical. Worse, a sufficiently determined player could study your game and reach a point where they could predict which random numbers happen, and when. So we have to be careful when creating these systems.

Pseudorandomness in Online Games: Keeping Clients in Sync
You have to be extra careful with random numbers in an online game, especially if your game is meant to be played competitively and your players' machines are generating their own random numbers. As I mentioned in the introduction, I've worked on games at two companies that were architected that way. What could happen was this: you would, of course, have one player (or the server) generate a random number seed, and that seed would be used for both players in a head-to-head game. Then, when either player needed a random number, both machines would have to generate that number, so that their random number generators were kept in sync. Occasionally, due to a bug, one player might generate a random number and forget to inform their opponent… and now their random number generators are out of sync. The game continues for a few turns until suddenly, one player takes an action that requires a random element: their machine rolls a success, while the opponent's machine (with a different random number) rolls a failure. The two clients compare checksums and fail, because they now have a different game state, and both players become convinced the other guy is trying to cheat.

(A better way to do this for PC games is to put all the game logic on the server and use thin clients. For networked handheld console or phone games, where there is no server and it's direct-connect, designate one player's device to handle all these things and broadcast the game state to the other players.)

so they’re not really cheating either. for networked handheld console or phone games where there is no server and it’s direct-connect. Is this a good compromise? Oh. I hope you gave the player a map and told them exactly how far apart they can save . because it is convenient for the player. Save Anywhere. you’ve now created a new problem: after the player saves. the honest players now complain that your save system won’t let them walk away when they want. you’ll get the same result every time! First. (A better way to do this for PC games is to put all the game logic on the server and use thin clients. and keep trying until they find a combination of actions that works. And they hire a hit man to kill you in your sleep because you deserve it. but in reality they have to redo too much to be able to fully optimize every last action. let’s limit where the player can save. that doesn’t eliminate the problem. for example. but it erases their save when they start. and now they have to start the entire game over from scratch. so that they’ll have to go through some nontrivial amount of gameplay between saves. Now they can theoretically exploit the save system. they will eventually succeed. so you say. This seems to work… until the power goes out just as an honest player reaches the final boss.) Pseudorandomness in Single-Player Games: Saving and Loading You also have to be careful with pseudorandom numbers in a single-player game. by the way. The original Tomb Raider did this. and keep reloading from save until they succeed. Save Points So you say. Many games do this. they know exactly what the enemy AI will do on every turn. or maybe choosing their combat actions in a different order or something.compare checksums and fail because they now have a different game state. Save Anywhere. maybe something where they’re highly unlikely to succeed but there’s a big payoff if they do. and they’re not really playing the game that you designed at that point… but on the other hand. 
they’re using the systems you designed. it just makes it a little harder. Save Anywhere Suppose you have a game where the player can save anywhere. if you try to save and reload. This is a largely unsolved problem in game design. This allows some exploits. but you can at least pick your poison. However. You can’t really win here. Your carefully balanced probabilities suddenly become unbalanced when a player can keep rerolling until they win. Limited Times You give the player the ability to save anywhere. that the game holds them hostage between save points. and both players become convinced the other guy is trying to cheat. the player just has to find one other random thing to do. nothing stops the player from saving just before they have to make a big random roll. Oops. but limit the total number of saves. like maybe drinking a potion that restores a random number of HP. Saved Seed Okay. any time. If you regenerate your random number seed each time they reload from save. Quicksave Maybe you think to try a save system where the player can save any time. evil designer. because of the potential for exploits. okay. Second. but at least not on every last die-roll. so they can’t do the old save/reload trick. let’s fix that: what if we save the random number seed in the saved game file? Then. designate one player’s device to handle all these things and broadcast the game state to the other players. And then we run into a problem with the opposite type of player: while this mechanism quells the cheating. because once you start with the same random number seed the game now becomes fully deterministic! Sometimes this foreknowledge of exactly how an enemy will act in advance is even more powerful than being able to indefinitely reroll. you evil.

Pick Your Poison
As I said, finding the perfect save system is one of those general unsolved problems in game design, just from the perspective of what kind of system is the most fun and enjoyable for the player – and that's just for deterministic games! When you add pseudorandom numbers, you can see how the problem gets much thornier. Even the "fixes" backfire: suppose you drew some BIG ARROWS on the map pointing to places where the big battles are going to happen, so the player knows to save beforehand. Then your players will complain that the game is too easy, because it gives them all this information about where the challenge is. The point is, this is something you should be thinking about as a designer while designing the load/save system… because if you don't, then it'll be left to some programmer to figure out (God help you), and it'll probably be based on whatever's easiest to code and not what's best for the game or the player.

Are Your Pseudorandom Numbers Pseudorandom Enough?
There is also, of course, the question of whether your pseudorandom number generator function itself actually produces numbers that are pretty random – that is, whether it produces all values with equal frequency, or whether some numbers are actually more or less likely than others, either due to rounding error or just a poor algorithm. A simple test is to use your generator to generate a few thousand pairs of random coordinates on a 2D graph, and plot that graph to see if there are any noticeable patterns (like the numbers showing up in a lattice pattern, or with noticeable clusters of results or empty spaces). You can expect to see some clustering as a baseline – this is good; as we learned, that's how random numbers work – but if you repeat the experiment a few times, you should see clusters in different areas each time. This is a way of using Monte Carlo simulation to do a quick visual test of whether your pseudorandom numbers are actually random. (There are other, more mathematical ways to calculate the exact level of randomness of your generator, but that requires actual math; this is an easier, quick-and-dirty "good enough" test for game design purposes.)

When Pseudorandom Numbers Fail
Even if you choose a good random number seed, and even if you ignore player exploits and bugs, there are other ways that randomness can go wrong if you choose poor algorithms. For example, suppose you have a deck of cards and want to shuffle it. Here's a naïve algorithm that most budding game programmers have envisioned at some point:
• Start with an unshuffled deck.
• Generate a pseudorandom number that corresponds to a card in the deck (so if the deck is 52 cards, generate a whole number between 1 and 52… or 0 and 51, depending on what language you're using). Call this number A.
• Generate a second pseudorandom number, the same way. Call this number B.
• Swap the cards in positions A and B in the deck.
• Repeat steps 2-4 lots and lots of times.

The problem here is, first off, that it takes an obnoxiously long time to get anything resembling a random shuffle, because the card positions start out fixed and you're swapping random pairs one at a time. Second, and worse: no matter how many times you repeat this, there is always a slightly greater chance that you'll find each card in its original position, on average, than anywhere else. Think of it this way: if a card is ever swapped, it'll swap to a random position – you're going from one random place to another, so any card that's swapped is equally likely to end up anywhere. Swapping multiple times makes no difference; you still end up equally likely to be in any position, so all positions are equally likely for any card that has been swapped at all. However, there is some non-zero chance that a card will not be swapped at all, in which case it will remain in its original position – so it is that much more likely to stay where it is. The more times you perform swaps, the lower the chance that a card will remain in its original position, but no matter how much you swap, you can never get that probability all the way down to zero. Ironically, this means that the single most likely "shuffle" from this algorithm is to see all cards in exactly the same position they started! So you can see that even if the pseudorandom numbers generated for this shuffle are perfectly random (or close enough), the shuffle itself isn't.
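The bias argued above is easy to see empirically. Here's a quick Monte Carlo check – a sketch, where the tiny 10-card deck and the deliberately low swap count are my choices, made to exaggerate the effect:

```python
import random

def naive_shuffle(deck, swaps, rng):
    # The naive algorithm above: swap two randomly chosen positions, repeatedly.
    deck = list(deck)
    for _ in range(swaps):
        a = rng.randrange(len(deck))   # "Call this number A"
        b = rng.randrange(len(deck))   # "Call this number B"
        deck[a], deck[b] = deck[b], deck[a]
    return deck

rng = random.Random(99)
n, trials = 10, 20000
stay_naive = stay_fair = 0
for _ in range(trials):
    if naive_shuffle(range(n), swaps=5, rng=rng)[0] == 0:
        stay_naive += 1          # card 0 still in its original position
    fair = list(range(n))
    rng.shuffle(fair)            # library shuffle as a fair baseline
    if fair[0] == 0:
        stay_fair += 1
```

With a fair shuffle, card 0 stays put about 1 time in 10; with 5 random swaps it stays put far more often (it has roughly a 0.81^5 ≈ 35% chance of never being touched at all).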

Most of the time this won't be an issue. Most programming languages and game libraries come with their own built-in pseudorandom number generation functions, and those all use established algorithms that are known to work; that's what most programmers use. But if your programmer, for some reason, feels the need to implement a custom pseudorandom number generation function, this is something you will want to test carefully!

Homework
In past weeks, I've given you something you can do right now to improve the balance of a game you're working on, and also a task you can do later for practice. This week I'm going to reverse the order: get some practice first, then apply it to your project once you're comfortable with the idea.

For this week's "homework" I'm going to go over two algorithms for shuffling cards. I'll actually give you the source code, but I'll also explain both in plain language in case you don't know how to program. In both cases, the purpose here isn't to emulate a human shuffle, it's to get a random shuffle – that is, a random ordering of cards. This is clearly different from how humans normally shuffle a deck, but that's the point.

I have seen some variant of both of these used in actual working code in shipped games before (I won't say which games, of course, to protect the innocent). In one of them, we saw perennial complaints from the player base that the deck shuffler was broken, and that there were certain "hotspots": if you placed a card in one of those positions in your deck, it was more likely to get shuffled to the top and show up in your opening hand.

What I want you to do is think about both of these algorithms logically, and figure out if they work. I'll give you a hint: one works and one doesn't – one produces a sufficiently random shuffle (for our purposes, anyway) and one actually favors certain shuffles over others. Keep in mind that this might look like a programming problem, but really it's a probability calculation: count the different ways a shuffling algorithm can shuffle, and see whether those ways line up evenly with the different permutations of cards in a deck.

Algorithm #1
The first algorithm looks like this:
• Start with an unshuffled deck.
• Choose a random card from all available cards (in a 60-card deck, choose a pseudorandom number between 1 and 60). Take the card in that position, swap it with position #60, then lock position #60 in place. Essentially, this means choosing one card randomly to put on the bottom of the "shuffled" deck.
• Now, take a random card from the remaining cards (between 1 and 59), and swap that card with position #59, putting it on top of the previous one.
• Then take another random card from the remaining ones (between 1 and 58), and swap that with position #58. And so on.
• Keep repeating this until eventually you get down to position #1, which swaps with itself (so it does nothing), and then we're done.

Algorithm #2
The second algorithm is similar, but with two minor changes:
• Start with an unshuffled deck.
• Choose a random card from all available cards, including those that have been chosen already (so, in a 60-card deck, choose a random card from 1 to 60 every time). Swap with position #60.
• Choose another random card from 1 to 60. Swap with position #59.
• Choose another random card from 1 to 60, and swap with position #58. And so on, all the way down to position #1, and you're done.
• Oh yeah – one last thing: repeat this entire process (steps 2-5) fifty times. That'll make it more random.

Hints
How do you approach this, when there are too many different shuffles in a 60-card deck to count? The answer is that you start much simpler: take a 3-card deck, and call the cards A, B and C. Look at both algorithms and figure out how many different ways there are for each algorithm to produce a shuffle. So in the first case, you're choosing from 3 cards, then choosing from 2, then choosing from 1; in the second case (taking a single pass), you're choosing from 3, then choosing from 3, then choosing from 3. Compare the list of actual possible orderings of the deck against the ways each algorithm can reach them: does every ordering come up equally often? You should be able to do this just by trial and error, writing out all the possibilities. And if it works that way for a 3-card deck, assume it's similarly random (or not) for larger decks. If you want to go through the math with larger decks, be my guest, but you shouldn't have to.

If You're Working on a Game Now…
All Games
Whether you're designing a board game or video game, take a look at the random mechanics in your game (if there are any) and ask yourself some questions:
• Is the game dominated more by skill or luck, or is it an even mix?
• Is the level of skill and luck in the game appropriate to the game's target audience?

Video Games
Once you've done that, if you're working on a game that involves computers, take a look at the pseudorandom numbers and how your program uses them. In particular, if you use a series of pseudorandom numbers to do something like deck shuffling, make sure you're doing it in a way that's actually random – that is, as random as your pseudorandom number generator is. Next, examine how your random number seed is stored in the game state. Lastly, does your save-game system work in such a way that a player can save, attempt a high-risk random roll, and keep reloading from save until it succeeds?
check that by graphing a bunch of pairs of random coordinates to check for undesirable patterns. all the different unique ways the deck can be shuffled… then compare to all the different ways a deck can be shuffled by these algorithms. Digital or Non-Digital Another thing to do. There are mathematical tricks for doing this that we haven’t discussed. Assume a deck with only three cards in it (assume these cards are all different. then choosing from 3. and if you’re using a nonstandard way to generate pseudorandom numbers. figure out how many ways there are to order a 3-card deck. In the second case you’re choosing from 3. if it’s a multiplayer game. B and C if you want). First. is it stored separately in different clients or on a single server? If it’s a single-player game. swap that card with position #1. or should the game lean a bit more to one side or the other? • What kinds of probability fallacies are your players likely to observe when they play? Can you design your game differently to change the player perception of how fair and how random your game is? Should you? . You’ll find that one of them produces a random shuffle (well. then choosing from 2. • Keep repeating this until eventually you choose a random number from 1 to 60.
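If you do know how to program, here is a minimal sketch of both shuffles in Python (the function names are mine, and I'm testing a single pass of each on a 3-card deck, so the six possible orderings can be tallied directly):

```python
import random
from collections import Counter

def shuffle_algorithm_1(deck, rng):
    # Algorithm #1: swap a random card from the REMAINING ones into each
    # position, locking positions in place from the bottom of the deck up.
    deck = list(deck)
    for pos in range(len(deck) - 1, 0, -1):
        pick = rng.randrange(pos + 1)   # only positions not yet locked
        deck[pos], deck[pick] = deck[pick], deck[pos]
    return deck

def shuffle_algorithm_2(deck, rng):
    # Algorithm #2: swap a random card from the WHOLE deck (including
    # cards already placed) into each position.
    deck = list(deck)
    n = len(deck)
    for pos in range(n - 1, -1, -1):
        pick = rng.randrange(n)         # any position, even locked ones
        deck[pos], deck[pick] = deck[pick], deck[pos]
    return deck

# Monte Carlo test on a 3-card deck: tally the six possible orderings.
rng = random.Random(12345)
trials = 60_000
tally_1 = Counter("".join(shuffle_algorithm_1("ABC", rng)) for _ in range(trials))
tally_2 = Counter("".join(shuffle_algorithm_2("ABC", rng)) for _ in range(trials))
print(sorted(tally_1.values()))  # six counts, all close to 10,000
print(sorted(tally_2.values()))  # six counts, visibly lopsided
```

Running this shows the first algorithm landing on all six orderings about equally often, while the second consistently favors some orderings over others, which is the kind of thing the graphing test above is meant to catch.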

Level 6: Situational Balance

Answers to Last Week's Question
If you want to check your answer from last week:

Analyzing Card Shuffles
For a 3-card deck, there are six distinct shuffling results. If the cards are A, B, and C, then these are: ABC, ACB, BAC, BCA, CAB, CBA. Thus, for a truly random shuffler, we would expect six outcomes (or a multiple of six), with each of these results being equally likely.

Analyzing Algorithm #1: First you choose one of three cards to go in the bottom slot (A, B, or C). Then you choose one of the two remaining cards to go in the middle (if you already chose A for the bottom, then you would choose between B and C). Finally, the remaining card is put on top (no choice involved). These are separate, independent trials, so to count them we multiply: 3x2x1 = 6. If you actually go through the steps to enumerate all six possibilities, you'll find they correspond to the six outcomes above, all equally likely. This algorithm is correct, and in fact is one of the two "standard" ways to shuffle a deck of cards. (The other algorithm is to generate a pseudorandom number for each card, then put the cards in order of their numbers. This second method is the easiest way to randomly order a list in Excel, using RAND(), RANK() and VLOOKUP().)

Analyzing Algorithm #2: First of all, what about the inner loop? First we choose one of the three cards to go on bottom, then one of the three to go in the middle, and then one of the three to go on top. As before these are separate, independent trials, so we multiply: 3x3x3 = 27. Immediately we know there must be a problem, without having to go any further at all: since 6 does not divide evenly into 27, we know that some shuffles must be more likely than others. So it would be perfectly valid to stop here and declare this algorithm "buggy." If you're sufficiently determined, you could actually trace through this algorithm all 27 times to figure out all outcomes, and show which shuffles are more or less likely and by how much. (A competitive player, upon learning the algorithm, might actually run such a simulation for a larger deck in order to gain a slight competitive advantage.) As for the outer loop: if a single shuffle is truly random, then repeating it 50 times is not going to make it any more random, so this is just a waste of computing resources. And if the shuffle isn't random, then repeating may or may not make it any better than before; you'd do better to fix the underlying algorithm rather than covering it up.

This Week
This is a special week. We spent two weeks near the start of the course talking about balancing transitive games, and then two more weeks talking about probability. This week we're going to tie the two together and put a nice big bow on it. This week is about situational balancing.

What is situational balancing? What I mean is that sometimes we have things that are transitive, but their value changes over time or depends on the situation. One example is area-effect damage. You would expect something that does 500 damage to multiple enemies at once is more valuable than something that does 500 damage just to a single target, other things being equal. But how much more valuable is it? Well, it depends. If you're fighting fifty enemies all clustered together in a swarming mass, it's 50x more valuable. If you're only fighting a single enemy, one-on-one, it isn't any more valuable. And maybe at some points in your game you have swarms of 50 enemies, and other times you're only fighting a single lone boss. How do you balance something like that?
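Since there are only 27 paths, the trace is easy to automate. Here is a small Python sketch (the function name is mine) that enumerates every execution path of Algorithm #2 on a 3-card deck and counts where each path ends up:

```python
from collections import Counter
from itertools import product

def trace_algorithm_2(deck=("A", "B", "C")):
    # Exhaustively trace Algorithm #2 on a 3-card deck: each of the three
    # swap steps picks any of the 3 positions, giving 3*3*3 = 27 equally
    # likely execution paths. Count which final ordering each path yields.
    n = len(deck)
    outcomes = Counter()
    for picks in product(range(n), repeat=n):
        cards = list(deck)
        for pos, pick in zip(range(n - 1, -1, -1), picks):
            cards[pos], cards[pick] = cards[pick], cards[pos]
        outcomes["".join(cards)] += 1
    return outcomes

paths = trace_algorithm_2()
print(sum(paths.values()))     # 27 paths in total
print(sorted(paths.values()))  # [4, 4, 4, 5, 5, 5] -- not uniform
```

Three of the six orderings come up 5 times out of 27 and the other three come up 4 times out of 27, which quantifies exactly how lopsided the "buggy" shuffler is.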

Or, think of healing effects in most games, which are completely worthless if you're fully healed already, but which can make the difference between winning and losing if you're fighting something that's almost dead, and you're almost dead, and you need to squeeze one more action out of the deal to kill it before it kills you.

Or, consider an effect that depends on what your opponent does. For example, there's a card in Magic: the Gathering called Karma, that does 1 damage to your opponent each turn for each of their Swamps in play. Against a player who has 24 Swamps in their deck, this single card can probably kill them very dead. Against a player with no Swamps at all, the card is totally worthless. (Well, it's worthless unless you have other cards in your deck that can turn their lands into Swamps, in which case the value of Karma is dependent on your ability to combine it with other card effects that you may or may not draw.) Either way, the card's ability to do damage changes from turn to turn and game to game, so it might be balanced, or underpowered or overpowered, all depending on the context. But the card does exist, so I know it must be useful for something.

In each of these cases, finding the right cost on your cost curve depends on the situation within the game, which is why I call it situational balancing. How do we balance something that has to have a fixed cost, even though it has a benefit that changes? The short answer is: we use probability to figure out the expected value of the thing. The long answer is… it's complicated, which is why I've spent two weeks building up to all of this, and why I'm devoting an entire tenth of this course to the subject.

Playtesting: The Ultimate Solution?
There are actually a lot of different methods of situational balancing. Since the answer to "how valuable is it?" is always "it depends!", the best way to approach this is thorough playtesting, to figure out where various situations land on your cost curve. So "playtest, playtest, playtest" is good advice, but not a complete answer. Unfortunately, we don't always have unlimited playtest budgets in the Real World, and even if we do have unlimited budgets we still have to start somewhere, so we at least need to make our best guess.

A Simple Example: d20
Let's start with a very simple situation. What follows is a very, very oversimplified version of the d20 combat system, which was used in D&D 3.0 and up. Here's how it works: each character has two stats, their Base Attack Bonus (or "BAB," which defaults to 0) and their Armor Class (or "AC," which defaults to 10). Each round, each character gets to make one attack against one opponent. To attack, they roll 1d20, add their BAB, and compare the total to the target's AC. If the attacker's total is greater or equal, they hit and do damage; otherwise, they miss and nothing further happens. By default, with no bonuses, you should be hitting about 55% of the time.

Here's the question: are BAB and AC balanced? That is, if I gave you an extra +1 to your attack, is that equivalent to +1 AC? Or is one of those more powerful than the other? This is actually something I was asked once on a job interview (and yes, I got the job). When I was asked, I realized that I didn't know how much damage you did, or how many hit points you had (that is, how many times could you survive being hit, and how many times would you have to hit something else to kill it). But assuming these are equal (or equivalent), it doesn't actually matter: whether you have to hit an enemy once or 5 times or 10 times to kill it, as long as you are equally vulnerable, the question is the same. So, if I were interviewing you for a job right now, what would you say? Think about it for a moment before reading on. What's the central resource? Here's my solution (yours may vary).
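The 55% figure, and the effect of each +1, can be verified by brute force over all twenty die faces. A quick Python sketch (the function name is mine):

```python
def hit_chance(bab, ac):
    # Count the d20 faces on which (roll + BAB) >= AC.
    return sum(1 for roll in range(1, 21) if roll + bab >= ac) / 20

print(hit_chance(0, 10))  # 0.55 -- the default 55% mentioned above
print(hit_chance(1, 10))  # 0.6  -- an extra +1 BAB is +5% to hit
print(hit_chance(0, 11))  # 0.5  -- an extra +1 AC is -5% to be hit
```

Each +1, on either stat, moves one roll's worth of probability, i.e. 5%.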

Let’s ignore this extreme for the time being. +1 BAB is more powerful because each of us is attacking every round. and I’m in a party of four adventurers ganging up on a lone giant? Here. (This is from my experience. for example. and there are four enemies surrounding me? Now I only get to attack once for every four times the opponents attack. Using the central resource to derive balance So. If both me and the enemy have a 5% chance of hitting each other. Or what if it’s the other way around. This is probably why the default is +0 BAB. in each case. our relative hit percentages are changed by exactly the same amount.) This means that in everyday use. But either way. this is an oversimplification. Either way. as a function of how much you outnumber or are outnumbered. Aside from making your stats more balanced. if you wanted AC and BAB to be equivalent and balanced with each other. (One exception is if either hit percentage goes above 100% or below 0%. one on one. Hit percentage is the central resource that everything has to be balanced against. you can change AC on your cost curve to be more valuable and thus more costly. 10 AC. because I’m making a roll that involves my AC four times as often as I make a roll involving my BAB. And they’re going to hit you a certain percentage of the time. we’ll hit each other just about every turn. at which point extra plusses do nothing for you. it does not reflect on the actual balance of D&D at all. Another thing you can do. the two are indeed equivalent. Now. in most D&D games. is to change the mix of encounters in your game so that the player is outnumbered about half the time. this also adds some replay value to your game: going through the game with high BAB is going to give a very different experience than going through the game with a high AC. so +1 AC is much more powerful here. Implications for Game Design If you’re designing a game where you know what the player will encounter ahead of time – say. 
are AC and BAB balanced? +1 BAB gives me +5% to my chance to hit. and they outnumber the enemy about half the time. In our simplified d20 system. the two stats are not equivalent on the cost curve. And if we attached numerical cost and benefit values to hit percentage. What it comes down to is this: you want your hit percentage to be higher than theirs.percentage of the time. at least. and +1 AC gives me -5% to my opponent’s chance to hit. it feels more epic that way. some encounters are going to be a lot harder than others. AC is more powerful than BAB. we’ll exchange blows about as often as not.) What if I’m not fighting one on one? What if my character is alone. GMs are fond of putting their adventuring party in situations where they’re outnumbered. giving the player a different perspective and different level of challenge in each encounter. and the value of defending is higher if you’re outnumbered. so if I’m fighting against a single opponent on my own. so there’s no real advantage to one or the other. an FPS or RPG with hand-designed levels – then you can use your knowledge of the upcoming challenges to balance your stats. If we both have a 95% chance of hitting each other. as I said. if you know that the player is mostly fighting combats where they’re outnumbered. The Cost of Switching What if D&D worked in such a way that you could freely convert AC to BAB at the start of a . on average we’ll both hit each other very infrequently. assuming the giant can only attack one of us at a time. so that it would take a lot of bonuses and be exceedingly unlikely to ever reach that point. but only one of us is actually getting attacked. we could even calculate how much more powerful these values are. In practice. But we can see something interesting even from this very simple system: the value of attacking is higher if you outnumber the opponent. even though the game behaves like they should be.
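The outnumbering argument can be made concrete with a toy model. A Python sketch (the names, and the "everyone attacks me while I attack one of them" setup, are my simplifying assumptions):

```python
def hit_chance(bab, ac):
    # d20 attack: 1d20 + BAB >= AC hits; clamp to the 0..1 range.
    return max(0.0, min(1.0, (21 + bab - ac) / 20))

def net_hits_per_round(my_bab, my_ac, enemies, enemy_bab=0, enemy_ac=10):
    # Expected hits I land minus expected hits I take each round, when
    # `enemies` identical opponents all attack me while I attack only one.
    return hit_chance(my_bab, enemy_ac) - enemies * hit_chance(enemy_bab, my_ac)

# One on one, +1 BAB and +1 AC produce an identical 5% swing:
print(round(net_hits_per_round(1, 10, enemies=1), 2))  # 0.05
print(round(net_hits_per_round(0, 11, enemies=1), 2))  # 0.05
# Outnumbered four to one, +1 AC discounts four attack rolls, +1 BAB only one:
print(round(net_hits_per_round(1, 10, enemies=4), 2))  # -1.6
print(round(net_hits_per_round(0, 11, enemies=4), 2))  # -1.45, the better deal
```

Swapping in other encounter mixes shows how the relative worth of the two stats shifts exactly in proportion to how often each die roll happens.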

The Cost of Switching
What if D&D worked in such a way that you could freely convert AC to BAB at the start of a combat, and vice versa? Now all of a sudden they are more or less equivalent to each other, and suddenly a +1 bonus to either one is much more powerful and versatile relative to any other bonuses in the rest of the game. Okay, maybe you can't actually do that in D&D, but there are plenty of games where you can swap out one situational thing for another. What's the lesson here? We can mess with the situational balance of a game simply by modifying the cost of switching between different tools, weapons, stat distributions, or overall strategies. That's fine as a general theory, but how do the actual numbers work? Let's see…

Example: Inability to Switch
Let's take one extreme case, where you can't switch strategies at all. An example might be an RPG where you're only allowed to carry one weapon and equip one armor at a time, and whenever you acquire a new one it automatically gets rid of the old. In this case, the calculation is pretty straightforward, because this is your only option, so we have to look at it across all situations. It's a lot like an expected value calculation: you ask, in what situations does this object have a greater or lesser value, and by how much? How often do you encounter those situations? Multiply and add it all together, just as you normally would with transitive mechanics.

Here's a simple, contrived example to illustrate the math: suppose you have a sword that does double damage against Dragons, and suppose 10% of the meaningful combats in your game are against Dragons. Let's assume that in this game, damage has a linear relationship to your cost curve, so doubling the damage of something makes it exactly twice as good. So, 90% of the time the sword is normal, and 10% of the time it's twice as good: 90%*1.0 + 10%*2.0 = 110% of the cost. In other words, "double damage against dragons" is a +10% modifier to the base cost.

Here's another example: you have a sword that is 1.5x as powerful as the other swords in its class, but it only does half damage against Trolls. And let's further assume that half damage is actually a huge liability; in fact, let's say that "half damage" actually makes the sword a net negative, because it takes away your primary way to do damage, so you have to rely on other sources that are less efficient, and it greatly increases the chance you're going to get yourself very killed if you run into a troll at a bad time. But let's also say that trolls are pretty rare; maybe only 5% of the encounters in the game are against trolls. So if a typical sword at this level has a benefit of 100 (according to your existing cost curve), a 1.5x powerful sword would have a benefit of 150, and maybe a sword that doesn't work actually has a cost of 250, because it's just that deadly to get caught with your sword down. The math says: 95%*150 + 5%*(-250) = 130. So this sword has a benefit of 130, or 30% more than a typical sword.

Now, you can see that there are actually a lot of ways you can change this; there are a lot of design "knobs" you can turn to mess with the balance here. You can obviously change the cost and benefit of an object, maybe adjusting the damage in those special situations to make it better or worse when you have those rare cases where it matters, or adjusting the base abilities to cover every other situation. You can also change the frequency of situations, say by increasing the number of trolls or dragons the player encounters – either in the entire game, or just in the area surrounding the place where they'd get that special sword. (After all, if the player is only expected to use this sword that loses to trolls in one region in the game that has no trolls, it's not really much of a drawback, is it? Even if the rest of the game is covered in trolls.) It all depends on what's right for your game.

Another Example: No-Cost Switching
Now let's take the other extreme, where you can carry as many situational objects as you want and use or switch between them freely. First-Person Shooters are a common example, where you might be carrying several weapons at a time: maybe a rocket launcher against big slow targets or clusters of enemies, a sniper rifle to use at a distance against single targets, and a knife for close-quarters combat. Each of these weapons is situationally useful some of the time, but as long as you can switch from one to another with minimal delay, it's not really much of a drawback that each one is limited, because in a lot of games there is no opportunity cost to gaining a new capability. In this case, it's the sum of weapon capabilities that matters rather than individual weapon limitations, and a pile of weapons where each is the perfect tool for a single situation is much better than a single jack-of-all-trades, master-of-none weapon. So it is a similar calculation, except in most cases we ignore the bad parts, because you can just switch away from those.

Now, suppose we made the cost of switching weapons higher, maybe a 10-second delay to put one weapon in your pack and take out another (which, when you think about it, seems a lot more realistic – I mean, seriously, if you're carrying ten heavy firearms around with you and can switch without delay, where exactly are you carrying them all?). Now all of a sudden the limitations of individual weapons play a much greater role, and a single general-purpose weapon may end up becoming more powerful than a smorgasbord of situational weapons.

In most games, you can switch between objects but there's some cost to switching (a time cost, a money cost, or whatever), so you'll use a method that lies somewhere between the "can't change at all" and "can change freely and instantly" extreme scenarios. In practice, it's usually not that simple, and we run into a problem. In these cases, we look at the benefits of all of the player's objects collected so far, and figure out what this new object will add that can't be done better by something else. Multiply the extra benefit by the percentage of time that the benefit is used, and there is your additional benefit from the new object. However, the player may pick up new objects in a different order on each playthrough, and an object might be used more or less often depending on what other tools the player has already acquired. Also, the player may be able to use suboptimal strategies if they haven't acquired exactly the right thing for this one situation (in fact, some players love that kind of thing, and it's probably better for most games to be designed that way), and they're at least somewhat likely to use this new toy in situations where it's not perfect (they haven't got the perfect toy for that situation yet) but it's at least better than their other toys. End result: you don't actually know how often something will be used.

Let's take an example. Maybe you have a variety of swords, each of which does major extra damage against a specific type of monster, a slight bump in damage against a second type of monster, and is completely ineffective against a third type of monster. Suppose there are ten of these swords, and ten monster types in your game, and the monster types are all about as powerful and encountered about as frequently. It doesn't take a mathematician to guess that these swords should all cost the same. But playing through this game, we would quickly realize that they do not actually give equal value to the player at any given point in time. For example, say I've purchased a sword that does double damage against Dragons and 1.5x damage against Trolls. Now there's a sword out there that does double damage against Trolls, but that sword is no longer quite as useful to me as it used to be: I'm now going from a 1.5x multiplier to 2x, not 1x to 2x, so there's less of a gain there. Extra swords have diminishing returns; if I fully optimize, I can probably buy about half the swords in the game and have at least some kind of improved multiplier against most or all monsters.

How do you balance a system like that? There are a few methods for this that I could think of, and probably a few more I couldn't:
• Give a discount: One way is to actually change costs on the fly. Work it into your narrative that the more swords you buy from this merchant, the more he discounts additional swords because you're such a good customer (you could even give the player a "customer loyalty card" in the game and have the merchant put stamps on it). Tricky!
• Let the player decide: Or, you could balance everything assuming the player has nothing. If they are buying multiple items in the game that make their character more versatile and able to handle more situations, then yes, there will be a law of diminishing returns here, and it's up to the player to decide how many is enough; consider that part of the strategy of the game.
• Discount swords found later in the game: Or, you can spread out the locations in the game where the player gets these swords, so that you know they'll probably buy certain ones earlier in the game and other ones later. You can then cost them differently, because you know that when the player finds certain swords, they'll already have access to other ones, and you can reduce the costs of the newer ones accordingly.
• Let the increasing money curve do the "discount" work for you: Maybe if the player is getting progressively more money over time, keeping the costs constant will itself be a "diminishing" cost to compensate, since each sword takes the player less time to earn enough to buy it.

The Cost of Versatility
Now, we've touched on this concept of versatility from a player perspective. What about when the objects themselves are versatile? This happens a lot in real-time and turn-based strategy games, where individual units may have several functions. For example, maybe archers are really strong against fliers and really weak against footmen (a common RTS design pattern), but maybe you want to make a new unit type who are strong against both fliers and footmen, but not as strong as archers. So maybe an archer can take down a flier and take next to no damage, but this new unit would go down to about half HP in combat with a flier (it would win, but at a cost). This new unit isn't as good against fliers, but it is good for other things, so they're more versatile. So we come back to the fact that versatility comes in two flavors:
• The ability of an individual game object to be useful in multiple situations
• The ability of the player to swap out one game object for another

How much are these kinds of versatility worth? Here's the key: versatility has value in direct proportion to uncertainty. Take an example from first-person shooters: knives and swords are usually the best weapons when you're standing next to an opponent, while sniper rifles are great from a distance, but a machine gun is moderately useful at most ranges (but not quite as good as anything else). If you know ahead of time you're playing on a small map with tight corridors and lots of twists and turns, knives are going to be a lot more useful than sniper rifles. On a map with large, open space, it's the other way around. Suppose instead you have a random map, so there's a 50/50 chance of getting either a map optimized for close quarters or a map optimized for distance attacks. Now what's the best strategy? Taking the versatile weapon that's mildly useful in each case but not as good as the best weapon means you'll win against people who guessed poorly and lose against people who guessed well; taking a specialist weapon is a random guess, and then most of the game comes down to who guessed right. There is no best strategy here, and this kind of choice is actually not very interesting: the players must choose blindly ahead of time. Unless, that is, they're given a mechanism for changing weapons during play in order to adjust to the map, or they can take multiple weapons with them. And if you have a single map with some tight spaces and some open areas, a versatile weapon that can serve both roles (even if only mediocre) is much more valuable: you'll never get caught with a totally ineffective weapon if you've got a machine gun, but you'll also never have the perfect weapon for the job if you're operating at far or close range a lot.
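The sword arithmetic above is just a probability-weighted sum, and the same weighting prices versatility once you know how often each situation comes up. A Python sketch (the function name is mine; the numbers are the ones from the sword examples in the text):

```python
def situational_value(cases):
    # cases: (probability, benefit) pairs covering every situation;
    # the situational benefit is the probability-weighted sum.
    return sum(p * b for p, b in cases)

# The double-damage-vs-Dragons sword: normal 90% of the time, twice as
# good in the 10% of fights that are against Dragons.
print(round(situational_value([(0.90, 1.0), (0.10, 2.0)]), 3))  # 1.1 -> 110% of cost

# The 1.5x sword that's a liability against Trolls: benefit 150 in the
# 95% of fights without trolls, a net -250 in the 5% with them.
print(situational_value([(0.95, 150), (0.05, -250)]))  # 130.0
```

For the versatility question, you'd run the same calculation per weapon over the probability of each map type; the more uncertain the situation mix, the better the mediocre-everywhere option scores relative to the specialists.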

Shadow Costs

Now, before we move on with some in-depth examples, I want to write a little bit about different kinds of costs that a game object can have. I should have talked about this when we were originally talking about cost curves, but in practice these seem to come up more often in situational balancing than in other areas, so I’m bringing it up now.

Broadly speaking, we can split the cost of an object into two categories: the resource cost, and Everything Else. If you remember when doing cost curves, I generally said that any kind of drawback or limitation is also a cost, so that’s what I’m talking about here. Economists call these shadow costs; they are a cost that’s hidden behind the dollar cost. If you buy a cheap clock radio for $10, there is an additional cost in time (and transportation) to go out and buy the thing. If it then breaks in a few months because of its cheap components and you have to go replace or return it, then that is an extra time cost. And if it doesn’t go off one morning when you really need it to, because the UI is poorly designed and you set it for PM instead of AM, then missing an appointment because of the poor design costs you additional time and money. And so on… so it looks like it costs $10, but the actual cost is more, because it has these shadow costs that a better-quality clock radio might not have.

In games, there are two kinds of shadow costs that seem to come up a lot in situational balance: sunk costs and opportunity costs. Let me explain each.

Sunk Costs

By sunk costs, I’m talking about some kind of setup cost that has to be paid first, before you gain access to the thing you want to buy in the first place. One place you commonly see these is in tech trees in RTSs. For example, in an RTS, in order to build certain kinds of units, you first typically need to build a structure that supports them. The structure may not do anything practical or useful for you, other than allowing you to build a special kind of unit, and that cost is in addition to each unit – so that’s part of the cost as well.

As an example, each Dragoon unit in StarCraft costs 125 minerals and 50 gas (that is its listed cost), but you had to build a Cybernetics Core to build Dragoons, and that cost 200 minerals. Oh, and by the way, you can’t build a Cybernetics Core without also building a Gateway for 150 minerals. So if you build all these structures, use them for nothing else, and then create a single Dragoon, that one guy costs you a total of 475 minerals and 50 gas, which is a pretty huge cost compared to the listed cost of the unit itself! Of course, you only have to pay the build cost for those buildings once (well, under most cases anyway), so the cost may be “amortized” (spread out) over multiple purchases: if you build ten Dragoons, the cost of each is reduced to 160 minerals and 50 gas, a lot closer to the listed cost.

Strictly speaking, if you have to pay some kind of cost just for the privilege of paying an additional cost, you need to factor in the up-front costs as well. When costing Dragoons, the original sunk cost has to be balanced based on its expected value: how many Dragoons do you expect to build in typical play? When the cost may be amortized over multiple purchases, you need to be careful to factor that into your analysis. And if you get additional benefits from those buildings, like them letting you build other kinds of units or structures or upgrades that you take advantage of, then effectively part of the cost of those buildings goes to other things, so you can consider it to not even be part of the Dragoon’s cost.

You can also look at this the other way, if you’re costing the prerequisite (such as those structures you had to build in order to buy Dragoon units): ask not just “what does this do for me now” but also “what kinds of options does this enable in the future?” You can see sunk costs in other kinds of games with tech trees, too – MMOs and RPGs. For example, in some RPGs or MMOs with tech trees, you might see some special abilities you can purchase on level-up that aren’t particularly useful on their own; maybe they’re even completely worthless… but they’re prerequisites for some really powerful abilities you can get later.
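To make the Dragoon arithmetic concrete, here is the amortization as a tiny sketch (the function name is mine; the numbers are the ones quoted above):

```python
# The Dragoon arithmetic from the text: a 125-mineral listed cost plus a
# one-time sunk cost (200 for the Cybernetics Core + 150 for the Gateway),
# spread over however many Dragoons you actually build.

def effective_mineral_cost(units_built, listed=125, sunk=200 + 150):
    """Listed mineral cost plus the sunk cost amortized across all units built."""
    return listed + sunk / units_built

print(effective_mineral_cost(1))   # 475.0 minerals (plus 50 gas) for a lone Dragoon
print(effective_mineral_cost(10))  # 160.0 minerals each across ten Dragoons
```

The expected-value question is then just: which `units_built` is typical in real play?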

Opportunity Costs

The second type of hidden cost, which I’m calling an opportunity cost here, is the cost of giving up something else. Basically, in any situation where taking an action in the game prevents you from taking certain other actions later on, your action has a special kind of shadow cost: in addition to the cost of taking the action right now, you also pay a cost later in decreased versatility (not just resources). It adds a constraint to the player. How much is that constraint worth as a cost? Well, that’s up to you to figure out for your particular game situation. But remember that it’s not zero, and be sure to factor this into your cost curve analysis.

An example, also from games with tech trees, might be if you reach a point where you have to choose one branch of the tech tree: if you take a certain feat or learn a certain tech or whatever, it prevents you from learning something else, reducing your versatility. If you learn Fire magic, you’re immediately locked out of all the Ice spells, and vice versa. This can lead to interesting kinds of short-term/long-term decisions, where you could take a powerful ability now, or a less powerful ability now to get a really powerful ability later. This happens in questing systems, too: if you don’t blow up Megaton, you don’t get the Tenpenny Tower quest. These can even happen in tabletop games: one CCG I worked on had mostly neutral cards, but a few that were “good guy” cards and a few that were “bad guy” cards, and if you played any “good guy” cards it prevented you from playing “bad guy” cards for the rest of the game (and vice versa), so any given deck basically had to only use good or bad but not both.

I’ve seen some RPGs where the player has a choice between paying for consumable or reusable items. The consumables are much cheaper, of course, but you only get to use them once. For example, you can either buy a Potion for 50 Gold, or a Potion Making Machine for 500 Gold, and in that case you’d buy the machine if you expect to create more than ten Potions. Or you pay for a one-way ticket on a ferry for 10 Gold, or buy a lifetime pass for 50 Gold, and you have to ask yourself whether you expect to ride the ferry more than five times. Or you consider purchasing a Shop Discount Card which gives 10% off all future purchases, but it costs you 1000 Gold to purchase the discount in the first place, so you have to consider whether you’ll spend enough at that shop to make the discount card pay for itself (come to think of it, the choice to buy one of those discount cards at the real-world GameStop down the street requires a similar calculation). These kinds of choices aren’t always that interesting, because you’re basically asking the player to estimate how many times they’ll use something… but without telling them how much longer the game is or how many times they can expect to use the reusable thing, so it’s a kind of blind decision. If we do it right, as designers, we know the answer, and we can do our own expected-value calculation and balance accordingly. Still, our players will trust that the cost is relative to the value by the time they have to make the buy-or-not decision in our games.
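Each of those consumable-versus-reusable choices reduces to the same break-even calculation (the function name is mine; the numbers are the ones from the examples above):

```python
# Consumable vs. reusable: the reusable version pays off once your expected
# number of uses exceeds the ratio of the two prices.

def break_even_uses(consumable_cost, reusable_cost):
    """Number of uses at which the reusable item matches the consumable price."""
    return reusable_cost / consumable_cost

print(break_even_uses(50, 500))  # 10.0 -> buy the machine if you'll brew >10 Potions
print(break_even_uses(10, 50))   # 5.0  -> buy the pass if you'll ride >5 times
```

As designers, we can run this with the *actual* expected number of uses, which the player never gets to see.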

Versatility Example

How do the numbers for versatility actually work in practice? That depends on the nature of the versatility, and on the cost and difficulty of switching. Here’s a contrived example: you’re going into a PvP arena, and you know that your next opponent either has an Ice attack or a Fire attack – but never both and never neither, always one or the other. You can buy a Protection From Ice enchantment which gives you protection from Ice attacks, or a Protection From Fire enchantment which gives you protection from Fire attacks (or both, although that’s kind of expensive). Let’s say both enchantments cost 10 Gold each, and you can’t switch protections once you enter the arena. Now, suppose we offer a new item, Protection From Elements, which gives you both enchantments as a package deal. How much should it cost? (“It depends!”) Okay, what does it depend on? If you’ve been paying attention, you know the answer: it depends on how much you know about your next opponent up front.

If you know ahead of time that they will be using, say, a Fire attack, then the package should cost the same as Protection From Fire: 10 Gold. The “versatility” here offers no added value, because you already know the optimal choice.

If you have no way of knowing your next opponent’s attack type until it’s too late to do anything about it, you have to buy both if you want to be sure, so Protection From Elements should cost 20 Gold, the same as buying both. Here, the versatility offers you exactly the same added value as just buying both things individually; there’s no in-game difference between buying them separately or together (so that last one sounds kind of strange as a design).

Ah, but what if you have the option to buy one before the combat, and then, if the combat starts and you realize you guessed wrong, you can immediately call a time-out and buy the other one? In this case, you would normally spend 10 Gold right away, and there’s a 50% chance you’ll guess right and only have to spend 10 Gold, and a 50% chance you’ll guess wrong and have to spend an additional 10 Gold (or 20 Gold total) to buy the other one. The expected value here is (50%*10) + (50%*20) = 15 Gold, so that is what the combined package should cost in this case.

What if the game is partly predictable? Say you may have some idea of whether your opponent will use Fire or Ice attacks, but you’re not completely sure. Then the optimal cost for the package will be somewhere between the extremes, depending on exactly how sure you are, and it depends on the cost of switching from one to the other if you change your mind later.

What might be a situation in a real game where you have some idea, but not a complete idea, of what your opponent is bringing against you? As one example, in an RTS, I might see some parts of the army my opponent is fielding against me. Here, a unit that is versatile offers some value (my opponent might have some tricks up their sleeve that I don’t know about yet) but not complete value (I do know SOME of what the opponent has, so that gives me a partial, but not complete, sense of what I’m up against, and I can choose what units to build accordingly – so there’s also value in building troops that are strong against their existing army).

Case Studies in Situational Balance

So, with all of that said, let’s look at some common case studies.

Single-target versus area-effect (AoE)

For things that do damage to multiple targets instead of just one at a time, how much of a benefit is that splash damage? The answer is generally: other things being equal, take the expected number of things you’ll hit, and multiply. For example, if enemies come in clusters from 1 to 3 in the game, evenly distributed, then on average you’ll hit 2 enemies per attack, so splash damage is twice the benefit.

A word of warning: “other things being equal” is really tricky here, because generally other things aren’t equal in this case. For example, in most games, enemies don’t lose offensive capability until they’re completely defeated, so just doing partial damage isn’t as important as doing lethal damage. Spreading out the damage slowly and evenly can be less efficient than using single-target high-power shots to selectively take out one enemy at a time, since the latter reduces the offensive power of the enemy force with each shot, while the area-effect attack doesn’t do that for awhile. Also, if the enemies you’re shooting at have varying amounts of HP, an area-effect attack might kill some of them off but not all, reducing a cluster of enemies to a few corpses and a smaller group (or lone enemy), which then reduces the total damage output of your subsequent AoE attacks – that is, AoE actually makes itself weaker over time as it starts working! So this is something you have to be careful of as well: look at typical encounters, how often enemies will be clustered together, and also how long they’ll stay that way throughout the encounter.
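Both the 15-Gold package price and the two-targets-per-splash figure come from the same expected-value arithmetic. A minimal sketch (the helper name is mine):

```python
# The same expected-value pattern underlies both calculations in this section:
# weight each outcome by its probability, then sum.

def expected_value(outcomes):
    """outcomes is a list of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# Arena package with a mid-fight time-out: 50% you spend 10 Gold total,
# 50% you spend 20 Gold total, so the package is worth 15 Gold.
package_price = expected_value([(0.5, 10), (0.5, 20)])

# AoE splash: clusters of 1 to 3 enemies, evenly distributed -> 2 targets on average.
avg_targets = expected_value([(1 / 3, 1), (1 / 3, 2), (1 / 3, 3)])

print(package_price, avg_targets)  # 15.0 and (approximately) 2.0
```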

Attacks that are strong (or weak) against a specific enemy type

We did an example of this before, with dragons and trolls. In these cases, it’s tempting to cost the attack according to the likelihood that it will be useful, and when the situation is outside of player control, this would be a good method: multiply the extra benefit (or liability) as if it were always in effect during all encounters, by the expected percentage of the time it actually will matter (that is, how often do you encounter the relevant type of enemy). For example, if I have a card that does 10 damage against a player who is playing Red in Magic, and I know that most decks are 2 or 3 colors, so maybe 40-50% of the time I’ll play against Red in open play, then we would cost this the same as a card that did 4 or 5 damage against everyone. The trick here is that you have to be very careful of what the extra benefit or liability is really worth, because something like “double damage” or “half damage” is rarely double or half the actual value, as we saw in that earlier example.

Metagame objects that you can choose to use or ignore

Sometimes you have an object that’s sometimes useful and sometimes not, but it’s at the player’s discretion whether to use it – so if the situation doesn’t call for it, they simply don’t spend the resources. Examples are situational weapons in an FPS that can be carried as an “alternate” weapon, specialized units in an RTS that can be built when needed and ignored otherwise, or situational cards in a CCG that can be “sideboarded” against relevant opponents. Note that in these cases, they are objects that depend on things outside of player control: what random map you’re playing on, what units your opponent is building, what cards are in your opponent’s deck, and so on. But a lot depends on exactly how the game is structured.

With a specialized RTS unit, assuming it costs you nothing to earn the capability of building it, you lose nothing by simply not exercising your option to build it: if it’s useless most of the time, you simply don’t build it. But when it is useful, you will build it – and you do know what your opponent is doing, so you’ll know that it is useful in that case. So it should be costed with the assumption that whatever situation it’s built for actually happens 100% of the time. (If you must pay extra for the versatility of being able to build the situational unit in the first place, that cost is what you’d want to adjust based on a realistic percentage of the time that such a situation is encountered in real play.)

With an alternate weapon in an FPS, if the weapons are all free (no “resource cost”) but you can only select one main and one alternate, then you need to make sure the alternates are balanced against each other: that each one is useful in equally likely situations, or at least that, if you multiply the situational benefit by the expected probability of receiving that benefit, the result is the same across all weapons (so you might have a weapon that’s the most powerful in the game but only in a really rare situation, versus a weapon that’s mediocre but can be used just about anywhere, and you can call those balanced if the numbers come out right).

Likewise, if the player must choose to use an object or not before play begins, with no knowledge of whether the opponent is playing Red or not, the calculation changes again. In a Magic tournament, matches are played best-of-3, and after playing the first game you are allowed to swap some cards into your deck from a “Sideboard”. You could put that 10-damage-to-Red card aside, not play with it your first game, and then bring it out on subsequent games only if your opponent is playing Red. Played this way, you are virtually assured that the card will work 100% of the time; the only cost to you is a discretionary card slot in your sideboard, which is a metagame cost. So the best we can say is that it should cost a little less to compensate for the metagame cost, but it probably shouldn’t be half off like it would be if the player always had to use it… unless we want to really encourage its use as a sideboard card by intentionally undercosting it. As we learned a few weeks ago, trying to cost something in the game based on the metagame is really tricky.
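The two regimes are worth seeing side by side. A one-line sketch (the function name is mine): blind play discounts the card to its expected value, while sideboard play pushes the probability of relevance back toward 1.

```python
# Costing a situational effect: scale the full effect by the probability
# that the situation actually comes up.

def effective_power(full_power, p_situation):
    """Expected power of an effect that only works a fraction of the time."""
    return full_power * p_situation

print(effective_power(10, 0.40))  # 4.0  -> at 40% Red decks, like a 4-damage card
print(effective_power(10, 0.50))  # 5.0  -> at 50%, like a 5-damage card
print(effective_power(10, 1.00))  # 10.0 -> sideboarded in only when it matters
```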

Metagame “combos”

Now, we just talked about situations where the player has no control. But what if they do have control… that is, what if something isn’t particularly useful on its own, but it forms a powerful combo with something else? Examples would be dual-wielding in an FPS, situational cards that you build your deck around in a CCG, support towers in a Tower Defense game that only improve the towers next to them, “support” character classes in an MMO or multiplayer FPS, and so on. These are situational in a different way: they reward the player for playing the metagame in a certain way, in order to make that thing useful. These are hard to balance against one another directly. To understand how to balance these, we first have to return to the concept of opportunity costs from earlier.

Let’s start with an extreme example: Magic had two cards, Lich and Mirror Universe.

• Lich reduced you to zero life points, but added additional rules that effectively turned your cards into your life. This card on its own was incredibly risky, because if it ever left play you would still have zero life, and thus lose the game immediately! Lich does provide some other benefits (like drawing cards as an effect when you gain life), but with a pretty nasty drawback.

• Mirror Universe was a card that would swap life totals with your opponent – not as risky as Lich, since you’re in control of when to use it, but looking at what actually happened in the game, it was of questionable value, because it basically just helped you out if you were losing. It has a kind of psychological benefit, in that your opponent might hold off attacking you because they don’t want to almost kill you; but cards that are the most useful when you’re losing mean that you’re playing to lose, which isn’t generally a winning strategy. Mirror Universe has no drawback, but it’s still only useful when you’re losing, and not particularly easy to use effectively.

• But combined… the two cards, if uncountered, immediately win you the game by reducing your life to zero and then swapping totals with your opponent: an instant win!

How do you balance something like this? Here we have two cards that are individually all but useless, but combined with each other are all-powerful; they don’t really work that well in any other context. There are a few ways we could go about balancing things like this. One is to take the combo in aggregate and balance that, then try to divide that up among the individual components based on how useful they are outside of the combo. How do you split the cost between them – should one be cheap and the other expensive, or should they both be about the same? Weight them according to their relative usefulness; in this case, neither card is worth much without the other, so their costs are comparable. The best answer for a situation like this might be to err on the side of making their combined cost equal to a similarly powerful game-winning effect, perhaps marked down slightly because it requires a two-card combination (which is harder to draw than just playing a single card).

Now, this is a pretty extreme example. How about a slightly less extreme one? A support character class in an MMO can offer lots of healing and attribute bonuses that help the rest of their team. On their own they do have some nonzero value (they can always attack an enemy directly if they have to; if they can heal and buff themselves they might even be reasonably good at it; and in any case they’re still a warm body that can distract enemies by giving them something else to shoot at). But their true value shows up in a group, where they can take the best members of a group and make them better.

Let’s take a simple example. Suppose your support character has a special ability that increases a single ally’s attack value by 10%, and that they can only have one of these active at a time. To figure out what that’s worth, you want to find the expected benefit of that ability so you can come up with an appropriate cost. In a group, we might assume a party of adventurers of similar level, find the character class in that group with the highest attack value, and find our expected attack value for that character class; this “attack buff” support ability would be worth about 10% of that value.
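That expected-benefit calculation is short enough to sketch directly (the function name and the party’s attack values are my own made-up illustration):

```python
# Expected benefit of the +10% single-ally attack buff: since the player will
# buff the strongest attacker, it's worth 10% of the best attack value in
# the party.

def buff_benefit(party_attack_values, buff_fraction=0.10):
    """Expected value of a buff the player will always place optimally."""
    return buff_fraction * max(party_attack_values)

party = [55, 80, 120, 40]   # hypothetical attack values for a similar-level group
print(buff_benefit(party))  # 12.0 -> cost the ability like ~12 points of attack
```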

Obviously it would be less useful if questing solo, or with a group that doesn’t have any good attackers, so you’d have to figure the percentage of time that you can expect this kind of support character to be traveling with a group where this attack boost is useful. In practice, the opportunity cost for including an attacker in your party is pretty low (most groups will have at least one of those anyway), so this support ability is almost always going to be operating at its highest level of effectiveness.

What do the Lich/Mirror Universe example and the support class example have in common? When dealing with situational effects that players have control over, the cost should be computed under best case situations, not some kind of “average” case: if the players are in control of whether they use each part of the combo, we can assume they’re going to use it under optimal conditions. Beyond that, a rule of thumb is to figure out the opportunity costs for them setting up that situation, and factor that in as a “cost” to counteract the added situational benefit.

Multi-class characters

As long as we’re on the subject of character classes, how about “multi-class” characters that are found in many tabletop RPGs? The common pattern is that you gain versatility, in that you have access to the unique specialties of several character types… but in exchange for that, you tend to be lower level and less powerful in all of those types than if you were dedicated to a single class, so a Level 5 Fighter/Thief is probably not as good as a Level 10 Fighter or a Level 10 Thief. How much less powerful do you have to be so that multi-classing feels like a viable choice (not too weak), but not one that’s so overpowered that there’s no reason to single-class?

This is a versatility problem. The player typically doesn’t know what kinds of situations their character will be in ahead of time; after all, if they knew exactly what to expect, they would pick and choose the most effective single character class and ignore the other! However, they do probably have some basic idea of what they’re going to encounter, or at least what capabilities their party is going to need that are not yet accounted for, so they’re trying to prepare for a little of everything. Since the player must choose ahead of time what they want, and they typically can’t change their class in the middle of a quest, they are more constrained than they would otherwise be. As a point of reference, a Level 10 single-class is usually about as powerful as a Level 7 or 8 dual-class, so you’ll probably do well with a starting guess of making a single class 1.5x as powerful as a multi-class, and then adjusting downward from there as needed.

Either-or choices from a single game object

Sometimes you have a single object that can do one thing or another, player’s choice, but not both (the object is typically either used up or irreversibly converted as part of the choice). Maybe you have a card in a CCG that can bring a creature into play or make an existing one bigger. Or you have a lump of metal in an RPG that can be fashioned into a great suit of armor or a powerful weapon. Or you’re given a choice to upgrade one of your weapons in an FPS. What does the cost depend on? This is a versatility problem, so it depends on the raw benefit of each choice, the cost/difficulty of changing strategy in mid-game, and the foreknowledge of the player regarding the challenges that are coming up later. In these kinds of cases, assuming the player knows the value of the things they’ll get (but they can only choose one), the actual benefit is probably going to be more than either option individually, but less than all choices combined, depending on the situation.

The Difference Between PvE and PvP

Designing PvE games (“Player versus Environment,” where it’s one or more players cooperating against the computer, the AI or whatever) is different than designing PvP games (“Player versus Player,” where players are in direct conflict with each other) when it comes to situational balance. PvE games are much easier: as the game’s designer, you’re designing the levels, you’re designing the AI, you’re designing the environment and the system. You already know what is “typical” or “expected” in terms of player encounters, and you can balance it accordingly.
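As a toy illustration of that designer-side expected-value pass (every name and number here is invented, not from any real game):

```python
# In PvE, the designer wrote the encounter tables, so "expected value" is a
# direct lookup rather than a guess about an opponent.

encounter_mix = {"dragon": 0.15, "troll": 0.25, "bandit": 0.60}

def expected_bonus(enemy_type, bonus, mix):
    """Expected per-encounter value of a bonus that only applies to one enemy type."""
    return bonus * mix.get(enemy_type, 0.0)

print(expected_bonus("dragon", 20, encounter_mix))  # ~3.0 extra damage per encounter
```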

Even in games with procedurally-generated content, where you don’t know exactly what the player will encounter, you know the algorithms that generate it (you designed them, after all), so you can figure out the expected probability that the content generator will spit out certain kinds of encounters. Because of this, you can do expected-value calculations for PvE games pretty easily, to come up with at least a good initial guess for your costs and benefits when you’re dealing with the situational parts of your game.

PvP is a little trickier, because players can vary their strategies; “expected value” doesn’t really have meaning when you don’t know what to expect from your opponent. In these cases, playtesting and metrics are the best methods we have for determining typical use, and that’s something we’ll discuss in more detail over the next couple of weeks.

If You’re Working on a Game Now…

Choose one object in your game that’s been giving you trouble: something that seems like it’s always either too good or too weak, and which has some kind of conditional or situational nature to it. (Since situational effects are some of the trickiest to balance, if something has been giving you trouble, it’s probably in that category anyway.)

First, do a thorough search for any shadow costs you may have. What opportunities or versatility do you have to give up in order to gain this object’s capabilities? What other things do you have to acquire first before you even have the option of acquiring this object? Ask yourself what those additional costs are worth, and whether they are factored in to the object’s resource cost.

Next, consider the versatility of the object itself. Is it something that’s useful in a wide variety of situations, or only rarely? How much control does the player have over their situation – that is, if the object is only useful in certain situations, can the player do anything to make those situations more likely, thus increasing the object’s expected value? How easy is it for the player to change their mind, or use other objects or strategies when they need to? (This is the versatility of the player versus the versatility of the object: a more versatile player reduces the value of object-based versatility, since the more easily a player can exchange game objects, the less valuable versatility in a single object becomes.) If the player takes this object but then wants to replace it with something else, is that even possible… and if so, is it easy, or is there a noticeable cost to doing so? How much of a liability is it if the player is stuck in a situation where the object isn’t useful?

Now, consider how the versatility of the game’s systems and the versatility of the individual objects should affect their benefits and costs, and factor that into your numbers. See if looking at that object in a new way has helped to explain why it felt too weak or too powerful. Does this give you more insight into other objects as well, or the game’s systems overall?

Homework

For your “homework”, we’re going to look at Desktop Tower Defense 1.5, which was one of the games that popularized the genre of tower defense games. (I’ll suggest you don’t actually play it unless you absolutely have to, because it is obnoxiously addicting and you can lose a lot of otherwise productive time just playing around with the thing.)

DTD 1.5 is a great game for analysis of situational game balance, because nearly everything in the game is situational! You buy a tower and place it down on the map somewhere, and when enemies come into range the tower shoots at them. Buying or upgrading towers costs money, and you get money from killing the enemies with your towers. Since you have a limited amount of money at any time in the game, your goal is to maximize the total damage output of your towers per dollar spent, so from the player’s perspective this is an efficiency problem.

The situational nature of DTD

So, on the surface, all you have to do is figure out how much damage a single tower will do (and within what range), divide by cost, and take the tower with the best damage-to-cost ratio. Simple, right?

Except that actually figuring out how much damage your towers do is completely situational! Each tower has a range, and how long enemies stay within that range getting shot at depends entirely on where you’ve placed your towers: if you just place a tower in the middle of a bunch of open space, the enemies will walk right by it and not be in danger for long, but if you build a huge maze that routes everyone back and forth in range of the tower in question, its total damage will be a lot higher. Some towers only work against certain types of enemies, or don’t work against certain enemy types, so there are some waves where some of your towers are totally useless to you, even if they have a higher-than-normal damage output at other times. Also, most towers can only shoot one enemy at a time, so if a cluster of enemies walks by, the total damage per enemy is a lot smaller (one enemy gets shot, the others don’t). Other towers do area-effect or “splash” damage, which is great on clusters but pretty inefficient against individual enemies, particularly those that are spaced out because they move fast. One of the tower types doesn’t do much damage at all, but slows down enemies that it shoots, which keeps them in range of other towers for longer, so the benefit depends on what else is out there shooting. And then there’s one tower that does absolutely nothing on its own, but boosts the damage output of all adjacent towers… so this has a variable cost-to-benefit ratio depending on what other towers you place around it. Even more interesting, the most expensive versions of each tower have the most efficient damage-to-cost ratios. So, trying to balance a game like this is really tough, because everything depends on everything else!

Your mission, should you choose to accept it…

Since this is a surprisingly deep game to analyze, I’m going to constrain this to one very small part of the game. In particular, I want you to consider two towers: the Swarm tower (which only works against flying enemies but does a lot of damage to them) and the Boost tower (that’s the one that increases the damage of the towers around it). In the game, the prime spot to put these is right in the center of the map, since there’s this big obstacle that the enemies get to walk around, rather than just having them march through a longer maze. (Placing towers in a giant block, to maximize the effectiveness of this Boost tower, has a hidden cost itself, in that it’s slightly less efficient in terms of usage of space on the board.) Let’s assume you’ve decided to dedicate that twelve-tower area – a little 4×3 rectangular block – to only Swarm and Boost towers, in order to totally destroy the flying enemies that come your way.

To give you some numbers: a fully-upgraded Swarm tower does a base of 480 damage per hit, and costs $640 in the game. A fully-upgraded Boost tower costs $500 and does no damage itself, but improves all adjacent towers (either touching at a side or a corner) by +50%, so in practical terms a Boost tower does 240 damage for each adjacent Swarm tower. Note that two Boost towers adjacent to each other do absolutely nothing for each other – they increase each other’s damage of zero by +50%, which is still zero. Assume all towers will be fully upgraded. Assuming that you’re trying to minimize cost and maximize damage, what’s the optimal placement of these towers?

The most certain way to solve this, if you know any scripting or programming, is to write a brute-force program that runs through all 3^12 possibilities (no tower, Swarm tower or Boost tower in each of the twelve slots). For each slot, count a damage of 480 for a Swarm tower, 240*(number of adjacent Swarm towers) for a Boost tower, and 0 for an empty slot; for cost, count 640 per Swarm tower, 500 for each Boost tower, or 0 for an empty slot. Add up the total damage and cost for each scenario, and keep track of the best damage-to-cost ratio (that is, divide total damage by total cost, and try to get that as high as possible).

If you don’t have the time or skills to write a brute-force program, an alternative is to create an Excel spreadsheet that calculates the damage and cost for a single scenario. Create a 4×3 block of cells that are either “B” (boost tower), “S” (swarm tower), or blank. Below that block, create a second block of cells to compute the individual costs of each cell. The formula might be something like:

=IF(B2="S",640,IF(B2="B",500,0))

Lastly, create a third block of cells to compute the damage of each cell:

=IF(B2="S",480,IF(B2="B",IF(A1="S",240,0)+IF(A2="S",240,0)+IF(A3="S",240,0)+IF(B1="S",240,0)+IF(B3="S",240,0)+IF(C1="S",240,0)+IF(C2="S",240,0)+IF(C3="S",240,0),0))

Then take the sum of all the damage cells, and divide by the sum of all the cost cells. Display that in a cell of its own. From there, all you need to do is play around with the original cells, changing them by hand from S to B and back again, to try to optimize that one final damage-to-cost value.

The final deliverable

Once you’ve determined what you think is the optimal damage-to-cost configuration of Swarm and Boost towers, figure out the actual cost and benefit from the Swarm towers only, and the cost and benefit contributed by the Boost towers. Assuming optimal play, and assuming only this one very limited situation, which one is more powerful – that is, on a dollars-for-damage basis, which of the two types of tower (Swarm or Boost) contributes more to your victory for each dollar spent?

That’s all you have to do, but if you want more, you can then take it to any level of analysis you want – as I said, this game is full of situational things to balance. Flying enemies only come every seventh round, so if you want to compute the actual damage efficiency of our Swarm/Boost complex, you’d have to divide by 7. Then, compare with other types of towers and figure out if some combination of ground towers (for the 6 out of 7 non-flying levels) and the anti-flying towers should give you better overall results than using towers that can attack both ground and air. And then, of course, you can test out your theories in the game itself, if you have the time. I look forward to seeing some of your names in the all-time high score list.
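If you do want to script the brute force over all 3^12 placements, here is a Python sketch (variable names are mine; the numbers are the ones given in the homework: 480 damage/$640 per Swarm, 240-per-adjacent-Swarm/$500 per Boost). Rather than enumerating all 3^12 assignments one by one, it walks the 2^12 possible Swarm layouts, and for each layout uses the fact that the best k Boost towers always occupy the k empty cells with the most adjacent Swarm towers – an equivalent exhaustive search that runs in well under a second.

```python
COLS, ROWS = 4, 3
N = COLS * ROWS  # the twelve slots in the 4x3 block

def adjacent(i):
    """Indices of the up-to-8 cells touching cell i at a side or a corner."""
    c, r = i % COLS, i // COLS
    return [(r + dr) * COLS + (c + dc)
            for dc in (-1, 0, 1) for dr in (-1, 0, 1)
            if (dc, dr) != (0, 0) and 0 <= c + dc < COLS and 0 <= r + dr < ROWS]

ADJ = [adjacent(i) for i in range(N)]

best_ratio, best_layout = 0.0, None
for swarm in range(1, 1 << N):               # every nonempty set of Swarm cells
    s = bin(swarm).count("1")
    damage, cost = 480 * s, 640 * s
    # For a fixed Swarm layout and k Boost towers, the best k Boost cells are
    # always the empty cells with the most adjacent Swarm towers.
    gains = sorted((240 * sum(swarm >> j & 1 for j in ADJ[i])
                    for i in range(N) if not swarm >> i & 1), reverse=True)
    if damage / cost > best_ratio:           # k = 0 Boost towers
        best_ratio, best_layout = damage / cost, (swarm, 0)
    for k, gain in enumerate(gains, start=1):
        damage += gain
        cost += 500
        if damage / cost > best_ratio:
            best_ratio, best_layout = damage / cost, (swarm, k)

print(round(best_ratio, 4))  # best damage-per-dollar over all Swarm/Boost layouts
```

As a sanity check, the Swarm-top-and-bottom, Boost-middle-row layout discussed next week scores 8640 damage for $7120 (a ratio of about 1.21), and this search is guaranteed to do at least that well, since that layout is in its search space.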

Level 7: Advancement, Progression and Pacing

Answers to Last Week's Question

If you want to check your answer from last week: well, I must confess I don't know for sure if this is the right answer or not. In theory I could write a program to do a brute-force solution with all twelve towers (if each tower is either Swarm, Boost or Nothing, that is, simply don't build a tower in that location, then it's "only" 3^12 possibilities), but I don't have the time to do that at this moment. If someone finds a better solution, feel free to post here!

By playing around by hand in a spreadsheet, the best I came up with was the top and bottom rows both consisting of four Swarm towers, with the center row holding four Boost towers, giving a damage/cost ratio of 1.21. The two Boost towers in the center give +50% damage to the six Swarm towers surrounding them, thus providing a damage bonus of 1440 damage each, while the two Boost towers on the side support four Swarm towers each, for a damage bonus of 960 each. On average, then, Boost towers provide 1200 damage for a cost of 500, or a damage/cost ratio of 2.4. Each Swarm tower provides 480 damage (3840 damage total across all eight). Each costs 640, for a damage/cost ratio of 0.75. While this is much less efficient than the Boost towers, the Swarm towers are still worth having; deleting any of them makes the surrounding Boost towers less effective, so in combination the Swarm towers are still more cost-efficient than having nothing at all. However, the Boost towers are still much more cost-effective than Swarm towers (and if you look at the other tower types, Boost towers are the most cost-effective tower in the game, hands-down, when you assume many fully-upgraded towers surrounding them). The only thing that prevents Boost towers from being the dominant strategy at the top levels of play, I think, is that you don't have enough cash to make full use of them. A typical game that lasts 40 levels might only give you a few thousand dollars or so, which is just not enough to build a killer array of fully-upgraded towers. Or, maybe there's an opportunity for you to find new dominant strategies that have so far gone undiscovered…

This Week

In the syllabus, this week is listed as "advancement, progression and pacing for single-player games," but I've changed my mind. A lot of games feature some kind of advancement and pacing, even multiplayer games. There are multiplayer co-op games, like the tabletop RPG Dungeons & Dragons, the console action-RPG Baldur's Gate: Dark Alliance, or the PC game Left 4 Dead. Even within multiplayer competitive games, some have the players progressing and getting more powerful during play: players get more lands and cast more powerful spells as a game of Magic: the Gathering progresses, while players field more powerful units in the late game of Starcraft. Then there are MMOs like World of Warcraft that clearly have progression built in as a core mechanic of the game, even on PvP servers. So in addition to single-player experiences like your typical Final Fantasy game, we'll be talking about these other things too: basically, how do you balance progression mechanics?

Wait, What's Balance Again?

First, it's worth a reminder of what "balance" even means in this context. As I said in the intro to this course, in terms of progression, there are three things to consider:

• Is the difficulty level appropriate for the audience, or is the game overall too hard or too easy?
• As the player progresses through the game, we expect the game to get harder to compensate

for the player's increasing skill level (because they are getting better); does the difficulty increase at a good rate, or does it get too hard too fast (which leads to frustration), or does it get harder too slowly (leading to boredom while the player waits for the game to get challenging again)?
• If your avatar increases in power, whether from finding new game objects like better weapons, tools or other toys, gaining new special abilities, or just getting a raw boost in stats like Hit Points or Damage, are you gaining these at a good rate relative to the increase in enemy power? Or do you gain too much power too fast (making the rest of the game trivial after a certain point), or too slowly (requiring a lot of mindless grinding to compensate, which artificially lengthens the game at the cost of forcing the player to re-play content they've already mastered)?

We will consider each of these in turn.

Flow Theory

If you're not familiar with the concept of "flow," read up here from last summer's course. Basically, flow theory says that if the game is too hard for your level of skill you get frustrated, if it's too easy you get bored, but if you're challenged at the peak of your ability you find the game engaging and usually more fun; one of our goals as game designers is to provide a suitable level of challenge to our players.

There are two problems here. First, not every player comes to the game with the same skill level, so what's too easy for some players is too hard for others. How do you give all players the same experience but have it be balanced for all of them? Second, as a player progresses through the game, they get better at it, so even if the game's challenge level remains constant it will actually get easier for the player. How do we solve these problems? Well, that's most of what this week is about.

Why Progression Mechanics?

Before moving on, though, it's worth asking what the purpose is behind progression mechanics to begin with. If we're going to dedicate a full tenth of this course to progression through a game, progression mechanics should be a useful design tool that's worth talking about. What is it useful for?

Ending the game

In most cases, the purpose of progression is to bring the game to an end; for shorter games especially, the idea is that progression makes sure the game ends in a reasonable time frame. Whether you're making a game that's meant to last 3 minutes (like an early-80s arcade game), 30 to 60 minutes (like a family board game), 3 to 6 hours (like a strategic wargame), or 30 to 300 hours (like a console RPG), the idea is that some games have a desired game length, and if you know what that length is, forced progression keeps the game moving along to guarantee that it will actually end within the desired time range. We'll talk more about optimal game length later in this post.

Reward and training for the elder game

In a few specialized cases, the game has no end (MMOs, Sims, tabletop RPGs, or progression-based Facebook games), so progression is used as a reward structure and a training simulator in the early game rather than a way to end the game. This has an obvious problem which can be seen with just about all of these games: at some point, more progression just isn't meaningful. The player has seen all the content in the game that they need to, they've reached the level cap, they've unlocked all of the special abilities in their skill tree, they've maxed their stats, or whatever. In just about all cases, when the player reaches this point, they have to find something else to do, and there is a sharp transition into what's sometimes called the "elder game," where the objective changes from

progression to something else.

What happens in the elder game? In MMOs, the elder game is high-level raids that require careful coordination between a large group, or exploring social aspects of the game like taking on a coordination or leadership role within a Guild, or PvP areas where you're fighting against other human players one-on-one or in teams. In Sim games and FarmVille, the elder game is artistic expression: making your farm pretty or interesting for your friends to look at, or setting up custom stories or skits with your sims. In tabletop RPGs, the elder game is usually finding an elegant way to retire your characters and end the story in a way that's sufficiently satisfying, which is interesting because in these games the "elder game" is actually a quest to end the game!

For players who are used to progression as a goal, since that's what the game has been training them for, this transition can be jarring. The people who enjoy the early-game progression may not enjoy the elder game activities as much, since they're so different (and likewise, some people who would love the elder game never reach it, because they don't have the patience to go through the progression treadmill).

What happens with games that end? In games where progression does end the game, there is also a problem: generally, if you're gaining power throughout the game and this serves as a reward to the player, the game ends right when you're reaching the peak of your power. This means you don't really get to enjoy being on top of the world for very long. If you're losing power throughout the game instead, which can happen in games like Chess, then at the end you just feel like you've been ground into the dirt for the entire experience, which isn't much better.

Peter Molyneux has pointed out this flaw when talking about the upcoming Fable 3, where he insists you'll reach the peak of your power early on, succeed in ruling the world, and then have to spend the rest of the game making good on the promises you made to get there… which is a great tagline. In the interview I saw, it sounded like he was treating this like a simple fix to an age-old problem, but really all he's saying is that he's taking the standard Console RPG progression model, shortening it, and adding an elder game; as we can see here, that's really just replacing one difficult design problem with another, which means that Fable 3 will either live or die on its ability to deliver a solid elder-game experience that still appeals to the same kinds of players who enjoyed reaching that point in the first place. I'm not saying it can't be done, but he's got his work cut out for him. I look forward to seeing if he solves it… because if he does, that will have major applications for MMOs and FarmVille and everything with an elder game in between.

Two Types of Progression

Progression tends to work differently in PvP games compared to PvE games. (I'm just using "PvP" and "PvE" as shorthand here, and if I slip up and refer to PvP as "multi-player" and PvE as "single-player," that is just because those are the most common design patterns.) In PvE games (this includes both single-player games and multi-player co-op), you are progressing through the game to try to overcome a challenge and reach some kind of end state, so for most of these games your progress is seen in absolute terms. In PvP (this includes multi-player PvP like "deathmatch" and also single-player games played against AI opponents), you're trying to win against another player, human or AI, so the meaning of your progression is relative to the progression of your opponents. So that is really the core distinction I'd like to make: games where the focus is either on relative power between parties, or on absolute power with respect to the game's core systems.

Challenge Levels in PvE

When you're progressing through a bunch of challenges within a game, how do you track the level of challenge that the player is feeling, so you know if it's increasing too quickly or too slowly, and whether the total challenge level is just right?

This is actually a tricky question to answer, because the "difficulty" felt by the player is not made up of just one thing here; it's actually a combination of four things, but the player experiences it only as a single "am I being challenged?" feeling. It's like if the dashboard of your car took the gas, current speed, and engine RPMs and multiplied them all together to get a single "happiness" rating, and you only had this one number to look at to try to figure out what was causing it to go up or down.

The four components of perceived difficulty

First of all, there's the level of the player's skill at the game, which generally increases over time. Second, there's the player's power level in the game. Third and fourth, there's the flip side of both of these: the ways the game creates challenges for the player. The game can create skill-based challenges, which require the player to gain a greater amount of skill in the game, for example by introducing new enemies with better AI that make them harder to hit. Or it can provide power-based challenges: adding extra hit points or resource generation, or otherwise just using the same AI but inflating the numbers, and expecting that the player will need to either get better stats themselves or show a higher level of skill in order to compensate.

If we're trying to measure the player perception of how challenged they are, we can think of the challenge level as the sum of the player's skill and power, subtracted from the game's skill challenges and power challenges. This difference gives us the player's perceived level of difficulty. Written mathematically, we have this equation:

PerceivedDifficulty = (SkillChallenge + PowerChallenge) - (PlayerSkill + PlayerPower)

At any rate, when any one of these things changes, either on the player side or the challenge side, the player will feel the game get harder or easier.

Skill and power are interchangeable

You can substitute skill and power, to an extent. Even if the player isn't very good at the game, doubling their Hit Points will still keep them alive longer, increasing their Attack stat will let them kill things more effectively, giving them a Hook Shot lets them reach new places they couldn't before, and so on. Or a player who finds a game too easy can challenge themselves by not finding all of the power-ups in a game, giving themselves less power and relying on their high level of skill to make up for it (I'm sure at least some of you have tried beating the original Zelda with just the wooden sword, to see if it could be done). We do this all the time on the challenge side, by increasing the hit points or attack power or other stats of the enemies in the game (or just adding more enemies in an area) without actually making the enemies any more skilled. Creating a stronger AI to challenge the player is a lot harder and more expensive, so very few games do that (although the results tend to be spectacular when they do; I'm thinking of Gunstar Heroes as the prototypical example).

Example: perceived challenge decreases naturally

How do we use this information? Let's take the player's skill, which increases over time. The more skilled the player is at the game, the easier the challenges will seem. That's significant, because it means that if everything else is equal (that is, if the player's power level, the game's power challenges, and the overall challenge in the game stay the same), then regardless of anything else, over time the player will feel like the game is getting easier, and eventually it'll be too easy. To keep the player's attention once they get better, every game must get harder in some way. (Or at least, every game where the player's skill can increase; there are some games with no skill component at all, and those are exempted here.)
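That natural drift is easy to see in miniature. Here is a minimal Python sketch of the perceived-difficulty equation above; every number is invented purely for illustration, not taken from any real game.

```python
# Minimal sketch of the perceived-difficulty equation from the text.
# All numeric values are made up for illustration.
def perceived_difficulty(skill_challenge, power_challenge, player_skill, player_power):
    return (skill_challenge + power_challenge) - (player_skill + player_power)

# Hold the game's challenges and the player's power constant while skill rises:
for player_skill in (10, 14, 18, 22):
    print(perceived_difficulty(30, 20, player_skill, 15))
```

Each step prints a smaller number: with everything else fixed, rising skill alone makes the same content feel easier, which is exactly the argument for escalating the challenge.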

Changing player skill

Now, you might think the player skill curve is not under our control. After all, players come to our game with different pre-existing skill levels, and they learn at different rates. However, as designers we actually do have some control over this, based on our mechanics:

• If we design deep mechanics that interact in a lot of ways, with multiple layers of strategy, so that mastering the basic game just opens up new ways to look at the game at a more abstract meta-level, the player's skill curve will be increasing for a long time, probably with certain well-defined jumps when the player finally masters some new way of thinking, like when a Chess player first starts to learn book openings, or when they start understanding the tradeoffs of tempo versus board control versus total pieces on the board.
• If our game is more shallow, or has a large luck component, we will expect to see a short increase in skill as the player masters what little they can, and then a skill plateau.
• What if you don't want the player to increase in skill quickly, because you want the game to last longer? If you want the player to learn more slowly, you don't necessarily teach the player how to play your game, or hold their hand through it. There are plenty of valid design reasons to do this intentionally. Or this might simply be the tradeoff for making your game accessible: "a minute to learn, a lifetime to master."
• You can also control how quickly the player learns, based on the number of tutorials and practice areas you provide. One common example is educational games, where part of the core vision is that you want the player to learn a new skill from the game, so you can ramp the player up more quickly so they can increase their skill faster, and then you want them to stop playing so they can go on to learn other things.

Measuring the components of perceived challenge

Player skill is hard to measure mathematically on its own; in any game that includes both, it is combined with player power. For now, I can say that the best way to get a handle on this is to use playtesting and metrics (we'll talk more about this next week): for example, looking at how often players die or are otherwise set back, where these failures happen, how long it takes players to get through a level the first time they encounter it, and so on. Instead of measuring skill directly, you can also use "skill gating," as I've heard it called: by designing your levels to teach the player specific skills in certain areas, you simply offer a set of progressively harder challenges, so you are at least guaranteed that if a player completes one challenge, they are ready for the next. Each challenge is essentially a signpost that says "you must be at least THIS GOOD to pass."

Player power and power-based challenges are much easier to balance mathematically: just compare the player power curve with the game's opposition power curve. You have complete control over both of these, since you control when the player is gaining power, either through level-ups or item drops, and also when their enemies are presenting a larger amount of power to counter them.

What do you want these curves to look like? Part of it depends on what you expect the skill curve to be, because as I said earlier, you can use power as a compensatory mechanism in either direction. The most common pattern I've seen looks something like this: within a single area, like an individual dungeon or level, you start with a sudden jump in difficulty, since the player is entering a new space after mastering the old one. Over time the player's power increases, until they reach the end of the level, where there may be another sudden difficulty jump in the form of a boss, and then after that typically another sudden jump in player power when they get loot from the boss or reach a new area that lets them upgrade their character. Some dungeons split themselves into several parts, with an easier part at the beginning, then a harder part, then a midboss, then a final boss, and so on, but really you can just think of this as the same pattern repeated several times without a change of graphical scenery. String a bunch of these together and that's the power progression in your game: the difficulty jumps initially in a new area, stays constant awhile, has a sudden jump at the end for the boss, then returns; meanwhile the player's power has sudden jumps at the end of an area, with incremental gains along the way as they find new stuff or level up.

One common design pattern, popularized by Valve, is to give the player some new weapon or tool or toy in a safe area where they can just play around with it, then introduce them immediately to a relatively easy area where they are given a series of simple challenges that let them use their new toy and learn all the cool things it can do, and then give them a harder challenge where they have to integrate the new toy into their existing play style and combine it with other toys.
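To make the sawtooth described above concrete, here is a toy simulation that reuses the perceived-difficulty equation: the challenge jumps when a new area starts, spikes for the boss, and the player's power jumps after each boss while skill climbs steadily. Every number is invented for illustration.

```python
# Toy model of the "sawtooth" challenge pattern, built on the
# perceived-difficulty equation. All numbers are made up for illustration.
def perceived(skill_challenge, power_challenge, player_skill, player_power):
    return (skill_challenge + power_challenge) - (player_skill + player_power)

player_skill, player_power = 0.0, 10.0
power_challenge = 12.0
curve = []
for area in range(3):
    power_challenge += 6.0               # sudden difficulty jump entering a new area
    for encounter in range(4):
        boss_spike = 8.0 if encounter == 3 else 0.0   # boss at the end of the area
        curve.append(perceived(0.0, power_challenge + boss_spike,
                               player_skill, player_power))
        player_skill += 0.5              # the player slowly gets better
        player_power += 1.0              # incremental loot / level-ups
    player_power += 4.0                  # big reward after the boss

print(curve)
```

Within each area the perceived difficulty sags as power and skill accumulate, spikes at the boss, then drops again once the boss loot lands, which is the sawtooth shape in miniature.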

Now, this is not the only pattern of power progression, and not even necessarily the best for your game! These will vary based on genre and intended audience; for some games, you may want them to seem easy if they are aimed at an extremely casual audience. For Space Invaders, over the course of a single game, the game's power challenges, player skill and player power are all constant; the only thing that increases is the game's skill challenge (making the aliens start faster and lower to the ground in each successive wave), until eventually they present a hard enough challenge to overwhelm the player.

Rewards in PvE

In PvE games especially, progression is strongly related to what is sometimes called the "reward schedule" or "risk/reward cycle." The idea is that you don't just want the player to progress; you want them to feel like they are being rewarded for playing well. In a sense, you can think of progression as a reward itself: as the player continues in the game and demonstrates mastery, the ability to progress through the game shows the player they are doing well and reinforces that they're a good player.

One of the things we've learned from psychology is that happiness comes from experiencing some kind of gain or improvement, so many little gains produce a lot more happiness than one big gain, even if they add up to the same thing. One corollary here is that you do need to make sure the player notices you're rewarding them (in practice, this is usually not much of a problem). Another corollary is that timing is important when handing out rewards:

• Giving too few rewards, or spacing them out for too long so that the player goes for long stretches without feeling any sense of progression, is usually a bad thing. The player is demoralized, and may start to feel like if they aren't making progress, they're playing the game wrong (even if they're really doing fine).
• Another thing we know from psychology is that a random reward schedule is more powerful than a fixed schedule.
• Ironically, giving too many rewards can also be hazardous. It gives the impression that the game is too easy, giving too many big rewards in a small space of time diminishes their impact, and it actually waters down the player's actual achievements later.

This does not mean that the rewards themselves should be arbitrary; they should be linked to the player's progress through the game, and they should happen as a direct result of what the player did. It is far more powerful to reward the player because of their deliberate action in the game, so that the player feels a sense of accomplishment, than to reward them for something they didn't know about and weren't even trying for. I'll give a few examples:

• Have you ever started a new game on Facebook and been immediately given some kind of trophy or "achievement unlocked" bonus just for logging in the first time? I think this is a mistake a lot of Facebook games make: they give a reward that seems arbitrary, and the danger is reducing the actual, genuine feelings of accomplishment the player gets later. What exactly is the player supposed to feel rewarded for here?
• "Hidden achievements" in Xbox 360 games, or their equivalents on other platforms: if achievements are a reward for skill, how is the player to know what the achievement is if it's hidden? Even more obnoxious, a lot of these achievements are for things that aren't really under player control and that seem kind of arbitrary, like "do exactly 123 damage in a single attack" where damage is computed randomly.
• A positive example would be random loot drops in a typical action game or action-RPG. While these are random, and occasionally the player gets a really cool item, this is still tied to the deliberate player action of defeating enemies, so the player is rewarded, but on a random schedule.
• Another common example: the player is treated to a cut-scene once they reach a certain point in a dungeon. At first you might say this isn't random: it happens at exactly the same point in the game, every time, because the level designer scripted that event to happen at exactly that place! And on multiple playthroughs you'd be correct… but the first time a player experiences the game, they don't know where these rewards are, so from the player's perspective it is not something that can be predicted; it may as well be random.

(Note that you can play around with "randomness" here, for example by having your game track the time between rare loot drops, and having it deliberately give a big reward if the player hasn't seen one in awhile, so that the player is at least getting some great items every now and then. Some games split the difference, with random drops and also fixed drops from specific quests/bosses.)
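The "track the time between rare loot drops and force a big reward after a dry spell" idea is often called a pity timer. Here is a minimal sketch under my own assumptions; the names, drop chance, and threshold are all invented for illustration.

```python
# Sketch of a "pity timer" loot roller: mostly random rare drops, but a
# guaranteed rare after PITY_LIMIT consecutive misses. All numbers invented.
import random

BASE_CHANCE = 0.05      # normal chance of a rare drop per defeated enemy
PITY_LIMIT = 40         # guarantee a rare drop after this many misses

def make_drop_roller(rng):
    misses = 0
    def roll():
        nonlocal misses
        if misses >= PITY_LIMIT or rng.random() < BASE_CHANCE:
            misses = 0
            return True        # rare drop
        misses += 1
        return False           # ordinary drop
    return roll

roll = make_drop_roller(random.Random(7))
drops = [roll() for _ in range(1000)]

# Longest run of misses between rare drops can never exceed the pity limit:
longest_drought = max(len(run) for run in
                      "".join("R" if d else "." for d in drops).split("R"))
print(sum(drops), longest_drought)
```

From the player's perspective the schedule still feels random, but the designer has put a hard cap on how long a drought can last.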

Now I'd like to talk about three kinds of rewards that all relate to progression: increasing player power, level transitions, and story progression.

Rewarding the player with increased power

Progression through getting a new toy/object/capability that actually increases player options is another special milestone. (You can also offer upgrades to their toys, although it's debatable whether you can think of an "upgrade" as just another way of saying "new toy.") You want these spaced out a bit, so the player isn't so overwhelmed by changes that they feel like the whole game is always moving ahead without them. Like we said before, if you give the player access to everything early on, you need to use other kinds of rewards to keep them engaged through the longer final parts of the game where they don't find any new toys. This can be perfectly valid design, though a lot of times I see the player get all the cool toys in the first third or half of the game and then spend the rest of the game finding new and interesting ways to use them. (This is really hard to do in practice.)

On the other hand, if the most fun toy in your game is only discovered 2/3rds of the way through, that's a lot of time the player doesn't get to have fun. Valve made this example famous through the Gravity Gun in Half-Life 2: as the story goes, they initially had you get this gun near the end of the game, but players had so much fun with it that they restructured their levels to give it to the player much earlier.

So, how do you keep the player engaged through the parts of the game where the new toys have stopped coming? Here's a few ways:

• If your mechanics have a lot of depth, you can just present unique combinations of things to the player to keep them challenged and engaged.
• Use other rewards more liberally after you shut off the new toys: more story, more stat increases, more frequent boss fights or level transitions.
• Or you can, you know, make your game shorter. In this day and age, there's no shame in this. Portal and Braid are both well-known for two things: being really great games, and being short. Even at the big-budget AAA level, Batman: Arkham Asylum was one of the best games of last year (both in critical reception and sales), even though I hear it only lasts about ten hours or so.

Rewarding the player with level transitions

Progression through level transitions (that is, progression to a new area) is a special kind of reward, because it makes the player feel like they're moving ahead (and they are!). You want these spaced out too. As a general guideline, a rule of thumb is to offer new levels or areas on a slightly increasing curve, where each level takes a little bit longer than the last. This makes the player feel like they are moving ahead more rapidly at the start of the game, when they haven't become as emotionally invested in the outcome, while a player can tolerate slightly longer stretches between transitions near the end of the game, especially if they are being led up to a huge plot point.

Strangely, a lot of this can be done with just the visual design of the level, which is admittedly crossing from game design into the territory of game art: for example, if you have a really long dungeon the players are traversing, you can add things to make it feel like each region of the dungeon is different, maybe having the color or texture of the walls change as the player gets deeper inside, to give the player a sense that they are moving forward.

Some games take the chance to add some backstory in the middle of levels, in areas that are otherwise uninteresting… although then the danger is that the player is getting a reward arbitrarily, when they feel like they weren't doing anything except walking around and exploring.

Rewarding the player with story progression

Progression through plot advancement is interesting to analyze, because in so many ways the story is separate from the gameplay: in most games, knowing the characters' motivations or their feelings towards each other has absolutely no meaning when you're dealing with things like combat mechanics. And yet, story progression is one of the rewards built into the reward cycle in many games (originally this was restricted to RPGs, but we're seeing story integrated into all kinds of games these days). In practice, a lot of plot advancement tends to happen at the same time as level transitions, which might be a missed opportunity. A common design pattern I see in this case is to split the difference by scripting the plot advancement so it immediately follows a fight of some kind. Even if it's a relatively easy fight, the reward of revealing some additional story immediately after can make the player feel like they earned it; this makes the story feel more integrated with the mechanics.

Additionally, the story itself has a "difficulty" of sorts (we call it "dramatic tension"), so another thing to consider in story-based games is whether the dramatic tension of the story overlaps well with the overall difficulty of the game. In general, you want rising tension in your story while the difficulty curve is increasing, dramatic climaxes (climaxen? climaces?) at the hardest part, and so on. Many games do not manage this: the story climax is at the end, but the hardest part of the game is in the middle somewhere, before you find an uber-powerful weapon that makes the rest of the game easy. If the two don't line up, in practice I think it's easier to change a few numbers than to change the story, if it's one that's scripted. (I guess another way of doing this would be to force the story writers to put their drama in other places to match the game's difficulty curve.) It's really strange to write that you get a better story by using math, all thanks to game balance, but there you are.

Combining the types of rewards into a single reward schedule

Note that a reward is a reward, so you don't just want to space each category of rewards out individually, but also interleave them. In any case, you don't need to have too many overlaps, where you have a level transition, plot advancement, and a power level increase all at once. Level transitions are fixed, so you tend to see the power rewards sprinkled throughout the levels, as rewards between transitions.

Challenge Levels in PvP

If PvE games are all about progression and rewards, PvP games are about gains and losses relative to your opponents. I'll remind you that when I'm saying "power" in the context of progression, I'm talking about the sum of all aspects of the player's position in the game: this includes having more pieces and cards put into play, more resources, better board position, taking more turns or actions, or really anything that affects the player's standing (other than the player's skill level at playing the game). Either directly or indirectly, gaining power relative to your opponents is usually an important player goal. The victory condition for the game is sometimes to reach a certain level of power directly; sometimes it is indirect, where the actual condition is something abstract like Victory Points, and it is the player's power in the game that merely enables them to score those Victory Points. And in some cases the players don't gain power, they lose power, and the object of the game is to get the opponent(s) to run out first, with some kind of tug-of-war between the players as each is trying to get there first.

An interesting property here is that changes in player power are the primary rewards in a PvP game, since everything is relative to the opponents and not compared with some absolute "you must be THIS GOOD to win the game" yardstick. The player feels rewarded because they have gained power relative to their opponents, so they feel like they have a better chance of winning after making a particularly good move. Tracking player power as the game progresses (that is, seeing how power changes over time in a real-time game, or how it changes each turn in a turn-based game) can follow a lot of different patterns in PvP games. In PvE you almost always see an increase in absolute player power level over time (even if the player's power level relative to the challenges around them may increase or decrease, depending on the game); in PvP there are more options to play with.
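One way to make that power tracking concrete is to log the total power on the table each turn and look at the deltas, which also classifies a game in the zero-sum, positive-sum, and negative-sum terms from Game Theory. The function and the four-turn sample numbers below are my own illustrative inventions.

```python
# Classify a game from a per-turn log of total player power on the table.
# The function and all sample numbers are invented for illustration.
def classify(total_power_by_turn):
    deltas = [b - a for a, b in zip(total_power_by_turn, total_power_by_turn[1:])]
    if all(d == 0 for d in deltas):
        return "zero-sum"          # power only changes hands
    avg = sum(deltas) / len(deltas)
    if avg > 0:
        return "positive-sum"      # the game keeps adding power
    if avg < 0:
        return "negative-sum"      # player actions destroy power
    return "mixed"

print(classify([10, 14, 19, 25]))    # Catan-like: dice keep adding resources
print(classify([100, 100, 100, 100]))  # Poker-like: chips just move around
print(classify([78, 70, 61, 55]))    # Chess-like: captures remove material
```

This is only a diagnostic sketch, but logging exactly this kind of curve from playtests is a cheap way to see which category your own design actually falls into.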

Positive-sum, negative-sum, and zero-sum games

An interesting property here is that changes in player power (generally how power changes over time in a real-time game, or how it changes each turn in a turn-based game) can follow a lot of different patterns in PvP games. In PvE you almost always see an increase in absolute player power level over time (even if the player’s power level relative to the challenges around them may increase or decrease, depending on the game). In PvP there are more options to play with, since everything is relative to the opponents and not compared with some absolute “you must be THIS GOOD to win the game” yardstick.

This seems as good a time as any to talk about an important distinction in power-based progression that we borrow from the field of Game Theory: whether the game is zero-sum, positive-sum, or negative-sum. If you haven’t heard these terms before:

• Positive-sum means that the overall power in the game increases over time, and all players can gain power simultaneously without any of their opponents losing power.
• Zero-sum means that the sum of all power in the game is a constant, and can neither be created nor destroyed by players: the only way for me to gain power is to take it from another player, and I gain exactly as much as they lose.
• Negative-sum means that over time, players actually lose more power than they gain; player actions remove power from the game without replacing it.

Settlers of Catan is an example of a positive-sum game: with each roll of the dice, resources are generated for the players.

Monopoly is another example of a positive-sum game, because on average every trip around the board will give the player $200 (and that money comes from the bank, not from other players). While there are a few spaces that remove wealth from the game and are therefore negative-sum (Income Tax, Luxury Tax, a few of the Chance and Community Chest cards, unmortgaging properties, and sometimes Jail), on average these losses add up to less than $200, so on average more wealth is created than removed over time. While you can lose lots of money to other players by landing on their properties, that activity itself is zero-sum (one player is losing money, another player is gaining the exact same amount). This helps explain why Monopoly feels to most people like it takes forever: it’s a positive-sum game, so the average wealth of players is increasing over time, but the object of the game is to bankrupt your opponents, which can only be done through zero-sum methods. Some players use house rules that give jackpots on Free Parking or landing exactly on Go, which make the game even more positive-sum; in other words, the house rules most people play with just increase the positive-sum nature of the game, making the problem worse!

Poker is an example of a zero-sum game, because the only way to win money is to take it from other players, and you win exactly as much as the total that everyone else loses. (If you play in a casino or online where the House takes a percentage of each pot, it actually becomes a negative-sum game for the players.)

Chess is a good example of a negative-sum game. Capturing your opponent’s pieces does not give those pieces to you; it removes them from the board, so generally, over time, your force is getting smaller. Chess has no zero-sum elements where capturing an enemy piece gives that piece to you (although the related game Shogi does work this way, and has extremely different play dynamics as a result). Chess does have one positive-sum element, pawn promotion, but that generally happens rarely and only in the end game, and serves the important purpose of adding a positive feedback loop to bring the game to a close… something I’ll talk about in just a second.

Whether zero-sum, positive-sum or negative-sum, changes in player power are the primary rewards in a PvP game. The player feels rewarded because they have gained power relative to their opponents, so they feel like they have a better chance of winning after making a particularly good move.
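None of this code appears in the original article, but the zero-sum / positive-sum / negative-sum distinction can be made mechanical: total every power change across all players over a play session and look at the sign. The game names are real, but the per-turn numbers below are invented illustrations, not actual playtest data.

```python
# Toy sketch: classify a game's "sum" nature by totaling power changes.
# Each inner list holds one turn's power change for every player.

def classify(turn_deltas):
    """Sum all power changes across all players and turns."""
    total = sum(sum(turn) for turn in turn_deltas)
    if total > 0:
        return "positive-sum"
    if total < 0:
        return "negative-sum"
    return "zero-sum"

# Monopoly-like: passing Go injects $200 from the bank; rent itself is zero-sum.
monopoly_turns = [[200, 0, 0], [0, 200, 0], [-75, 75, 0]]
# Poker-like: every pot is paid to the winner by the losers.
poker_hands = [[30, -10, -20], [-5, 15, -10]]
# Chess-like: captures remove power from the board entirely.
chess_turns = [[0, -3], [-1, 0], [0, -5]]

print(classify(monopoly_turns))  # positive-sum
print(classify(poker_hands))     # zero-sum
print(classify(chess_turns))     # negative-sum
```

The same tally, run over real playtest logs instead of made-up numbers, also tells you *how* positive- or negative-sum the game is, not just which category it falls in.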

Positive and negative feedback loops

Another thing I should mention here is how positive and negative feedback loops fit in with this. I went into a fair amount of detail about these in last summer’s course, so I won’t repeat that here. In case you’re not familiar with these terms, a “positive feedback loop” means that receiving a power reward makes it more likely that you’ll receive more; in other words, it rewards you for doing well and punishes you for doing poorly. A “negative feedback loop” is the opposite, where receiving a power reward makes it less likely you’ll receive more, so it punishes you for doing well and rewards you for doing poorly.

One interesting property of feedback loops is how they affect the player’s power curve. With positive feedback, you tend to have a curve that gets more sharply increasing or decreasing over time. With negative feedback, the power curve of one player usually depends on their opponent’s power: they will increase more when behind, and decrease more when ahead. By contrast, a positive feedback curve doesn’t always take the opponent’s standings into account… unlike negative feedback, it can just reward a player’s absolute power. However, these aren’t hard-and-fast rules… a negative feedback loop can be absolute, and a positive feedback loop can be relative, where you gain power when you’re in the lead. We’ll see why positive feedback is usually independent of the opponents, while negative feedback is usually dependent, if we understand the game design purpose that is served by feedback loops.

The purpose of feedback loops in game design

The primary purpose of positive feedback is to get the game to end quickly. Once a winner has been decided and a player is too far ahead, you don’t want to drag it out, because that wastes everyone’s time. It doesn’t really matter who is ahead: you want all players on an accelerating curve in the end game, and as long as everyone gets more power, it will end faster.

By contrast, the primary purpose of negative feedback is to let players who are behind catch up, so that no one ever feels like they are in a position where they can’t possibly win. In order to truly allow those who are behind to catch up, the game has to be able to tell the difference between someone who is behind and someone who is ahead. A mechanism that basically forces everyone to slow down around the time they reach the end game doesn’t qualify: if everyone is slowed down in exactly the same fashion in the endgame, that doesn’t fulfill this purpose, because someone who was behind at the beginning can still be behind at the end; even though the gap appears to close, they are slowed down as much as anyone else.

Power curves

So, what does a player’s power curve look like in a PvP game? Because you can have either kind of feedback loop with a zero-sum, positive-sum or negative-sum game, here are a few ways you might see a player’s power gain (or loss) over time:

• In a typical positive-sum game, each player is gaining power over time in some way. The rate of gain might be on an increasing, linear, or decreasing curve, and this will look different from game to game.
• In a positive-sum game with positive feedback, the players are gaining power over time, and the more power they gain, the more they have to gain even more with, so it’s an increasing curve (such as a triangular or exponential gain in power over time) for each player. Usually what happens is one player gets an early lead and then keeps riding the curve to victory, unless they make a mistake along the way. Such a game is usually not that interesting for the players who are not in the lead, unless the game is very short.
• In a positive-sum game with negative feedback, the players are still on an increasing curve, but that curve is altered by the position of the other players, reducing the gains for the leader and increasing them further for whoever’s behind. If you subtract one player’s curve from another (which shows you who is ahead, who is behind, and how often the lead is changing), you’ll see that the players’ relative power swings back and forth.
• In a typical zero-sum game, players take power from each other, and the sum of all player power is a constant. A player’s power curve might be increasing, decreasing or constant, so a single player’s power curve can look very different depending on how they’re doing relative to their opponents. In a two-player game, that means you could derive either player’s power curve just by looking at the other one, subtracting one player’s power from the total over time.
• In a zero-sum game with positive feedback, the game may end quickly as one player takes an early advantage and presses it to gain even more of an advantage, taking all of the power from their opponents quickly. Usually games that fall into this category also have some kind of early-game negative feedback built in to prevent the game from coming to an end too early. This can end up as a pretty exciting game of back-and-forth where each player spends some time in the lead before one final, spectacular, irreversible triumph that brings the game to a close.
• In a zero-sum game with negative feedback, we tend to see swings of power that pull the leader back to the center, so if you look at all the players’ power curves simultaneously you’ll see a sort of tangled braid where the players are constantly overtaking one another. This keeps the players close, but also makes it very hard for a single player to actually win; if the negative feedback is too strong, you can easily end in a stalemate where neither player can win.
• In a typical negative-sum game, sort of an inverse of the positive-sum game, each player is losing power over time, and pretty much everything else looks like a positive-sum game turned upside down. In negative-sum games, players are usually eliminated when they lose most or all of their power, and the object is either to be the last one eliminated, or to be in the best standing when the first opponent is eliminated; the idea is generally not for a player to acquire enough power to win, but rather for a player to lose the least power relative to their opponents.
• In a negative-sum game with positive feedback, players who are losing will lose even faster. The more power a player has left over, the slower they’ll tend to lose it, but once they start that slide into oblivion it happens more and more rapidly.
• In a negative-sum game with negative feedback, players who are losing will lose slower, and players who have more power tend to lose it faster, so again you’ll see this “braid” shape where the players will chase each other downward until they start crashing.

A typical design pattern for zero-sum games in particular is to have some strong negative feedback mechanisms in the early game that taper off towards the end (which is pretty much how any negative feedback should work), while positive feedback increases towards the end of the game, giving you larger swings in the endgame.

Applied power curves

Now, maybe you can visualize what a power curve looks like in theory, showing how a player’s power goes up or down over time… but how do you actually make one for a real game? The easiest way to construct a power curve is through playtest data. The hardest part is coming up with some kind of numeric formula for “power” in the game: how well a player is actually doing in absolute terms. This is easier with some games than others. In a miniatures game like HeroClix or Warhammer 40K, each figurine you control is worth a certain number of points, so it is not hard to add your points together on any given turn to get at least a rough idea of where each player stands. In a real-time strategy game like Starcraft, adding a player’s current resources along with the resource costs of all of their units and structures would also give a reasonable approximation of their power over time. For a game like Chess, where you have to balance a player’s remaining pieces, board position and tempo, this is a bit trickier. But once you have a “power formula” you can simply track it for all players over time through repeated playtests to see what kinds of patterns emerge. The raw numbers can easily allow you to chart something like this.
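As a hypothetical sketch of the “power formula” idea, in the spirit of a points-based miniatures game: the unit names and point values below are my own invention, not taken from HeroClix or any real ruleset. You log each player’s surviving force at the end of every turn, then evaluate the formula on each snapshot.

```python
# Sketch: a "power formula" tracked over one playtest, miniatures-style.
# Unit names and point values are made up for illustration.

POINT_VALUES = {"grunt": 20, "sniper": 45, "brute": 80}

def power(force):
    """Power = total point value of everything a player still controls."""
    return sum(POINT_VALUES[unit] for unit in force)

# One playtest: each player's surviving force at the end of each turn.
playtest = [
    (["grunt", "grunt", "sniper", "brute"], ["grunt", "grunt", "sniper", "brute"]),
    (["grunt", "sniper", "brute"],          ["grunt", "grunt", "brute"]),
    (["sniper", "brute"],                   ["grunt", "brute"]),
]

for turn, (p1, p2) in enumerate(playtest, start=1):
    lead = power(p1) - power(p2)  # subtracting curves shows who is ahead
    print(f"turn {turn}: P1={power(p1)}  P2={power(p2)}  lead={lead:+d}")
```

Two patterns fall out of even this tiny log: the combined total drops every turn (the negative-sum signature of a wargame where losses are never replaced), and the difference column is the “who is ahead, and by how much” curve described above.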

Ending the game

One of the really important things to notice as you do this is the amount of time it takes to reach a certain point. You want to scale the game so that it ends in about as much time as you want it to. The most obvious way to do that is by hard-limiting time or turns, which guarantees a specific game length (“this game ends after 4 turns”); sometimes this is necessary and even compelling, but a lot of times it’s just a lazy design solution that says “we didn’t playtest this enough to know how long the game would take if you actually played to a satisfying conclusion.” An alternative is to balance your progression mechanics to cause the game to end within your desired range. You can do this by changing the nature of how positive or negative sum your game is (that is, the base rate of combined player power gain or loss), or by adding, removing, strengthening or weakening your feedback loops. This part is pretty straightforward, if you collect all of the numbers that you need to analyze it. For example, if you take an existing positive feedback loop and make the effect stronger, the game will probably end earlier, so that is one way to shorten the game.

Game phases

I should note that some PvP games have well-defined transitions between different game phases. The most common pattern here is a three-phase structure where you have an early game, a midgame and an endgame, as made famous by Chess, which has many entire books devoted to just a single phase of the game. If you become aware of these transitions (or if you design them into the game explicitly), you don’t just want to pay attention to the player power curve throughout the game, but also how it changes in each phase, and the relative length of each phase. For example, a common finding in game prototypes is that the endgame isn’t very interesting and is mostly a matter of just going through the motions to reach the conclusion that you already arrived at in mid-game.
To fix this, you might think of adding new mechanics that come into play in the endgame to make it more interesting. Or, you might try to find ways to either extend the mid-game or shorten the endgame by adjusting your feedback loops and the positive, negative or zero-sum nature of your game during different phases. Another common game design problem is a game that’s great once the players ramp up in midgame, but the early game feels like it starts too slowly. One way to fix this is to add a temporary positive-sum nature to the early game in order to get the players gaining power and into the midgame quickly. In some games, the game is explicitly broken into phases as part of the core design. One example is the board game Shear Panic, where the scoring track is divided into four regions, and each region changes the rules for scoring which gives the game a very different feel in each of the game’s four phases. In this game, you transition between phases based on the number of turns taken in the game, so the length of each phase is dependent on how many turns each player has had. In a game like this, you could easily change the length of time spent in each phase by just increasing the length of that phase, so it lasts more turns. Other games have less sharp transitions between different phases, and those may not be immediately obvious or explicitly designed. Chess is one example I’ve already mentioned. Another is Netrunner, an asymmetric CCG where one player (the Corporation) is trying to put cards in play and then spend actions to score the points on those cards, and the other player (the Runner) is trying to steal the points before they’re scored. 
After the game had been released, players at the tournament level realized that most games followed three distinct phases: the early game, when the Runner is relatively safe from harm and can try to steal as much as possible; then the mid-game, when the Corporation sets up its defenses and temporarily makes it prohibitively expensive for the Runner to steal anything; and finally the endgame, where the Runner puts together enough resources to break through the Corporation’s defenses to steal the remaining points needed for the win.

Looked at in this way, the Corporation is trying to enter the second phase as early in the game as possible and is trying to stretch it out for as long as possible, while the Runner is trying to stay in the first phase as long as it can and then if it doesn’t win early, the Runner tries to transition from the second phase into the endgame before the Corporation has scored enough to win. How do you balance the progression mechanics in something like this? One thing you can do, as was done with Netrunner, is to put the progression of the game under partial control of the players, so that it is the players collectively trying to push the game forward or hold it back. That creates an interesting meta-level of strategic tension. Another thing you can do is include some mechanics that actually have some method of detecting what phase you’re in, or at the very least, that tend to work a lot better in some phases than others. Netrunner does this as well; for example, the Runner has some really expensive attack cards that aren’t so useful early on when they don’t have a lot of resources, but that help it greatly to end the game in the final phase. In this way, as players use new strategies in each phase, it tends to give the game a very different feel and offers new dynamics as the game progresses. And then, of course, you can use some of these mechanics specifically to adjust the length of each phase in order to make the game progress at the rate you desire. In Netrunner, the Corporation has some cheap defenses it can throw up quickly to try to transition to the mid-game quickly, and it also has more expensive defenses it can use to put up a high bar that the Runner has to meet in order to transition to the endgame. By adjusting both the relative and absolute lengths of each phase in the game, you can make sure that the game takes about as long as you want it to, and also that it is broken up into phases that last a good amount of time relative to each other. 
Ideal game length

All of this assumes you know how long the game (and each phase of the game) should take, but how do you know that? Part of it depends on target audience: young kids need short games to fit their attention spans, and busy working adults want games that can be played in short, bite-sized chunks of time. Otherwise, I think it depends mainly on the level and depth of skill: more luck-based or casual games tend to be shorter, while deeper strategic games can be a bit longer. Another thing to consider is at what point a player is far enough ahead that they’ve essentially won: you want this point to happen at just about the time when the game actually ends, so it doesn’t drag on. For games that never end, like MMOs or Facebook games, you can think of the elder game as a final infinite-length “phase” of the game, and you’ll want to tune the length of the progression portion of your game so that the transition happens at about the time you want it to. How long that is depends on how much you want to support the progression game versus the elder game. For example, if your progression game is very different from your elder game and you see a lot of “churn” (that is, a lot of players leaving the game) when they hit the elder game, and you’re using a subscription-based model where you want players to keep their accounts for as long as possible, you’ll probably want to do two things: work on softening the transition to the elder game so you lose fewer people, and also find ways of extending the progression game (such as issuing expansion sets that raise the level cap, or letting players create multiple characters with different race/class combinations so they can play through the progression game multiple times). Another interesting case is story-based RPGs, where the story often outlasts the mechanics of the game. We see this all the time with console RPGs, where it says “100 hours of gameplay” right on the box.
On the surface that sounds like the game is delivering more value, but in reality, if you’re just repeating the same tired old mechanics and mindlessly grinding for 95 of those hours, all the game is really doing is wasting your time. Ideally you want the player to feel like they’re progressing through learning new mechanics and progressing through the story at any given time; you don’t want the gameplay to drag on any more than you want filler plot that makes the story feel like it’s dragging on. These kinds of games are challenging to design because you want to tune the length of the game to match both story and gameplay, and often that means either lengthening the story or adding more gameplay, both of which tend to be expensive in development. (You can also shorten the story or remove depth from the gameplay, but when you’ve got a really brilliant plot or really inspired mechanics, it can be hard to rip that out of the game just to save a few bucks; also, in this specific case there’s often a consumer expectation that the game is pretty long to give it that “epic” feel, so the tendency is to just keep adding on one side or the other.)

Flow Theory, Revisited

With all that said, let’s come back to flow. At the start of this blog post, I said there were two problems here that needed to be solved. One is that the player’s skill is increasing throughout the game, which tends to shift them from being in the flow to being bored. This is mostly a problem for longer PvE games, where the player has enough time and experience in the game to genuinely get better. The solution, as we’ve seen when we talked about PvE games, is to have the game compensate by increasing its difficulty through play in order to keep the game feeling challenging; this is the essence of what designers mean when they talk about a game’s “pacing.” For PvP games, in most cases we want the better player to win, so this isn’t seen as much of a problem; however, for games where we want the less-skilled player to have a chance and the highly-skilled player to still be challenged, we can implement negative feedback loops and randomness to give an extra edge to the player who is behind. There was another problem with flow that I mentioned, which is that you can design your game at one level of difficulty, but players come to your game with a range of initial skill levels, and what’s easy for one player is hard for another. With PvE games, as you might guess, the de facto standard is to implement a series of difficulty levels, with higher levels granting the AI power-based bonuses or giving the player fewer power-based bonuses, because that is relatively cheap and easy to design and implement. However, I have two cautions here: 1.
If you keep using the same playtesters, they will become experts at the game, and thus unable to accurately judge the true difficulty of “easy mode.” Easy should mean easy, and it’s better to err on the side of making it too easy than making it challenging enough that some players will feel like they just can’t play at all. The best approach is to use a steady stream of fresh playtesters throughout the playtest phase of development (these are sometimes referred to as “Kleenex testers” because you use them once and then throw them away). If you don’t have access to that many testers, at least reserve a few of them for the very end of development, when you’re tuning the difficulty level of “easy.” 2. Take care to set player expectations up front about higher difficulties, especially if the AI actually cheats. If the game pretends on the surface to be a fair opponent that just gets harder because it is more skilled, and then players find out that it’s actually peeking at information that’s supposed to be hidden, it can be frustrating. If you’re clear that the AI is cheating and the player chooses that difficulty level anyway, there are fewer hurt feelings: the player is expecting an unfair challenge, and the whole point is to beat that challenge anyway. Sometimes this is as simple as choosing a creative name for your highest difficulty level, like “Insane.” There are, of course, other ways to deal with differing player skill levels. Higher difficulty levels can actually increase the skill challenge of the game instead of the power challenge. Giving enemies a higher degree of AI, as I said before, is expensive but can be really impressive if pulled off correctly. A cheaper way to do this in some games is simply to modify the design of your levels by blocking off easier alternate paths, forcing the player to go through a harder path to get to the same end location when they’re playing at higher difficulty.
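The “power-based bonuses” approach to difficulty levels can be sketched in a few lines. This is a minimal illustration under assumed numbers: the multipliers, level names, and stat names below are all invented, not from any particular game. Each difficulty level simply scales power quantities (AI damage output, player healing) rather than changing AI skill.

```python
# Sketch: difficulty levels as power-based modifiers (hypothetical values).
# Naming the top level "Insane" signals up front that it is meant to be unfair.

DIFFICULTY = {
    "easy":   {"ai_damage": 0.75, "player_healing": 1.25},
    "normal": {"ai_damage": 1.00, "player_healing": 1.00},
    "insane": {"ai_damage": 1.50, "player_healing": 0.75},
}

def ai_attack(base_damage, level):
    """Damage actually dealt by an AI attack at the chosen difficulty."""
    return base_damage * DIFFICULTY[level]["ai_damage"]

print(ai_attack(40, "easy"))    # 30.0
print(ai_attack(40, "insane"))  # 60.0
```

The appeal is exactly what the text says: the entire difficulty system lives in one small table, tunable late in development once those fresh “easy mode” playtesters have weighed in, with no changes to the AI’s actual decision-making.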
Then there’s Dynamic Difficulty Adjustment (DDA), which is a specialized type of negative feedback loop where the game tries to figure out how the player is doing and then adjusts the difficulty on the fly. You have to be very careful with this, as with all negative feedback loops, because it does punish the player for doing well, and some players will not appreciate that if it isn’t set up as an expectation ahead of time. In most cases this should be voluntary.

Another way to do this is to split the difference: like DDA, try to figure out how the player is doing… but then give the player the option of changing the difficulty level manually. Sid Meier’s Pirates actually gives the player the chance to increase the difficulty when they come into port after a successful mission, and actually gives the player an incentive: a higher percentage of the booty on future missions if they succeed. Another example is to offer the chance to drop the difficulty on the reload screen if you die enough times on a level (which some players might find patronizing, but on the other hand it also gives the player no excuse if they die again anyway); God of War did this, and probably some other games as well. One example of offering dynamic difficulty changes under player control is the game flOw, where the player can go to the next more challenging level or the previous easier level at just about any time, based on how confident they are in their skills.

The equivalent in PvP games is a handicapping system, where one player can start with more power or earn more power over the course of the game, to compensate for their lower level of skill, especially if you have two players with extremely unequal skill at the game. Like DDA, in most cases this should be voluntary: players entering a PvP contest typically expect the game to be fair by default.

Case Studies

With all of that said, let’s look at a few examples to see how we can use this to analyze games in practice.

Chess (and other squad-based wargames)

If I had to put Chess in a genre, I’d call it a squad-based wargame… which is kind of odd, since we normally think of this as an entire army and not a squad. I mean this in the sense that you start with a set force and generally do not receive reinforcements; nor do you have any mechanics about resources, supply or logistics, which you tend to see in more detailed army-level games. Here, we are dealing with a negative-sum game that actually has a mild positive feedback loop built into it: if you’re ahead in pieces, trades tend to be beneficial to you (other things being equal), and once you reach the endgame, certain positions let you basically take an automatic win if you’re far enough ahead. This can be pretty demoralizing for the player who is losing, because they will tend to start losing early and just keep losing more and more as the game goes on. The only reason this works is because against two equally-skilled players, there does tend to be a bit of back-and-forth as players trade off pieces for board control or tempo, so a player that appears to be losing has a number of opportunities to turn that around later before the endgame. Against well-matched opponents you will tend to see a variable rate of decrease as they trade pieces, and if they play about as well we’ll see a game where the conclusion is uncertain until the endgame (and even then, if the players are really well-matched, we’ll see a stalemate).

Space Invaders (and other retro-arcade games)

This game presents the same skill-based challenge to you, wave after wave. The player has absolutely no way to gain power in the game: you start with three lives and that’s all you get, and there are no powerups. On the other hand, you also don’t really lose power in the game, in a sense: whether you have one life remaining or all three, your offensive and defensive capabilities are the same. The player’s goal is not to win, but to survive as long as possible before the increasing challenge curve overwhelms them.

Interestingly, the challenge curve does change over the course of a wave: early on there are a lot of aliens and they move slowly, so it’s very easy to hit a target. Later on you have fewer targets and they move faster, which makes the game more challenging, and of course if they ever reach the ground you lose all of your lives, which makes this a real threat. (You’d think that there would also be a tradeoff in that fewer aliens would have less firepower to shoot at you, but in the actual game I think the overall rate of fire was constant, so this didn’t actually change much as each round progressed; it’s not the total firepower that changes, it’s just how spread out it is.) Then the next wave starts and it’s a little harder than the last one: the difficulty is still decreased initially, but each wave increases the skill challenge by making the aliens move and shoot faster and start lower.

Settlers of Catan (and other progression-based board games)

Here is a game where progression is gaining power, and building is the primary victory condition. When you build something, that gain is permanent; there are only very limited cases where players can actually lose their progress, so it is positive-sum. Catan contains a pretty powerful positive feedback loop, in that building more settlements and cities gives you more resources, which lets you build even more. At first you’d think this means that the first player to get an early-game advantage automatically wins, and if players couldn’t trade with each other that would almost certainly be the case. The ability to trade freely with other players balances this aspect of the game, as trading can be mutually beneficial to both players involved in the trade. There’s also a negative feedback loop: if players who are behind trade fairly with each other and refuse to trade with those in the lead at all (or only at exorbitant rates of exchange), they can catch up fairly quickly. If I were to criticize this game at all, it would be that the early game doesn’t see a lot of rapid progression because the players aren’t generating that many resources yet – and in fact, other games in the series fix this by giving players extra resources in the early game.

Mario Kart (and other racing games)

Racing games are an interesting case, because players are always progressing towards the goal of the finish line, so you get an interesting progression curve where players are all moving towards the end at about the same rate. Most racing video games include a strong negative feedback loop that keeps everyone feeling like they still have a chance, up to the end – usually through some kind of “rubberbanding” technique that causes the computer-controlled cars to speed up or slow down based on how well the players are playing, making it harder for you to fight for first when you’re ahead. The way most racing games do this feels artificial to a lot of players, because it feels like a player’s standing in the race is always being modified by factors outside their control. Games like Mario Kart take this a step further, offering pickups that are weighted so that if you’re behind, you’re likely to get something that lets you catch up, while if you’re ahead you’ll get something less valuable.

Interestingly, this is also the same pattern in stock car racing in real life, but it feels a lot more fair: the person in the lead is running into a bunch of air resistance, so they’re burning extra fuel to maintain their high speed, which means they need more pit stops; the people drafting behind them are much more fuel-efficient and can take over the lead later. This isn’t arbitrary: it’s a mechanic that affects all players equally, and it’s up to each driver how much of a risk they want to take by breaking away from the pack. This is something that feels more fair because the negative feedback is under player control.

Since the game’s length is essentially capped by the number of laps, the players are trying to exchange positions before the race ends. This provides an interesting tension: players in the lead know that they just have to keep the lead for a little bit longer, while players who are behind realize that time is running out and they have to close the gap quickly.

Notice that this is actually the same progression pattern as Catan: both games are positive-sum with negative feedback. And yet, it feels very different as a player. I think this is mostly because in Catan, the negative feedback is under player control, while in Mario Kart a lot of it is under computer control.

Final Fantasy (and other computer/console RPGs)

In these games, the player is mostly progressing through the game by increasing their power level more than their skill level, with the player gaining power to meet the increasing level of challenge in the game; more player skill simply means you progress at a faster rate. Older games on consoles like the NES tended to be even more based in stats and less in skill than today’s games (i.e. they required a lot more grinding than players will put up with today). Most of these games made today do give the player more abilities as they progress through the experience levels, giving them more options and letting them increase their tactical/strategic skills. Winning combats makes your characters more powerful, which in turn makes it easier for you to win even more combats. Progression and rewards also come from plot advancement and reaching new areas. Usually these games are paced on a slightly increasing curve, where each area takes a little more time than the last, which means the actual gains are close to linear.

World of Warcraft (and other MMORPGs)

Massively multiplayer online games have a highly similar progression to CRPGs, except they then transition at the end to this elder game state, as we discussed in an earlier week; keep in mind that taking a character all the way from the start to the end of the game may take dozens of hours, or much longer. The player’s actions in the elder game still do cause progression, so our analysis looks much the same as it does with the more traditional computer RPGs, up until that point.

Nethack (and other roguelikes)

There are the so-called “rogue-like” games, which are kind of like this weird fusion of the leveling-up and stat-based progression of an RPG, and the mercilessness of a retro arcade game. If you’ve never played these games, one thing you should know is that most of them have absolutely no problem killing you dead if you make the slightest mistake. And when I say “dead,” I mean they will literally delete your save file. While there is a win condition, a lot of players simply never make it that far. Therefore, the player’s goal is to stay alive as long as possible and progress as far as possible, like an arcade game, so progress is both a reward and a measure of skill. Nethack looks like an increasing player power and power/skill challenge over the course of a single playthrough, similar to a modern RPG… but over a player’s lifetime, you see these repeated increases punctuated by total restarts, with a slowly increasing player skill curve over time that lets the player survive for longer in each successive game: a ton of hard resets that collectively raise the player’s skill as they die (and then learn how not to die that way in the future).

FarmVille (and other social media games)

If roguelikes are the harsh sadists of the game design world, then the cute fluffy bunnies of the game world would be progression-based “social media” games like FarmVille. These are positive-sum games where you basically click to progress: you’re always gaining something, and it’s nearly impossible to lose any progress at all. FarmVille doesn’t have a level cap that I know of (if it does have one, it’s so ridiculously high that most people will never see it); it’ll happily let you keep earning experience and leveling up… although after a certain point, you don’t really get any interesting rewards for doing so. Eventually, you finish earning all the ribbons or trophies or achievements or whatever, and the reward loop starts rewarding you less and less frequently; it’s not that you can’t progress any further, but that the game doesn’t really reward you for progression as much. So at some point the player decides that further progression just isn’t worth it to them, and they either stop playing or they start playing in a different way. This is the transition to the elder game, though from most of the games I’ve played it is a more subtle transition than with MMOs, and at that point the concept of “progression” loses a lot of meaning.

If You’re Working On a Game Now…
and then you have to start over from scratch with a new character. A single successful playthrough in Nethack looks similar to that of an RPG. If they start playing differently that’s where the elder game comes in. but actually reaching the level of skill to complete the game takes much. permanently. and that is counteracted by the negative feedback loop that your enemies also get stronger. there’s usually a positive feedback loop in that winning enough combats lets you level up.
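The rubberbanding idea is easy to sketch in code. This is a minimal illustration only, not any actual game's implementation; the function name, the progress representation, and the tuning constants are all made up for the example:

```python
def rubberband_speed(base_speed, ai_progress, leader_progress,
                     strength=0.5, max_adjust=0.3):
    """Scale an AI racer's speed based on its gap to the leading human.

    Progress values are fractions of the track (0.0 to 1.0).
    A positive gap (the AI is behind) speeds it up; a negative gap
    (the AI is ahead) slows it down. `strength` and `max_adjust`
    are the knobs a designer would tune during balancing.
    """
    gap = leader_progress - ai_progress
    adjustment = max(-max_adjust, min(max_adjust, strength * gap))
    return base_speed * (1.0 + adjustment)

# An AI far behind the leader gets a speed boost...
assert rubberband_speed(100.0, 0.2, 0.8) > 100.0
# ...while an AI that has pulled ahead is quietly slowed down.
assert rubberband_speed(100.0, 0.9, 0.8) < 100.0
```

The clamp (`max_adjust`) is the designer's safety valve: it keeps the negative feedback from becoming so strong that player skill stops mattering at all.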

If You're Working On a Game Now…

If you're working on a game right now, and that game has progression mechanics, I want you to ask yourself some design questions about the nature of that progression:

• What is the desired play length of this game? Why? Really challenge yourself here – could you justify a reason to make it twice as long or half as long? If your publisher (or whoever) demanded that you do so anyway, what else would need to change about the nature of the game in order to compensate?
• Does the game actually play that long? How do you know?
• If the game is split up into phases, areas, locations or whatever, how long are they? Do they tend to get longer over time, or are there some that are a lot longer (or shorter) than those that come immediately before or after? Is this intentional? Is it justifiable?
• Is your game positive-sum, negative-sum, or zero-sum? Do any phases of the game have positive or negative feedback loops? How do these affect total play time?

Homework

Back in week 2, your "homework" was to analyze a progression mechanic in a game. There, you analyzed the progression of player power against the power level of the game's challenges, in order to identify weak spots where the player would either go through an area too quickly because they're too powerful by the time they get there, or they're stuck grinding because they're behind the curve and have to catch up. It's time to revisit that analysis with what we now know about pacing.

This week, instead of fixing the leveling or progression curve, I'd like you to analyze the reward structure in the game. Consider all kinds of rewards: power-based, level progression, plot advancement, and anything else you can identify as it applies to your game:

• Which of these is truly random (such as randomized loot drops), and which of them only seem random to the player on their first playthrough (they're placed in a specific location by the level designer, but the player has no way of knowing ahead of time how quickly they'll find these things)… and are there any rewards that happen on a fixed schedule that's known to the player?
• How often do rewards happen? Do they happen more frequently at the beginning? Are there any places where the player goes a long time with relatively few rewards? Do certain kinds of rewards seem to happen more frequently than others over time, or at certain times?

Now, look back again at your original predictions: the points (based on your gut reaction and memory) where you felt that the game either went too quickly, or more likely seemed to drag on forever. Do these points coincide with more or fewer rewards in the game? Then ask yourself if the problem could have been solved by just adding a new power-up at a certain place, at certain points.
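If your game is digital (or you have a written play log), the reward-frequency questions above can be answered mechanically. Here is a toy sketch with an entirely made-up reward log; the field names and categories are hypothetical:

```python
from collections import defaultdict

# Hypothetical reward log from one playthrough: (minute, kind of reward).
reward_log = [(1, "power-up"), (3, "plot"), (4, "level-up"),
              (9, "power-up"), (22, "plot"), (24, "level-up")]

# Largest gap between consecutive rewards: candidate "drag" spots.
times = [t for t, _ in reward_log]
gaps = [b - a for a, b in zip(times, times[1:])]
print(max(gaps))  # 13 -- a long dry stretch between minutes 9 and 22

# How often each kind of reward shows up overall.
counts = defaultdict(int)
for _, kind in reward_log:
    counts[kind] += 1
print(dict(counts))
```

Comparing the biggest gaps against the places playtesters said the game dragged is exactly the "do these points coincide" check from the homework.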

Level 8: Metrics and Statistics

This Week

One of the reasons I love game balance is that different aspects of balance touch all these other areas of game development. Last week we saw how the visual design of a level can be used as a game reward or to express progression to the player, which is game design but just this side of art. When we were talking about pseudorandom numbers, that's an area where you get dangerously close to programming. This week, we walk right up to the line where game design intersects business.

This week I'm covering two topics: statistics, and metrics. For anyone who isn't familiar with what these mean, "metrics" just means measurements, so it means you're actually measuring or tracking something about your game. Leaderboards and high score lists are probably the best-known metrics because they are exposed to the players, but we can also use a lot of metrics behind the scenes to help design our games better. "Statistics" is just one set of tools we can use to get useful information from our metrics: once we take these measurements, they don't do anything on their own until we actually look at them and analyze them to learn something. Even though we collect metrics first and then use statistics to analyze them, I'm actually going to talk about statistics first, because it's useful to know how your tools work before you decide what data to capture.

Statistics

People who have never done statistics before think of it as an exact science, if such a thing is possible: it's math, math is pure, and therefore you should be able to get all of the right answers all the time. In reality, it's a lot messier, and you'll see that game designers (and statisticians) disagree about the core principles of statistics even more than they disagree about the core principles of systems design.

What is statistics, and how is it different from probability? In probability, you're given a set of random things, and told exactly how random they are and what the nature of that randomness is, and your goal is to predict what the data will look like when you set those random things in motion. Statistics is kind of the opposite: here you're given the data up front, and you're trying to figure out the nature of the randomness that caused that data.

Probability and statistics share one important thing in common: neither one is guaranteed. Probability can tell you there's a 1/6 chance of rolling a given number on 1d6, but it does not tell you what the actual number will be when you roll the die for real. Likewise, statistics can tell you from a bunch of die rolls that there is probably a uniform distribution, and that you're 95% sure, but there's a 5% chance that you're wrong. That chance never goes to zero.

Statistical Tools

This isn't a graduate-level course in statistical analysis, so all I'll say is that there are a lot more tools than the ones below, and they are outside the scope of this course. What I'm going to put down here is the bare minimum I think every game designer should know to be useful when analyzing metrics in their games.

Mean: when someone asks for the "average" of something, they're probably talking about the mean average (there are two other kinds of average that I know of, and probably a few more that I don't). To get the mean of a bunch of values, you add them all up and then divide by the number of values. Calculating the mean is incredibly useful: it tells you what the ballpark expected value is of something in your game. You can think of the mean as a Monte Carlo calculation of expected value, except that you're computing it from real-world playtest data (actual die-rolls, say) rather than from a computer simulation of a theoretically balanced set of die-rolls.
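As a quick illustration, with simulated rolls standing in for playtest data: the mean of a big pile of 1d6 results lands near the theoretical expected value of 3.5, but almost never exactly on it.

```python
import random

def mean(values):
    # Mean: add all the values up, divide by how many there are.
    return sum(values) / len(values)

random.seed(42)  # fixed seed so the "playtest" is repeatable
rolls = [random.randint(1, 6) for _ in range(10000)]

# Probability says the expected value of 1d6 is exactly 3.5;
# the mean of many real rolls should land close to that.
print(mean(rolls))
assert abs(mean(rolls) - 3.5) < 0.1
```

With only a handful of rolls, the same assertion would fail regularly, which is the sample-size problem we'll keep running into below.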

in case you’re curious. However. the SD is about two-and-a-half. you’ll see the standard deviations are a lot different: for 2d6. but it does allow us to use statistical tools to analyze probability. for 1d11+1. multiply the result by itself). meaning that most of the time you’ll get a result in between 5 and 9. For reasons that you don’t really need to know. the median household income is a lot lower than the mean. To calculate it. subtracting it from the mean. SD of 25 looks a lot more spread out than a mean of 5000. if you have five values. Your target is that it should take about 5 minutes.) On its own. it probably means you’ve got a small minority of players that are just obscenely good at the game and getting these massive scores. until you start rolling. while everyone else who is just a mere mortal is closer to the median. if the median is lower than the mean. statistics doesn’t have anything to say about this. squaring the result (that is. Like we talked about in the week on probability. So. As a different example. as you did in the 5 to 9 range for 2d6. You calculate it by taking each of your data points. the third one is the median. you can use 7 as the expected value. What does that tell us? Most people take between 3 and 7 minutes. both of these will give you a number from 2 to 12. A mean of 50. For example. then take the square root of the whole thing. the median isn’t all that useful. Now. Examples To give you an example. If you’re making a game with a scoreboard of some kind and you see a median that’s a lot lower than the mean. standard deviation at 2 minutes. In a classroom. which means if you’re trying to balance either of these numbers in your game. Which doesn’t actually sound like that big a deal.data rather than a computer simulation. so you’ll get about as many rolls in the 4 to 10 range here. you’re supposed to take the mean of those. let’s consider two random variables: 2d6. 
while a really small SD means your data is all clustered together. add all of those squares together. about whether your values are all weighted to one side. median at 6 minutes. linear experience so this would actually feel like a pretty huge range. and nearly all of your data is within two standard deviations. The other cause for concern is the high median. where most students are clustered around 75 or 80 and then you’ve got some lazy kid who’s getting a zero which pulls down the mean a lot). and I roll 1d11 eleven times and get one of each result… which is wildly unlikely. but it tells you a lot when you compare it with the mean. but let’s just assume that I happen to roll the 2d6 thirty-six times and get one of each result. going through this process gives you a number that represents how spread out the data is. The mean of both of these is 7. which basically means we’ve got a lot of people making a little. SD of 25. and 1d11+1. Median: this is another kind of average. Standard deviation: this is just geeky enough to make you sound like you’re good at math if you use it in normal conversation. but in a lot of games the tutorial is meant to be a pretty standardized. in the US. which means you’re just as likely to be above or below the mean. the SD is about three-and-a-half. then pick the one in the center. which might be good or bad depending on just how much of the level is under player control. you just have a few people who get through the level really fast and they . about two-thirds of your data is within a single standard deviation from the mean. (If you have an even number of values so that there are two in the middle rather than one. it means most of the students are struggling and one or two brainiacs are wrecking the curve (although more often it’s the other way around. which suggests most people actually take longer than 5 minutes. What about the range? The median is also 7 for both. while the 1d11+1 is spread out among all outcomes evenly. 
divide by the total number of data points. and a few people making these ridiculously huge incomes that push up the mean. A relatively large SD means your data is all over the place. But they have a very different nature. the 2d6 clusters around the center. take all your values and sort them from smallest to largest. which makes sense because both of these are symmetric. so how big your SD is ends up being relative to how big your mean is. or if they’re basically symmetric. You measure the mean at 5 minutes. Basically. maybe you’re looking at the time it takes playtesters to get through your first tutorial level in a video game you’re designing.
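You can check the 2d6 versus 1d11+1 claim directly by "rolling" each equally likely outcome exactly once and running the standard-deviation recipe over the results. A small sketch in plain Python:

```python
from itertools import product
from math import sqrt

def mean(xs):
    return sum(xs) / len(xs)

def stdev(xs):
    # Subtract the mean from each point, square it, average the
    # squares over all data points, then take the square root.
    m = mean(xs)
    return sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

# One "roll" per equally likely outcome, as in the text's thought experiment.
two_d6 = [a + b for a, b in product(range(1, 7), repeat=2)]  # 36 results
d11_plus_1 = [r + 1 for r in range(1, 12)]                   # 11 results

assert mean(two_d6) == mean(d11_plus_1) == 7   # same expected value...
print(round(stdev(two_d6), 2))      # 2.42 -- ...but very different spread
print(round(stdev(d11_plus_1), 2))  # 3.16
```

Same mean, same median, same range, yet the standard deviation cleanly separates the two distributions, which is exactly why it earns a spot in the minimal toolkit.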

If you’re just looking for normal. or they were just having so much fun playing around in the sandbox that they didn’t care about advancing to the next level or whatever. if one fear is that players are skipping through the intro dialogue. If you’re looking for edge cases then you want to leave them in and pay close attention.) Outliers When you have a set of data with some small number of points that are way above or below the mean. For example. People who do this for a living have “confidence intervals” where they’ll tell you a range of values and then say something like they’re 95% sure that the actual mean in reality is within such-and-such a range. you might wonder what to do with the outliers. Since these tend to throw off your mean a lot more than the median. it depends. usual play patterns. if you’re trying to analyze the scores people get so you know how to display them on the leaderboards. those are not arising from normal play. Going back to our earlier example of level play times. and you can probably ignore that if it’s just one person. If there were a few thousand tests. by finding one logical explanation for the numbers and ignoring that there could be other explanations as well. This is a lot more detail than most of us need for our day-to-day design work. that suggests those players may have found some kind of shortcut or exploit. we could actually measure the time spent reading dialogues in addition to the total level time. but it’s never actually 100% no matter how many tests you do. players who find one aspect of your tutorial really fun. the name for those is outliers (pronounced like the words “out” and “liars”). This is good news in that you know you’re not having anyone taking four hours to complete it or whatever (otherwise the mean would be a lot higher than the median instead!). which is good to know as you’re designing the other levels. Do you include them? Do you ignore them? Do you put them in their own special group? 
As with most things. and you want to figure out what happened. the more sure you can be. you might investigate further. This is one area where statistics is often misused or flat out abused.bring down the mean. There’s also a third lesson here: I didn’t tell you how many playtesters it took to get this data! The more tests you have. for example. We’ll come back to this concept of metrics design later today. and sometimes there are multiple explanations for the why. but it can’t tell us why. the more accurate your final analysis will be. it is usually worth investigating further to figure out what happened. But if it’s three or four people (still in the vast minority) who did that. The more you have. In either case. that’s a lot better. When you’re doing a statistical analysis. or its implications of game design… but we could spend some time thinking about all the possible answers. or else they’re just skipping through all your intro dialogue or something which is going to get them stuck and frustrated in level 2. but it’s potentially bad news in that some players might have found an unintentional shortcut or exploit. If you only had three tests. If most players take 5 to 7 minutes and you have one player that took 30 minutes. if you have any outliers. Population samples . (How many tests are required to make sure your analysis is good enough? Depends what “good enough” means to you. these numbers are pretty meaningless if you’re trying to predict general trends. that is probably because the player put it on pause or had to walk away for awhile. and then we could collect more data that would help us differentiate between them. because there might be some small number of people who are running into problems… or. In this case. realize that your top-score list is going to be dominated by outliers at the top. it is generally better to discard the outliers because by definition. This suggests another lesson: statistics can tell us that something is happening. 
we have no way of knowing why the median is shorter than the mean. if you see the mean and median differing by a lot it’s probably because of an outlier. or something else. if most players take 5 to 7 minutes to complete your tutorial but you notice a small minority of players that get through in 1 or 2 minutes.
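The "outliers throw off the mean much more than the median" claim is easy to demonstrate with a made-up set of tutorial completion times:

```python
def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    mid = len(s) // 2
    # Even count: take the mean of the two middle values.
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

# Hypothetical tutorial completion times, in minutes.
times = [5, 6, 6, 7, 5, 6, 7, 5, 6, 7]
print(mean(times), median(times))  # 6.0 6.0 -- mean and median agree

# One tester walks away with the game paused for half an hour.
times.append(30)
print(round(mean(times), 1), median(times))  # 8.2 6 -- the mean jumps,
                                             # the median barely moves
```

That single 30-minute outlier drags the mean up by more than two minutes while leaving the median untouched, which is why a big mean/median gap is a decent outlier alarm.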

I’d do this a bunch of times going through most of the deck. I already mentioned one frequent problem. Problem: the people playing the game are probably not casual gamers. Programming time is always limited. this one time I put together a tournament deck for a friend. The problem: the focus group was made up of all males. This is easier in some companies than others. the better. Basically. and there’s nothing you can do about that. You also see things like this happening in the rest of the world. so this is not really a representative sample of your target market. which is not having a large enough sample. but that hasn’t stopped it from being the subject of many industry conversations… not just about the role of women in games. so most of the time it seemed like I was doing okay by the end… but I never actually stopped to count. and localization. because their focus group said they preferred a male protagonist. the better. and rightly so. Activision denies all of this. while you might realistically be able to do a fraction . To tell if I had the right ratio of land to spells. and then I’d reshuffle and do it again. A more recent example: in True Crime: Hong Kong. But in reality. You have everyone on the development team play through the game to get some baseline data on how long it takes to play through each level and how challenging each level is. a video game company can release an early beta and get hundreds or thousands of plays. depending on the type of game. And that’s if the decision isn’t made for you by your producer or your publisher. or the questions were inherently biased by the person setting it up. and when we actually went through the deck and counted. • For video games. of course. publisher Activision allegedly demanded that the developers change the main character from male to female. Your analysis is only as good as your data! Even if you use statistics “honestly” there’s still problems every game designer runs into. 
The more data points you collect. I’ll give you an example: back when I played Magic: the Gathering regularly. which my friend lost badly. After the tournament. but about the use of focus groups and statistics in game design. then I’d take some land out or put some in depending on how many times I had too much or too little. you are at the mercy of your playtesters. • For tabletop games. but my sample size was way too small to draw any meaningful conclusions. The real problem here was that I was trying to analyze the number of land through statistical methods. particularly in governmental politics. The more data points you have. and you want to have as many playtests as possible so that the random noise gets filtered out. when you’re collecting playtest data. and playtesting: tasks that are pushed off towards the end of the development cycle until it’s too late to do anything useful. as a deliberate attempt to further their agenda rather than to actually find out the real-world truth. you want to do your best to recruit playtesters who are as similar as possible to your target market. of course. they reported to me that they were consistently not drawing enough land. Here’s another example: suppose you’re making a game aimed at the casual market. there were only 16 lands in a deck of 60 cards! I took a lot of flak from my friend for that.Here’s another way statistics can go horribly wrong: it all comes down to what and who you’re sampling. but in some places “metrics” falls into the same category as audio. At the time I figured this was a pretty good quick-and-dirty way to figure out how much land I needed. you are at the mercy of your programmers. where a lot of people have their own personal agenda and they’re willing to warp a study and use statistics as a way of proving their point. you know. the actual game mechanics you’ve designed. I’m sure this has happened before at some point in the past. 
But it just so happened that I wasn’t noticing that the land was actually very evenly distributed and not clustered. for a tournament that I couldn’t play in but they could. so at some point you’ll have to make the call between having your programming team implement metrics collection… or having them implement. I shuffled and dealt an opening hand and played a few mock turns to see if I was getting enough. The programmers are the ones who need to spend time coding the metrics you ask for.
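The sample-size mistake in the Magic story is easy to reproduce in simulation. This sketch deals opening hands from a 16-land, 60-card deck; the deck representation is simplified, and the "roughly 2.5 to 3 lands" target is the common rule-of-thumb for a 60-card deck, not anything from the story itself:

```python
import random

def lands_in_hand(deck, hand_size=7):
    # Shuffle and deal one opening hand, then count the lands.
    return random.sample(deck, hand_size).count("land")

# 16 lands in a 60-card deck -- the ratio from the story.
deck = ["land"] * 16 + ["spell"] * 44

random.seed(1)

# A handful of trial deals, like shuffling up a few times by hand:
few = [lands_in_hand(deck) for _ in range(5)]
print(few)  # counts swing a lot from hand to hand

# Thousands of deals expose the truth: on average only about
# 7 * 16/60 = 1.87 lands per opening hand, well short of the
# roughly 2.5 to 3 that deckbuilding rules of thumb aim for.
many = [lands_in_hand(deck) for _ in range(20000)]
print(sum(many) / len(many))
```

Five hand-dealt samples can easily all look fine; twenty thousand cannot. That is the whole argument for "the more data points you collect, the better" in one experiment.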

With a smaller sample, your playtest data is a lot more suspect.

• For any kind of game, you need to be very clear ahead of time what it is you need measured, and in what level of detail. If you run a few hundred playtests and only find out afterwards that you needed to collect certain data from the game state that you weren't collecting, you'll have to do those tests over again. It helps if you try to envision in advance what the likely outcomes of your analysis might be, and what they'll look like; but since by definition you don't always know exactly what you're looking for or what you expect the answer to be, the only thing to do is to recognize that, just like design itself, playtesting with metrics is an iterative process.
• Also for any kind of game, you need to remember that it's very easy to mess things up accidentally and get the wrong answer. Unlike probability, there aren't as many sanity checks to make the wrong numbers look wrong. So when you're doing a statistical analysis, proceed with caution, and use every method you can find of independently verifying your numbers.

Correlation and causality

Finally, one of the most common errors with statistics happens when you notice some kind of correlation between two things. "Correlation" just means that when one thing goes up, another thing always seems to go up (a positive correlation) or down (a negative correlation) at the same time. Recognizing correlations is useful, but a lot of times people assume that just because two things are correlated, one causes the other – and that is something that you cannot tell from statistics alone.

Let's take an example. Say you notice when playing Puerto Rico that there's a strong positive correlation between winning and buying the Factory building: out of 100 games, in 95 of them the winner bought a Factory. The natural assumption is that the Factory must be overpowered, and that buying it is causing you to win. But you can't draw this conclusion by default; here are some other equally valid conclusions, based only on this data:

• Maybe it's the other way around: winning causes the player to buy a Factory. That sounds odd, but maybe the idea is that a Factory helps the player who is already winning, so it's that being strongly in the lead causes the player to buy a Factory for some reason. The Factory is just a symptom, and not the root cause.
• Or, it could be that something else is causing a player both to win and to buy a Factory. Maybe some early-game purchase sets the player up for buying the Factory, and that early-game purchase also helps the player to win. Without additional information, you can't tell.
• Or, the two could actually be uncorrelated, and your sample size just isn't large enough for the Law of Large Numbers to really kick in. As we learned when looking at probability, if you take a lot of random things you'll be able to see patterns: one thing is that you can expect to see unlikely-looking streaks, but another is that if you take a bunch of sets of data, some of them will probably be randomly correlated. We actually see this all the time in popular culture, where two things that obviously have no relation are found to be correlated anyway: the Redskins football game predicting the next Presidential election in the US, a groundhog seeing its shadow supposedly predicting the remaining length of Winter, or an octopus that predicts the World Cup winner. If you don't believe me, try rolling two separate dice a few times and then computing the correlation between those numbers. I bet it's not zero!

Statistics in Excel

Here's the good news: while there are a lot of math formulas here, you don't actually need to know any of them. Excel will do this for you; it has all these formulas already. Here are a few useful ones:

1. AVERAGE: given a range of cells, this calculates the mean. (You could also take the SUM of the cells and then divide by the number of cells, but AVERAGE is easier.)
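You can take the two-dice challenge without Excel. The sketch below computes the Pearson correlation coefficient (the same quantity Excel's CORREL returns) for two independently rolled dice; with only a small sample, the result is essentially never exactly zero:

```python
import random

def correlation(xs, ys):
    # Pearson correlation coefficient, as computed by Excel's CORREL.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(7)
die_a = [random.randint(1, 6) for _ in range(100)]
die_b = [random.randint(1, 6) for _ in range(100)]

# Two dice that provably have no relationship still show some
# nonzero correlation in a finite sample: "I bet it's not zero!"
r = correlation(die_a, die_b)
print(round(r, 3))
```

The point of the exercise: a small nonzero correlation is the *expected* result for unrelated data, so a weak correlation in your playtest numbers proves nothing by itself.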

the cells and then divide by the number of cells, but AVERAGE is easier.
2. MEDIAN: given a range of cells, this calculates the median.
3. STDEV: given a range of cells, this gives you the standard deviation.
4. CORREL: you give this two ranges of cells, and it gives you the correlation between the two sets of data. For example, you could have one column with a list of final game scores, and another column with a list of scores at the end of the first turn, to see if early-game performance is any kind of indicator of the final game result (if so, this might suggest a positive feedback loop in the game somewhere). The number Excel gives you from the CORREL function ranges between -1 (perfect negative correlation) to 0 (uncorrelated) to +1 (perfect positive correlation).

Is there any good news?

At this point I've spent so much time talking about how statistics are misused that you might be wondering if they're actually useful for anything. And the answer is yes: statistics let you draw useful conclusions… if you ask the right questions, and if you collect the right data.

Here's an example of a time when statistics really helped a game I was working on. I worked for a company that made this online game, and one of our programmers got bored one day and made a trivia bot: just this little script that would log into our server with its own player account, send a trivia question every couple of minutes, and then parse the incoming public chat to see if anyone said the right answer. And it was popular, as goofy and stupid and simple as it was, because it was such a short, immediate casual experience.

Now, the big question is: what happened to the player population, and what happened to the actual, real game that players were supposed to be playing (you know, the one where they would log in to the chat room to find someone to challenge, before they got distracted by the trivia bot)? Some players loved the trivia bot; it gave them something to do in between games. Others hated the trivia bot; they claimed that it was harder to find a game, because everyone who was logged in was too busy answering dumb trivia questions to actually play a real game.

Who was right? Intuition failed, because everyone's intuition was different. Listening to the players failed, because the vocal minority of the player base was polarized, and there was no way to poll those who weren't in the vocal minority. Math failed, because the trivia bot wasn't part of the game, let alone part of the cost curve.

Could we answer this with statistics? We sure could, and we did! Since our server tracked every player login, logout and game start already, we had this data; all we had to do was some very simple analysis:

1. Measure the total number of logins per day.
2. Measure the total number of actual games played.
3. Track how these things changed over time.

This was simple enough that it didn't even require much analysis, and we found that our online population was falling and people weren't playing as many games. That part was expected; the numbers were all falling gradually since the time of the last real release, because we hadn't released an update in awhile. (With no updates, I've found that an online game loses about half of its core population every 6 months or so; at least, that was my experience.) But what we didn't expect: the trivia bot actually caused a noticeable increase in both total logins and number of games played. It turned out that players were logging in and playing with the trivia bot, but as long as they were there, they were also playing games with each other! That was a conclusion that would have been impossible to reach in any kind of definitive way without analysis of the hard data. And it taught us something really important about online games: more players online, interacting with each other, is better… even if they're interacting in nonstandard ways.

Metrics

If you have a question that can't be answered with intuition alone, and it can't be answered just through the math of your cost or progression curves, then as you might guess, metrics are the answer.
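The "very simple analysis" in the story above really does amount to a couple of counters over an event log. A minimal sketch in Python, with an invented log format and made-up dates for illustration:

```python
from collections import defaultdict
from datetime import date

# Hypothetical server log: one (event, day) row per login or game start.
events = [
    ("login", date(2010, 7, 1)), ("login", date(2010, 7, 1)),
    ("game_start", date(2010, 7, 1)),
    ("login", date(2010, 7, 2)),
    ("game_start", date(2010, 7, 2)), ("game_start", date(2010, 7, 2)),
]

logins_per_day = defaultdict(int)
games_per_day = defaultdict(int)
for event, day in events:
    if event == "login":
        logins_per_day[day] += 1
    elif event == "game_start":
        games_per_day[day] += 1

# Comparing both series before and after a change (like the trivia bot's
# launch date) shows whether the change actually moved the numbers.
```

The analysis is just "plot both series against the date of the change"; no heavier statistics are needed for a question this simple.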

Or maybe not. Here's a common pattern in artistic and creative fields, particularly things like archaeology or art preservation or psychology or medicine, where the work requires a certain amount of intuition but at the same time there is still a "right answer" or "best way" to do things. The progression goes something like this:

• Practitioners see their field as a "soft science". They do learn how things work, but it's mostly through trial and error; they don't know a whole lot about best principles or practices.
• Someone creates a technology that seems to solve a lot of these problems algorithmically. Practitioners rejoice: finally, we're a hard science! No more guesswork! Most younger practitioners abandon the "old ways" and embrace "science" as a way to solve all their field's problems. The old guard, meanwhile, sees it as a threat to how they've always done things, and eyes it skeptically.
• The limitations of the technology become apparent after much use. Widespread disillusionment occurs as people no longer want to trust their instincts because theoretically technology can do it better, but people don't want to trust the current technology because it doesn't work that great yet.
• Eventually, people settle into a pattern where they learn what parts can be done by computer algorithms, and what parts need an actual creative human thinking. The young turks acknowledge that this wasn't the panacea they thought, and that while some day the tech might answer everything, that day is a lot farther off than it first appeared. The old guard acknowledge that it's still a lot more useful than they assumed at first. Everyone kisses and makes up, and the field becomes stronger as the best parts of each get combined. But learning which parts go best with humans and which parts are best left to computers is a learning process that takes awhile.

Currently, game design seems to be just starting Step 2. Love it or hate it, the industry is falling in love with metrics. We hear of Zynga changing the font color from red to pink, which generates exponentially more click-throughs from players to try out other games. We hear about MMOs that are able to solve their game balance problems by looking at player patterns, before the players themselves learn enough to exploit them. We're hearing more and more people anecdotally saying why metrics and statistical analysis saved their company. We have entire companies that have sprung up solely to help game developers capture and analyze their metrics.

At any rate, right now there seem to be three schools of thought on the use of metrics:

• The Zynga model: design almost exclusively by metrics. 60 Million monthly active unique players laugh at your feeble intuition-based design.
• Rebellion against the Zynga model: metrics are easy to misunderstand, easy to manipulate, and are therefore dangerous and do more harm than good. If you design using metrics, you push yourself into designing the kinds of games that can be designed solely by metrics, which pushes you away from a lot of really interesting video game genres. If you measure player activity and find out that more players use the login screen than any other in-game action, that doesn't mean you should add more login screens to your game out of some preconceived notion that if a player does it, it's fun. And sometimes you have to make a game a little worse in one way before it gets a lot better in another, and metrics won't ever let you do that.
• The moderate road: metrics have their uses, but intuition also has its uses. Metrics help you take a good game and make it just a little bit better, by helping you explore the nearby design space; they help you tune your game to find local "peaks" of joy. However, sometimes you need to take broad leaps in unexplored territory to find the global "peaks", and metrics alone will not get you there.

By the time this whole thing shakes out, I'll go on record predicting that at least one company that relies entirely on metrics-driven design will fail, badly, because they will be looking so hard at the numbers that they'll forget that there are actually human players out there who are trying to have fun in a way that can't really be measured directly. (Then again, I've been wrong before.)

Think about it for a bit and decide where you stand. What about the people you work with on a team (if you work with others on a team)?

How much to measure?

Suppose you want to take some metrics in your game so you can go back and do statistical analysis to improve your game balance. What metrics do you actually take – that is, what exactly do you measure? There are two schools of thought that I've seen.

One is to record anything and everything you can think of: log it all, mine it later. The idea is that you'd rather collect too much information and not use it, than to not collect a piece of critical info and then have to re-do all your tests.

Another school of thought is that "record everything" is fine in theory, but in practice you either have this overwhelming amount of extraneous information from which you're supposed to find this needle in a haystack of something useful, or potentially worse, you mine the heck out of this data mountain to the point where you're finding all kinds of correlations and relationships that don't actually exist. By this way of thinking, instead you should figure out ahead of time what you're going to need for your next playtest, measure that and only that, and add more metrics later if a new question occurs to you that requires some data you aren't tracking yet; that way you don't get confused when you look at the wrong stuff in the wrong way later on.

Personally, I think a lot depends on what resources you have. If you're at a large company with an army of actuarial statisticians with nothing better to do than find data correlations all day, then sure, go nuts with data collection and you'll probably find all kinds of interesting things you'd never have thought of otherwise. If it's you and a few friends making a small commercial game in Flash, you probably don't have time to do much in the way of intensive data mining, so you're better off just figuring out the useful information you need ahead of time. Again, think about where you stand on the issue.

More to the point, whether you say "just get what we need" or "collect everything we can," neither of those is an actual design. Like game design itself, metrics is a second-order problem: at some point you need to specify what, exactly, you need to measure.

What specific things do you measure?

That's all fine and good, but most of the things that you want to know about your game, as a designer, you can't actually measure directly, so instead you have to figure out some kind of thing that you can measure that correlates strongly with what you're actually trying to learn.

Example: measuring fun

Let's take an example. In a single-player Flash game, you might want to know if the game is fun or not, but there's no way to measure fun. What correlates with fun, that you can measure? One thing might be if players continue to play for a long time, or if they come back to play multiple sessions (especially if they replay even after they've "won"), or if they spend enough time playing to finish the game and unlock all the achievements. These are all things you can measure, and whether people play, how often they play, and how long they play are (hopefully) correlated with how much they like the game.

Now, keep in mind this isn't a perfect correlation. Players might be coming back to your game for some other reason, like if you've put in a crop-withering mechanic that punishes them if they don't return. Or maybe they found it incredibly enjoyable, but they beat the game and now they're done, and you didn't give a reason to continue playing after that. But at least we can assume that if a player keeps playing, there's probably at least some reason, and that is useful information. And if lots of players stop playing your game at a certain point and don't come back, that tells us that point in the game is probably not enjoyable and may be driving players away. (Unless the point where they stopped playing was the end, of course.) So it all depends on when, and where, players stop. Player usage patterns are a big deal.
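As a sketch of the "where do players stop playing" measurement described above (the checkpoint names and counts here are invented):

```python
from collections import Counter

# Hypothetical logs: the furthest point each player reached before
# they stopped playing.
last_seen = ["level1", "level2", "level2", "level5", "level2", "level3"]

dropoffs = Counter(last_seen)
hotspot, quitters = dropoffs.most_common(1)[0]
# Here hotspot == "level2": half the players quit there, which flags it
# as a candidate pain point worth playtesting directly.
```

The raw counter doesn't tell you *why* players quit at that point; it only tells you where to aim your next playtest.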

there’s the question of whether player opinion is more or less meaningful than actual play patterns. In theory. it probably means you’ve got a good game. what does that mean? Or if a player admits they haven’t even played the game. particularly on Flash game portals. but your MAU is around 3000.long they play are (hopefully) correlated with how much they like the game. if a lot of your players keep coming back. I don’t know of any actual studies that have checked this. but those players all log in at least once per day. just multiply Daily by 30 or so to get Monthly. So that’s the range. but if I had to guess I’d assume that there is some correlation but not a lot. Users that give ratings are not a representative sample. Why is it important to know this number? For one thing. suppose instead that you have a game that people play once and never again. how much overlap there is between different sets of daily users). the two buzzwords you hear a lot are Monthly Active Uniques and Daily Active Uniques (MAU and DAU). so while actual quality probably falls along a bell curve you tend to have more 5-star and 1-star ratings than 3-star. I always had to wonder about those opinion polls that would say something like 2% of poll respondents said they had no opinion… like. which should be flatly impossible. though I’d be interested to see the results. that is. we’d also expect that a game with high player ratings would also have a good MAU/DAU ratio. but in reality the two will be different based on how quickly your players burn out (that is. it means you’re more likely to make money on the game. dormant accounts belonging to people who stopped playing. For another. So if you see any numbers like this from analytics websites. This is computed by taking DAU/MAU (instead of MAU/DAU) and multiplying by 100 to get a percentage. so you only have 100 players. So if you divide MAU/DAU. meaning on average. A word of warning: a lot of metrics. The “Unique” part is also important. 
so your MAU/DAU is about 30 in this case. Now. In theory.33% (1/30 of your monthly players logging in each day) to 100% (all of your monthly players log in every single day). Now. we would hope that higher ratings mean a better game. For example. make sure you’re clear on how they’re computing the numbers so you’re not comparing apples to oranges. so clearly the numbers somewhere are being messed with (maybe they took the Dailies from a different range of dates than the Monthlies or something). For example. because it makes sure you don’t overinflate your numbers by counting a bunch of old. that tells you something about how many of your players are new and how many are repeat customers. if a player logs into a game every day for months on end but rates it 1 out of 5 stars. so you get 100 new players every day but they never come back. but your marketing is good. suppose you have a really sticky game with a small player base. And then some people compute this as a %. what percentage of your player pool logs in on a given day. Also. For games that require players to come back on a regular basis (like your typical Facebook game). and your average DAU is also going to be 100. but they’re still giving it 4 out of 5 stars based on… I don’t know… its . The “Active” part of that is important. since one obsessive guy who checks FarmVille ten times a day doesn’t mean he counts as ten users. so your MAU/DAU is 1. they tend to have strong opinions or else they wouldn’t bother rating (seriously. but if that same individual comes in and is “just looking” every single day. an individual who just drops in to window-shop may not buy anything. which is not what you’d expect if everyone rated the game fairly. that the two would be correlated. normally you’d think Monthly and Daily should be equivalent. might use different ways of computing these numbers so that one set of numbers isn’t comparable to another. they’re probably going to buy something from you eventually. 
30 or 31 depending on the month (representing a game where no one ever plays more than once). which should range from a minimum of about 3. Another metric that’s used a lot. like the ones Facebook provides. Here your average DAU is still going to be 100. Here your MAU is going to be 100. who calls up a paid opinion poll phone line just to say they have no opinion?). because you’ve got the same people stopping by every day… sort of like how if you operate a brick-and-mortar storefront. for one thing. I saw one website that listed the “worst” MAU/DAU ratio in the top 100 applications as 33-pointsomething. MAU/DAU goes between 1 (for a game where every player is extremely loyal) to 28. is to go ahead and ask the players themselves to rate the game (often in the form of a 5-star rating system).
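The MAU/DAU arithmetic above is easy to script once you log one row per login. A minimal sketch, assuming each row carries a player id and a date (all data invented):

```python
from datetime import date

# Hypothetical login log: (player_id, day). Duplicate rows are fine;
# "unique" means we de-duplicate players, not logins.
logins = [
    ("alice", date(2010, 7, 1)), ("alice", date(2010, 7, 1)),  # many logins...
    ("alice", date(2010, 7, 2)),                               # ...count once per day
    ("bob",   date(2010, 7, 1)),
]

def active_uniques(logins, days):
    # Number of distinct players seen on any of the given days.
    return len({player for player, day in logins if day in days})

july = {date(2010, 7, n) for n in range(1, 32)}
mau = active_uniques(logins, july)                 # 2 unique players this month
dau = active_uniques(logins, {date(2010, 7, 1)})   # 2 unique players on July 1
stickiness = 100.0 * dau / mau                     # DAU/MAU as a percentage
```

The set comprehension is what enforces both the "Active" and the "Unique" parts: dormant accounts never appear in the log, and repeat logins collapse to one entry.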

Also, players tend to not rate a game while they're actively playing, only (usually) after they're done, which probably skews the ratings a bit (depending on why they stopped playing). So it's probably better to pay attention to usage patterns than player reporting, especially if that reporting isn't done during the game, from within the game, in a way that you can track.

Another example: measuring difficulty

Player difficulty, like fun, is another thing that's basically impossible to measure directly, but what you can measure is progression, and failure to progress. Measures of progression are going to be different depending on your game. For a game that presents skill-based challenges, like a retro arcade game, you can measure things like how long it takes the player to clear each level, how many times they lose a life on each level, where and how they lose a life, and so on. Collecting this information makes it really easy to see where your hardest points are, and if there are any unintentional spikes in your difficulty curve.

Now, I've been talking about video games, and in fact most of this is specific to online games. For video games that are not online, you can still capture metrics based on player usage patterns, but actually uploading them anywhere is something you want to be very clear to your players about, because of privacy concerns. I understand that Valve does this for their FPS games: starting with Half-Life 2 Episode 2 they actually have live reporting and uploading from players to their servers, and they have displayed their metrics on a public page (which probably helps with the aforementioned privacy concerns, because players can see for themselves exactly what is being uploaded and how it's being used). And importantly, they actually have a visualizer tool that will not only display all of this information, but actually plot it overlaid on a map of the level, so you can see where player deaths are clustered.

The equivalent in tabletop games is a little fuzzier. Sometimes you can try to rely on interviews with players, but that's dangerous, because player memory of these things is not good (and even if it is, not every playtester will be completely honest with you). Instead, as the designer you basically want to be watching people's facial expressions and posture to see where in the game they're engaged and where they're bored or frustrated.

Yet another example: measuring game balance

What if instead you want to know if your game is fair and balanced? That's not something you can measure directly either. For example, suppose you have a strategy game where each player can take one of four different actions each turn. You could record each turn, what action each player takes, and how it affects their respective standing in the game. Or, suppose you have a CCG where players build their own decks, or a Fighting game where each player chooses a fighter, or an RTS where players choose a faction, or an MMO or tabletop RPG where players choose a race/class combination. Two things you can track here are which choices seem to be the most and least popular, and also which choices seem to have the highest correlation with actually winning; both popularity and correlation with winning are two useful metrics here.

Note that this is not always the same thing: sometimes the big, flashy, cool-looking thing that everyone likes because it's impressive and easy to use is still easily defeated by a sufficiently skilled player who uses a less well-known strategy. And sometimes, dominant strategies take months or even years to emerge through tens of thousands of games played. Interestingly, the Necropotence card in Magic: the Gathering saw almost no play for six months or so after release, until some top players figured out how to use it: it had this really complicated and obscure set of effects… but once people started experimenting with it, they found it to be one of the most powerful cards ever made.

So, if your game is online and you have a way of numerically tracking each player's standing, you can track just about any number attached to any player, action or object in the game. You can track how these correlate to certain game events or board positions, and this can tell you a lot about both normal play patterns, and also the relative balance of strategies, objects, and anything else.
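The two balance metrics just described, popularity and correlation with winning, can be tallied straight from match records. A sketch with invented faction names and results:

```python
from collections import defaultdict

# Hypothetical match records: (faction_chosen, won_the_match).
matches = [("orcs", True), ("orcs", True), ("orcs", False),
           ("elves", False), ("elves", True), ("humans", False)]

picks = defaultdict(int)   # popularity
wins = defaultdict(int)    # success
for faction, won in matches:
    picks[faction] += 1
    wins[faction] += int(won)

# Map each faction to (pick rate, win rate).
rates = {f: (picks[f] / len(matches), wins[f] / picks[f]) for f in picks}
# A popular choice with a mediocre win rate is merely "flashy"; an unpopular
# choice with a high win rate is a candidate sleeper strategy (or a balance bug).
```

With real data you'd want far more than six matches before trusting either number, but the bookkeeping doesn't get any more complicated than this.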

so it’s something that’s worth tracking. We see a single curve that combines lots of variables. In this case the thing to watch for is sudden spikes. then at the end of the day this is one of your most important considerations. one really useful area is in measuring beginning asymmetries. regardless of balance issues… or maybe you wouldn’t be that surprised. personally. and so players aren’t experimenting with it right away (which can be really dangerous if you’re relying on playtesters to actually. that gives you some information there. the . and depending on where you play the first-move advantage in Go is 6. this is a pretty standard curve: big release-day sales that fall off over time on an exponentially decreasing curve. Or it might mean it is too complicated to use. for example. Collect a bunch of data on seating arrangements versus end results. in a high fantasy game. again that can mean it’s underpowered or overcosted. such as an MMO. It may also mean that this one thing is just a lot more compelling to your target audience for whatever reason – for example. For some people it’s the most important consideration: they’d rather have a game that makes lots of money but isn’t fun or interesting at all. on the order of a few hundredths of a percent. someone else is going to make the call for you some day. and what caused them. Popularity can be a sign in some games that a certain play style is really fun compared to the others. than a game that’s brilliant and innovative and fun and wonderful but is a “sleeper hit” which is just a nice way of saying it bombed in the market but didn’t deserve to. and we only get the feedback after the game is released. because I guess the idea is that this curve looks like it has a tail on the right-hand side. I think statisticians have calculated the homefield advantage in American Football to be about 2. At any rate. playtest. With traditional games sold online or through retail. because they don’t usually happen on their own. 
Other game designers would rather make the game fun first. it has a high learning curve relative to the rest of the game. that means sales metrics for traditional sales models aren’t all that useful to game designers. and so on. If it’s one game in a series it’s more useful because we can see how the sales changed from game to game and what game mechanics changed. If a game object sees less use than expected. normally we could discard that as random variation.5 or 7. It might also mean that it’s just not very fun to use. the half point is used to prevent tie games). If instead your game is online. This happens a lot with professional games and sports. so one thing for each of you to consider is. viral spread. which side of the fence you’re on… because if you don’t know that about yourself. Statistics from Settlers of Catan tournaments have shown a very slight advantage to playing second in a four-player game. even if it’s effective. that can certainly signal a potential game balance issue. Those sales tell you something related to how good a job you did with the game design.If a particular game object sees a lot more use than you expected. but the sheer number of games that have been played gives the numbers some weight. so if the game took a major step in a new direction and that drastically increased or reduced sales. a common one being the first-player advantage (or disadvantage). in whole or part. money is something that just about every commercial game should care about in some capacity. along with a ton of other factors like market conditions. until they get to the point where the sales are small enough that it’s not worth it to sell anymore.5 points (in this latter case. Unfortunately. One last example: measuring money If you’re actually trying to make money by selling your game. when those are. marketing success. With online games you don’t have to worry about inventory or shelf space so you can hold onto it a bit longer. 
if they leave some of your things alone and don’t play with them). you know. which is where this whole “long tail” thing came from. Metrics have other applications besides game objects. and you can sometimes migrate that into other characters or classes or cards or what have you in order to make the game overall more fun.3 points. you might be surprised to find more players creating Elves than Humans. For example. or a game in a Flash portal or on Facebook.
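Returning to the seating-arrangement suggestion above for a moment: the simplest version is to tally which seat each game's winner sat in and compare against an even share. A sketch with invented results:

```python
from collections import Counter

# Hypothetical 4-player tournament results: the seat number
# (1 = first player) of each game's winner.
winning_seats = [2, 1, 2, 3, 2, 4, 1, 2, 3, 2]

games = len(winning_seats)
share = {seat: count / games for seat, count in Counter(winning_seats).items()}
# A perfectly fair 4-player game would put every seat near 0.25. Here seat 2
# wins 50% of games; over only 10 games that's noise, but over tens of
# thousands of tournament games it would carry real weight.
```

This is the same logic behind the Catan tournament numbers: a tiny deviation from the fair share only becomes meaningful when the game count is enormous.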

If instead your game is online, such as an MMO, or a game in a Flash portal or on Facebook, the pattern can be a bit different: sales start slow (higher if you do some marketing up front), then if the game is good it ramps up over time as word-of-mouth spreads. Since you have regular incremental releases that each have an effect on sales, you're getting constant feedback on the effects that minor changes have on the money your game brings in. The wonderful thing about this kind of release schedule is that you can manage the sales curve in real-time: make a change to your game today, measure the difference in sales for the rest of the week, and keep modifying as you go. This is very powerful.

Of course, there are also a lot of variables to consider, so remember that your game doesn't operate in a vacuum: there are often other outside factors that will affect your sales. I bet that if there's a major natural disaster making international headlines, most Facebook games will see a temporary drop in usage, because people are busy watching the news instead. So if a game company made a minor game change the day before the Gulf oil spill and they noticed a sudden decrease in usage from that geographical area, the designers might mistakenly think their game change was a really bad one if they weren't paying attention to the real world. Ideally, you'd like to eliminate these factors, controlling for outside factors so you know what you're measuring.

One way of doing this, which works in some special cases, is to actually have two separate versions of your game that you roll out simultaneously to different players, and then you compare the two groups. One important thing about this is that you do need to select the players randomly (and not, say, giving one version to the earliest accounts created on your system and the other version to the most recent adopters). Of course, if you do this with gameplay, that's hard to do without some players getting angry about it, especially if one of the two groups ends up with an unbalanced design that can be exploited, or if the actual gameplay itself is different between the two groups. So it's better to do this with things that don't affect balance: banner ads, splash screens, informational popup dialog text, the color or appearance of the artwork in your game, and other things like that. If you must do it with gameplay, do it in a way that is honest and up front with the players. I could imagine assigning players randomly to a faction (like World of Warcraft's Alliance/Horde split, except randomly chosen when an account is created) and having the warring factions as part of the backstory of the game, so it would make sense that each faction would have some things that are a little bit different. I don't know of any game that's actually done this, but it would be interesting to see in action.

For games where players can either play for free or pay (this includes shareware, microtransactions, subscriptions, and most other kinds of payment models for online games), you can look at not just how many users you have, or how much money you're getting total, but also where that money is coming from on a per-user basis.

First, what counts as a "player"? If some players have multiple accounts (with or without your permission), or if old accounts stay around while dormant, the choice of whether to count these things will change your calculations. Typically, companies are interested in looking at revenue from unique, active users, because dormant accounts tend to not be spending money, and a single player with several accounts should really be thought of as one entity (even if they're spending money on each account).

Second, there's a difference between players who are playing for free and have absolutely no intention of paying for your game ever, versus players who spend regularly. Consider a game where you make a huge amount of money from a tiny minority of players, and where once players can be convinced to spend any money at all, they'll spend a lot. This suggests you have a great game that attracts and retains free players really well, but it also says that you have trouble with "conversion" (that is, convincing players to take that leap and spend their first dollar with you), perhaps suggesting that your payment process itself is driving away players. In this case, you'd want to think of ways to give players incentive to spend just a little bit. Now consider a different game, where most people that play spend something, but that something is a really small amount. That's a different problem, like you're hitting a spending ceiling somewhere, or at least that the game is giving your players less incentive to spend more. You might be getting the same total cash across your user base in both of these scenarios, but the solutions are different.
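The random-assignment requirement for the two-version rollout described above is often met with a deterministic hash of the account id, so a given player always sees the same version without the server storing anything extra. A sketch (the split, group names, and player ids are all invented):

```python
import hashlib

def assign_group(player_id: str) -> str:
    # Hash the id so assignment looks random but is stable across sessions.
    return "A" if hashlib.md5(player_id.encode()).digest()[0] % 2 == 0 else "B"

players = ["player%d" % i for i in range(1000)]
groups = {"A": set(), "B": set()}
for p in players:
    groups[assign_group(p)].add(p)

# With the groups in hand, compare a metric between them, e.g. conversion
# rate (the set of paying players here is hypothetical).
paid = {"player3", "player17", "player999"}
conversion = {g: len(paid & members) / len(members)
              for g, members in groups.items()}
```

Note this deliberately avoids account age, join date, or anything else correlated with player behavior; that is exactly the earliest-accounts-vs-recent-adopters mistake the text warns about.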

but it wasn’t my call. I wish we called them players rather than users. a lot of times you can get some really useful correlations between certain game mechanics and increased sales. So these three things. then spend a lot. you’ll want to factor these in if you’re trying to optimize your development resources. Of course. A word of warning (gosh. ARPPU will be really high. Since a business usually wants to maximize its profits (that is. A lot of investors get really burned when they mistake an S curve for an exponential increase. then spend incrementally smaller amounts until they hit zero? An increasing curve where players spend a little. Typically. and where you are in the “tail” of your sales curve. even if ARPU is the same for both games. Another interesting metric to look at is the graph of time-vs-money for the average user. that the trend will continue. total number of players is also a consideration. don’t forget to take your costs into account. keep in mind that your ratio of good-to-bad features will not be perfect.scenarios. getting people to try your game the first time). If your ARPU and ARPPU are both great but you’ve got a player base of a few thousand when you should have a few million. For example. then a bit more. so if your exponential growth is faster than human population growth it has to level off eventually. but they’re a lot trickier if you try to use them to predict the future. the difference between them is shown with two buzzwords. sales. as we saw (more or less) with the dot-com crash about 10 years ago. especially if it’s a really tight fit with an exponential function. the money it takes in minus the money it spends) and not its revenue (which is just the money it takes in). or retention (getting players to keep coming back for more). which tend to scale with the number of players. conversion (getting them to pay you money the first time). until a sudden flameout where they drop your game entirely? 
Regular small payments on a traditional “long tail” model? What does this tell you about the value you’re delivering to players in your early game à mid-game à late game à elder game progression? While you’re looking at revenue. but the solutions are different. in the second case. It’s tempting to assume. Business growth curves are usually not exponential. so you have to count some portion of the bad ideas as part of the cost in developing the good ones (this is a type of “sunk cost” like we discussed in Week 6 when we talked about situational balance). including both the “good” ones that increase revenue and also the “bad” ones that you try out and then discard. and then eventually levels off or starts decreasing. and ongoing costs. A Note on Ethics . ARPU (Average Revenue Per User) and ARPPU (Average Revenue Per Paying User). not just the average. but instead what is called “S-shaped” where it starts as an exponentially increasing curve and eventually transitions to a logarithmically (that is. Ongoing costs are things like bandwidth and server costs and customer support. with the exception that once they reach the peak of the “S” there’s usually a very sudden crash. There are two kinds of costs: up-front development. then that’s probably more of a marketing problem than a game design problem. It depends on what’s happening to your player base over time. But common sense tells us this can’t continue indefinitely: the human population is finite. can give you a lot of information about whether your problem is with acquisition (that is. in the first example with a minority of players paying a lot when most people play for free. a really hot game that just launched might have what initially looks like an exponentially-increasing curve. And when you overlap these with the changes you make in your game and the updates you offer. slowly) increasing curve. ARPU and ARPPU. ARPPU will be really low. The up-front costs are things like development of new features. 
then a bit more. How much do people give you on the day they start their account? What about the day after that. Illegal pyramid schemes also tend to go through this kind of growth curve. I seem to be giving a lot of warnings this week): statistics are great at analyzing the past. and the day after that? Do you see a large wad of cash up front and then nothing else? A decreasing curve where players try for free for awhile. At any rate.
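For concreteness, here is the ARPU/ARPPU distinction from above in a few lines (the revenue figures are invented):

```python
# Hypothetical 30-day revenue per unique active player; 0.0 = free player.
revenue = {"alice": 20.0, "bob": 0.0, "carol": 0.25, "dave": 0.0}

arpu = sum(revenue.values()) / len(revenue)       # averaged over everyone
paying = [amount for amount in revenue.values() if amount > 0]
arppu = sum(paying) / len(paying)                 # averaged over payers only
# arpu == 5.0625, arppu == 10.125: the gap between the two numbers is
# the conversion story; only half the players here ever paid anything.
```

Note that the denominator choice is the whole point: the same revenue data yields two very different-looking numbers depending on whether free players are counted.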

A Note on Ethics

This is the second time this Summer, when talking about game balance, that I've brought up an issue of professional ethics. It's weird how this comes up in discussions of applied mathematics, isn't it? Anyway…

The ethical consideration here is that a lot of these metrics look at player behavior, but they don't actually look at the value added to (or removed from) the players' lives. Some games, particularly those on Facebook which have evolved to make some of the most efficient use of metrics of any games ever made, have also been accused (by some people) of being blatantly manipulative: exploiting known flaws in human psychology to keep their players playing (and giving money) against their will.

To be fair, all games involve some kind of psychological manipulation, just like movies and books and all other media (there's that whole thing about suspending our disbelief, for example). And most people don't really have a problem with this, because we think of games as something inherently voluntary, so the idea of a game "holding us prisoner" seems strange. For most players, they still see the game experience itself as a net value-add to their life. This is why we have things like MMOs that enhance the lives of millions of subscribers, by letting them live more in the hours they spend playing than they would have lived had they done other activities… while, on the other hand, also causing horrendous bad events in the lives of a small minority that lose their marriage and family to their game obsession, or that play for so long without attending to basic bodily needs that they keel over and die at the keyboard. But just like difficulty curves, the difference between value added and taken away is not constant; it's different from person to person.

If it seems silly to you that I'd say a game "makes" you spend money, consider this: suppose I found all of your saved games and put them in one place. Maybe some of these are on console memory cards or hard disks. Maybe some of them are on your PC hard drive. For online games, your "saved game" is on some company's server somewhere. And then suppose I threatened to destroy all of them… but not to worry, I'd replace the hardware, so you get free replacements of your hard drive and console memory cards, a fresh account on every online game you subscribe to, and so on. And then suppose I asked you: how much would you pay me to not do that? I bet when you think about it, the answer is more than zero, and the reason is that those saved games have value to you! More to the point, any game you've played for an extended period of time is a game you are emotionally invested in, and that emotional investment does have cash value. Now, this sounds silly when taken to the extreme, but if one of these games threatened to delete all your saves unless you bought some extra downloadable content, you would at least consider it… not because you wanted to gain the content, but because you wanted to not lose your save.

So there is a question of how far we can push our players to give us money, or just to play our game at all, before we cross an ethical line… especially in the case where our game design is being driven primarily by money-based metrics. As before, I invite you to think about where you stand on this, because if you don't know, the decision will be made for you by someone else who does.

If You're Working on a Game Now…

If you're working on a game now, my suggestion is to ask yourself what game design questions could be best answered through metrics:

• What aspects of your design (especially relating to game balance) do you not know the answers to, at this point in time? Make a list.
• Of those open questions, which ones could be solved through playtesting, taking metrics, and analyzing them?
• Choose one question from the remaining list that is, in your opinion, the most vital to your gameplay.
• If you're doing a video game, make sure the game has some way of logging the information. Figure out what metrics you want to use, and how you will use statistics to draw conclusions. What are the different things you might see? What would they mean? Make sure you know how you'll interpret the data in advance.

Generate a list: • What game balance questions would you want answers to. http://www. where several questions use some of the same metrics. . or RTS. suggesting when to use metrics and when to use your actual game design skills.com/Metrics_Fetishism Game designer Chris Hecker gave a wonderful GDC talk this year called “Achievements Considered Harmful” which talks about a different kind of metric – the Achievements we use to measure and reward player performance within a game – and why this might or might not be such a good idea. or else existing play data from the initial release.” what would a “yes” or “no” answer look like once you analyzed the data? Additional Resources Here are a few links. tabletop RPG. Euro board game. Think of it as a “version 2. he talks about what he calls “Metrics Fetishism. or whatever. run some playtests and start measuring! Homework This is going to be mostly a thought experiment. and it’s just a matter of asking for the data and then analyzing it. In the second article.html Game designer Dan Cook writes on the many benefits of metrics when developing a Flash game.gamasutra.) • What analysis would you perform on your metrics to get the answers to each question? That is. Now choose what you consider to be an archetypal example of such a game. http://www.lostgarden. because I couldn’t think of any way to force you to actually collect metrics on a game that isn’t yours.” basically going into the dangers of relying too much on metrics and not enough on common sense.com/view/news/29916/GDC_Europe_Playfishs_Valadares_on_Intuition_Ver sus_Metrics_Make_Your_Own_Decisions.com/features/20070124/sigman_01. You might have some areas where you already suspect.you want. CCG. Choose your favorite genre of game. that is. 
your intention was to keep the core mechanics basically the same but just to possibly make some minor changes for the purpose of game balance.php This is a Gamasutra article quoting Playfish studio director Jeferson Valadares at GDC Europe. If it’s a board game. Come up with a metrics plan.com/Achievements_Considered_Harmful%3F and http://chrishecker.0” of the original. medians and standard deviations.gamasutra. one that you’re familiar with and preferably that you own. what would you do to the data (such as taking means. from your designer’s instinct. Pretend that you were given the rights to do a remake of this game (not a sequel). Assume that you have a ready supply of playtesters. that the game is unbalanced… but let’s assume you want to actually prove it. Maybe an FPS. Much of what I wrote was influenced by these: http://chrishecker. more than practical experience.com/2009/08/flash-love-letter-2009-part-2. this time giving a basic primer on statistics rather than probability. or looking for correlations)? If your questions are “yes” or “no. http://www.shtml Written by the same guy who did the “Orc Nostril Hair” probability article. in case you didn’t get enough reading this week. that could be answered with statistical analysis? • What metrics would you use for each question? (It’s okay if there is some overlap here.
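The analysis step above (means, medians, standard deviations) is only a few lines in any scripting language once your game logs its numbers somewhere. A minimal sketch in Python; the playtest numbers here are invented purely for illustration:

```python
import statistics

# Hypothetical metric pulled from a playtest log: how much gold each
# player held at the end of ten test matches. (Invented numbers.)
end_game_gold = [120, 95, 140, 80, 110, 130, 90, 105, 125, 100]

mean = statistics.mean(end_game_gold)      # central tendency
median = statistics.median(end_game_gold)  # robust to outliers
spread = statistics.stdev(end_game_gold)   # how much playtests vary

print(f"mean={mean}  median={median}  stdev={spread:.1f}")
```

From there, a question like "does the first player end up with more gold than the second?" becomes a comparison of two such summaries rather than a gut feeling.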

Level 9: Intransitive Mechanics

This Week

Welcome back! Today we're going to learn about how to balance intransitive mechanics. As a reminder, "intransitive" is just a geeky way of saying "games like Rock-Paper-Scissors" – that is, games where there is no single dominant strategy, because everything can be beaten by something else.

We see intransitive mechanics in games all the time. In fighting games, a typical pattern is that normal attacks are defeated by blocks, blocks are defeated by throws, and throws are defeated by attacks. In real-time strategy games, a typical pattern is that you have fliers that can destroy infantry, infantry that works well against archers, and archers that are great at bringing down fliers. Turn-based strategy games often have some units that work well against others, an example pattern being that heavy tanks lose to anti-tank infantry, which loses to normal infantry, which lose to heavy tanks. First-person shooters sometimes have an intransitive relationship between different weapons or vehicles, like rocket launchers being good against tanks (since they're slow and easy to hit), which are good against light vehicles (which are destroyed by the tank's fast rate of fire once they get in range), which in turn are good against rocket launchers (since they can dodge and weave around the slow incoming rockets). MMOs and tabletop RPGs often have some character classes that are particularly good at fighting against other classes. Within the metagame of a CCG you often have three or four dominant decks, each one designed to beat one or more of the other ones. Sometimes it's purely mathematical: in Magic: the Gathering, a 1/3 creature will lose in combat to a 3/2 creature, which loses to a 2/1 First Strike creature, which in turn loses to the original 1/3 creature. Some of these relationships might not be immediately obvious. For example, consider a game where one kind of unit has long-range attacks, which is defeated by a short-range attacker who can turn invisible; this in turn is defeated by a medium-range attacker with radar that reveals invisible units, and the medium-range attacker is of course weak against the long-range attacker. These kinds of things aren't even necessarily designed with the intention of being intransitive, but that is what ends up happening. (Or, under certain conditions, you can design a game to be like that intentionally.) So you can see that intransitive mechanics are in all kinds of places.

As a game designer, you might desire certain game objects to be used more or less frequently than others, or you might want certain things to only happen rarely during normal play but be spectacular when they do. By designing your game specifically to have one or more optimal strategies of your choosing, you will know ahead of time how the game is likely to develop; and if you understand how your costs affect relative frequencies, then by changing the relative costs and availability of each object you can change the optimal mix of objects that players will use in play. For example, if it seems like in playtesting, your players are using one thing a lot more than another, this kind of analysis may be able to shed light on why that is. Solving these systems lets us learn more about how they work within our game and what we can expect from player behavior at the expert level.

What does a "solution" look like here? It can't be a cost curve. Instead it's a ratio of how often you choose each available option, and how often you expect your opponent to choose each of their options. For example, building an army of 30% archers, 50% infantry, 20% fliers (or 3:5:2) might be a solution to an intransitive game featuring those units.

Solutions to intransitive mechanics

Today we're going to get our hands pretty dirty with some of the mathiest math we've done so far, borrowing when needed from the tools of algebra, linear algebra, and game theory. In the process we'll learn how to solve intransitive mechanics.

Who Cares?

It may be worth asking: if all intransitive mechanics are just glorified versions of Rock-Paper-Scissors, what's the appeal? Few people play Rock-Paper-Scissors for fun, so why should they enjoy a game that just uses the same mechanics and dresses them differently?

For one thing, an intransitive game is at least more interesting than one with a single dominant strategy ("Rock-Rock-Rock"), because you will see more variety in play. If all strategies have an intransitive relationship, you can at least know that there will not be a single dominant strategy that invalidates all of the others, because each choice wins sometimes and loses sometimes, and because any one strategy will be weak against at least one other counter-strategy. Even if you don't know exactly what the best strategy in your game is, and even if the game itself is unbalanced, intransitive mechanics serve as a kind of "emergency brake" on runaway dominant strategies. In this way, intransitive mechanics allow for a metagame correction – not an ideal thing to rely on exclusively (such a thing would be very lazy design), but better to have a safety net than not if you're releasing a game where major game balance changes can't be easily made after the fact.

Additionally, an intransitive mechanic embedded in a larger game may still allow players to change or modify their strategies in mid-game, particularly in action-based games where you must react to your opponent's reaction to your reaction to their action in the space of a few milliseconds. Players may make certain choices in light of what they observe other players doing now (in real-time). In games with bluffing mechanics, players may make choices based on what they've observed other players doing in the past, trying to use that to infer their future moves – which is particularly interesting in games of partial but incomplete information (like Poker).

So, hopefully you can see that just because a game has an intransitive mechanic does not mean it's as dull as Rock-Paper-Scissors. And if I've managed to convince you that intransitive mechanics are worth including for at least some kinds of games, get ready and let's learn how to solve them!

Solving the basic RPS game

Let's start by solving the basic game of Rock-Paper-Scissors to see how this works. First, let's look at the outcomes. Let's call a win +1 point, a loss -1 point, and a draw 0 points. The math here would actually work for any point values really, but these numbers make it easiest, since winning and losing are equal and opposite (that is, one win + one loss balances out) and draws are right in the middle. Now let's re-frame this a little bit: call our opponent's throws r, p and s – the probabilities that the opponent will make each respective throw – and our throws R, P and S (we get the capital letters because we're awesome). We now construct a table of results:

        r    p    s
R       0   -1   +1
P      +1    0   -1
S      -1   +1    0

Of course, this is from our perspective – for example, if we throw (R)ock and the opponent throws (s)cissors, we win, for a net +1 to our score. Our opponent's table would be the reverse.

Since each throw is theoretically as good as any other, we would expect the ratio to be 1:1:1, meaning you choose each throw equally often. And that is what we'll find, but it's important to understand how to get there so that we can solve more complex problems. For example, suppose you know ahead of time that your opponent is using a strategy of r=0.5, p=s=0.25 (that is, they throw 2 rock for every paper or scissors). What's the best counter-strategy? To answer that question, we can construct a set of three equations that tells you your payoffs for each throw.
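The table above can be turned into a tiny payoff calculator: weight each row by the opponent's throw probabilities and sum. A sketch (this helper is mine, not part of the text):

```python
# Rows: our throw; columns: opponent's throw, in (r, p, s) order.
PAYOFF = {
    "R": [0, -1, +1],
    "P": [+1, 0, -1],
    "S": [-1, +1, 0],
}

def expected_payoffs(r, p, s):
    """Expected score of each of our throws against an opponent mix (r, p, s)."""
    mix = (r, p, s)
    return {throw: sum(value * prob for value, prob in zip(row, mix))
            for throw, row in PAYOFF.items()}

# The rock-heavy opponent described above: r=0.5, p=s=0.25.
print(expected_payoffs(0.5, 0.25, 0.25))
```

Against that mix, Paper shows the highest expected payoff – which is exactly what the derivation that follows arrives at.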

The payoffs for each throw are:
• Payoff for R = 0r + (-1)p + 1s = s-p
• Payoff for P = 1r + 0p + (-1)s = r-s
• Payoff for S = (-1)r + 1p + 0s = p-r

So based on the probabilities, you can calculate the payoffs. In the case of our rock-heavy opponent, the payoffs are R=0, P=0.25, S=-0.25. Since P has the best payoff of all three throws, our best counter-strategy is to throw Paper every time, and we expect that we will gain 0.25 per throw – that is, out of every four throws, we'll win one more game than we lose – assuming the opponent doesn't vary their strategy at all. In fact, we'll find that if our opponent merely throws rock the tiniest, slightest bit more often than the others, the net payoff for P will be better than the others, and our best strategy is still to throw Paper 100% of the time, until our opponent modifies their strategy. This is significant: it tells us that an intransitive mechanic is very fragile, and that even a slight imbalance on the player's part can lead to a completely dominant strategy on the part of the opponent.

Of course, against a human opponent who notices we're always throwing P, their counter-strategy would be to throw a greater proportion of s, which then forces us to throw some R, which then causes them to throw p, which makes us throw S, which makes them throw r, and around and around we go. If we're both constantly adjusting our strategies to counter each other, do we ever reach any point where both of us are doing the best we can? Over time, do we tend towards a stable state of some kind?

Some Math Theorems

Before answering that question, there are a couple of things I'm going to ask you to trust me on. (People smarter than me have actually proved these mathematically, but this isn't a course in math proofs, so I'm handwaving over that part of things. I hope you'll forgive me for that.) First is that if the game mechanics are symmetric (that is, both players have exactly the same set of options and they work the same way), the solution will end up being the same for both players. Second is that each payoff must be the same as the other payoffs; that is, if any strategy is worth choosing at all, it will provide the same payoff as all other valid strategies. This is because if the payoff were instead any less than the others, it would no longer be worth choosing (you'd just take something else with a higher payoff), and if it were any higher than the others, you'd choose it exclusively and ignore the others – so all potential moves that are worth taking have the same payoff. Lastly, in symmetric zero-sum games specifically, the payoff for everything must be zero (because the payoffs are going to be the same for both players due to symmetry, and the only way for the payoffs to sum to zero and still be equal is if they're both zero).

To summarize:
• All payoffs that are worth taking at all give an equal payoff to each other.
• Symmetric games have the same solution for all players.
• Symmetric zero-sum games have all payoffs equal to zero.

Rock-Paper-Scissors is a symmetric zero-sum game, so R = P = S = 0, and the opponent's probability of choosing Rock is the same as our probability (and likewise for Paper and Scissors).

Finishing the RPS Solution

Let's go back to our equations. We now know that R = P = S = 0. Since the opponent must select exactly one throw, we also know the probabilities of their throws add up to 100%:

• r+p+s=1

From here we can solve the system of equations by substitution:
• R = 0 = s-p, therefore p=s
• P = 0 = r-s, therefore r=s
• S = 0 = p-r, therefore p=r
• r+p+s = r+r+r = 1, therefore r=1/3
• Since r=p=s: p=1/3, s=1/3

So our solution is that the opponent should throw r, p and s each with probabilities of 1/3. Of course, the opponent knows this too, so if we choose an unbalanced strategy, they can alter their throw ratio to beat us; our best strategy is also to choose each throw with 1/3 probability. This also suggests that against a completely random opponent, it doesn't matter what we choose: our odds of winning are the same no matter what.

Note that in actual play, this does not mean that the best strategy is to actually play randomly (say, by rolling a die secretly before each throw)! Our solution of 1:1:1 does not say which throw you must choose at any given time (that is in fact where the skill of the game comes in), but just that over time we expect the optimal strategy to be a 1:1:1 ratio (because any deviation from that hands your opponent a strategy that wins more often over you, until you readjust your strategy back to 1:1:1). Which throw you choose depends on your ability to detect and exploit patterns in your opponent's play, while at the same time masking any apparent patterns in your own play. As I've said before, when humans try to play randomly, they tend to not do a very good job of it, so in the real world the best strategy is still to play each throw about as often as any other.

Solving RPS with Unequal Scoring

The previous example is all fine and good for Rock-Paper-Scissors, but how can we apply this to something a little more interesting? As our next step, let's change the scoring mechanism. Suppose I make a new rule: every win using Rock counts double. How does that affect our probabilities?

This kind of thing comes up in real games. For example, in fighting games there's a common intransitive system where attacks beat throws, throws beat blocks, and blocks beat attacks, but each of these does a different amount of damage, so they tend to have different results in the sense that each choice puts a different amount at risk. You could just as easily frame our new rule like this: in a fighting game, attacks do normal damage, and blocks do the same amount of damage as an attack (let's say that a successful block allows for a counterattack), but throws do twice as much damage as an attack or block. But let's just say "every win with Rock counts double" for simplicity here.

Again we start with a payoff table:

        r    p    s
R       0   -1   +2
P      +1    0   -1
S      -2   +1    0

We then use this to construct our three payoff equations:
• R = 2s-p
• P = r-s
• S = p-2r

Again, the game is zero-sum and symmetric, and both us and our opponent must choose exactly one throw, so we still have R = P = S = 0 and r + p + s = 1, just as before.
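A system this small can be checked with exact rational arithmetic before grinding through the algebra. A sketch using Python's fractions module (the substitution order mirrors the hand solution; the code itself is mine):

```python
from fractions import Fraction

# Payoff equations with "every win using Rock counts double":
#   R = 2s - p,  P = r - s,  S = p - 2r,  all forced to zero,
# plus the constraint r + p + s = 1.
s = Fraction(1, 4)  # from r + p + s = s + 2s + s = 4s = 1
r = s               # from P = r - s = 0
p = 2 * s           # from R = 2s - p = 0

assert r + p + s == 1   # probabilities sum to one
assert p - 2 * r == 0   # the remaining equation S = p - 2r holds too

print(r, p, s)  # 1/4 1/2 1/4
```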

Again we solve by substitution:
• R = 0 = 2s-p, therefore 2s = p
• P = 0 = r-s, therefore r = s
• S = 0 = p-2r, therefore 2r = p
• r+p+s = r+2r+r = 1, therefore r=1/4
• r=s, therefore s=1/4
• 2r=p, therefore p=1/2

So here we get a surprising result: if we double the wins for Rock, it is actually Paper (and not Rock) that is played more frequently! This is an answer you'd be unlikely to come up with on your own without doing the math, but in retrospect it makes sense: since Scissors is such a risky play (losing to Rock now costs double), players are less likely to choose it; and if you know your opponent is not likely to play Scissors, Paper is more likely to either draw or win – that's just common sense. The end result is that Paper gets chosen half of the time, while Rock and Scissors each get chosen a quarter of the time. So if you had a fighting game where a successful throw does twice as much damage as a successful attack or a successful block, you'd actually expect to see twice as many attack attempts as throws or blocks!

Solving RPS with Incomplete Wins

Suppose we factor resource costs into this. Fighting games typically don't have a "cost" associated with performing a move (other than time, perhaps), but RTS games usually have actual resource costs to produce units. Let's take a simple RTS game where you have knights that beat archers, archers beat fliers, and fliers beat knights. Let's say that knights cost 50 gold, archers cost 75, and fliers cost 100. Let's say further that if you send one type of unit against the same type, they kill each other mutually, so there is no net gain or loss on either side – but that it's a little different with winners. Let's say that when knights attack archers, they win, but they still lose 20% of their health to the initial arrow volley before they close the ranks. And let's say against fliers, archers win, but lose 40% of their health to counterattacks. Finally, against knights, fliers take no damage at all, because the knights can't do anything other than stand there and take it (their swords don't work too well against enemies a hundred feet above them, dropping rocks down on them from above).

Now how does this work? We start with the payoff table, with entries in gold from our perspective:

        k                     a                     f
K       50-50 = 0             (-50*0.2)+75 = +65    -50
A       -75+(0.2*50) = -65    75-75 = 0             (-75*0.4)+100 = +70
F       +50                   -100+(75*0.4) = -70   100-100 = 0

To explain: if we both take the same unit, it ends up being zero. It looks like nothing happened, but really what's going on is that we're both paying the same amount and both lose the unit, so we both actually have a net loss – but relative to each other it's still zero-sum. For example, with Knight vs Knight, we gain +50 gold relative to the opponent by defeating their Knight, but also lose -50 gold because our own Knight dies as well, and adding those results together we end up with a net gain of zero.

What about when our Knight meets an enemy Archer? We kill their Archer, which is worth a 75-gold advantage, but they also reduced our Knight's HP by 20%, so you could say we lost 20% of our Knight cost of 50, which means we lost an equivalent of 10 gold in the process. So the actual outcome is we're up by 65 gold.

When our Knight meets an enemy Flier, we lose the Knight, so we're down 50 gold. Where does the Flier cost of 100 come in? In this case it doesn't, really – the opponent still has a Flier after the exchange, so they still have 100 gold worth of Flier in play. It didn't hurt the opponent at all; they've lost nothing… at least, not yet!

So in the case of different costs or incomplete victories, the process is the same; the hard part is just altering your payoff table. From there:
• K = 0k + 65a + (-50)f = 65a-50f
• A = (-65)k + 0a + 70f = 70f-65k
• F = 50k + (-70)a + 0f = 50k-70a
• K=A=F=0
• k+a+f = 1
• K = 0 = 65a-50f, therefore 65a = 50f
• A = 0 = 70f-65k, therefore 70f = 65k, therefore f = (13/14)k
• F = 0 = 50k-70a, therefore 50k = 70a, therefore a = (10/14)k
• k+a+f = k + (10/14)k + (13/14)k = (37/14)k = 1, therefore k = 14/37
• f = (13/14)k = (13/14)(14/37), therefore f = 13/37
• a = (10/14)k = (10/14)(14/37), therefore a = 10/37

Solving, we find that in this case you'd actually see a pretty even mix of units, with knights being a little more common and archers a little less. If you wanted fliers to be more rare, you could play around with their costs, or allow knights to do a little bit of damage to them, or something.

Solving RPS with Asymmetric Scoring

So far we've assumed a game that's symmetric: we both have the exact same set of throws, and we both win or lose the same amount according to the same set of rules. But not all intransitive games are perfectly symmetric. For example, suppose I made a Rock-Paper-Scissors variant where each round, I flip up a new card that alters the win rewards. This round, my card says that my opponent gets two points for a win with Rock, but I don't (I would just score normally). How does this change things? It actually complicates the situation a great deal, because now both players must figure out the probabilities of their opponents' throws, and those probabilities may not be the same anymore!

Let's say that Player A has the double-Rock-win bonus, and Player B does not. What's the optimal strategy for both players? And how much of an advantage does this give to Player A, if any? Let's find out by constructing two payoff tables.

Player A's payoff table looks like this:

        rB   pB   sB
RA       0   -1   +2
PA      +1    0   -1
SA      -1   +1    0

Player B's payoff table looks like this:

        rA   pA   sA
RB       0   -1   +1
PB      +1    0   -1
SB      -2   +1    0
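Before working through the asymmetric tables, the knights/archers/fliers result from a moment ago can be double-checked numerically: the derived mix should give every unit a zero expected payoff. A quick sketch with exact fractions (my own check, not part of the text):

```python
from fractions import Fraction

# Gold-value payoff table from the knights/archers/fliers example:
# rows are our unit (K, A, F), columns the enemy unit (k, a, f).
PAYOFF = [[0, 65, -50],
          [-65, 0, 70],
          [50, -70, 0]]

# The mix derived above: k = 14/37, a = (10/14)k, f = (13/14)k.
k = Fraction(14, 37)
a = Fraction(10, 14) * k
f = Fraction(13, 14) * k
mix = [k, a, f]

assert sum(mix) == 1
for row in PAYOFF:  # every unit's expected payoff against this mix is zero
    assert sum(value * prob for value, prob in zip(row, mix)) == 0

print(mix)  # [Fraction(14, 37), Fraction(10, 37), Fraction(13, 37)]
```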

Here we can assume that RA=PA=SA and RB=PB=SB, and also that rA+pA+sA = rB+pB+sB = 1. However, we cannot assume that RA=PA=SA=RB=PB=SB=0, because we don't actually know that the payoffs for players A and B are equal; in fact, intuition tells us they probably aren't! We now have this intimidating set of equations:
• RA = 2sB – pB
• PA = rB – sB
• SA = pB – rB
• RB = sA – pA
• PB = rA – sA
• SB = pA – 2rA
• RA = PA = SA
• RB = PB = SB
• rA + pA + sA = 1
• rB + pB + sB = 1

We could do this the hard way through substitution, but an easier way is to use matrices. Here's how it works: we rewrite the payoff tables as matrices, omitting the variable names but keeping them all lined up in the same order, so that each column represents a different variable. Here's the first one, which represents the net payoff for Player A: the first column is rB, the second is pB, and the third is sB. Two changes for clarity: first, since RA=PA=SA, let's just replace them all with a single variable X; and second, let's write the payoffs as a column on the right:

[  0  -1  +2 | X ]
[ +1   0  -1 | X ]
[ -1  +1   0 | X ]

This is just a shorthand way of writing down these three equations:

0rB - 1pB + 2sB = X
1rB + 0pB - 1sB = X
-1rB + 1pB + 0sB = X

Algebra tells us we can multiply everything in an equation by a constant and it's still true (which means we could multiply any row of the matrix by any value and it's still valid, as long as we multiply all four entries in the row by the same amount). Algebra also tells us that we can add both sides of two equations together and the result is still true, meaning we could add each entry of two rows together and the resulting row is still a valid row (which we could use to add to the rows already there, or even replace an existing row with the new result). And we can also rearrange the rows, because all of them are still true no matter what order we put them in.

What we want to do here is put this matrix in what's called triangular form, which will make it easier to work with: everything under the diagonal is zeros, and the diagonals themselves (marked here with an asterisk) have to be nonzero:

[ *  ?  ? | ? ]
[ 0  *  ? | ? ]
[ 0  0  * | ? ]

Our matrix isn't in triangular form yet, so we reorganize. First we rearrange the rows into a more convenient order:

[ -1  +1   0 | X ]
[  0  -1  +2 | X ]
[ +1   0  -1 | X ]

To eliminate the +1 in the bottom row, we add the top and bottom rows together and replace the bottom row with that:

   +1   0  -1 | X
+  -1  +1   0 | X
=   0  +1  -1 | 2*X

Our matrix is now:

[ -1  +1   0 | X   ]
[  0  -1  +2 | X   ]
[  0  +1  -1 | 2*X ]

Now we want to eliminate the +1 on the bottom row, so we add the middle and bottom rows together and replace the bottom row with the result:

[ -1  +1   0 | X   ]
[  0  -1  +2 | X   ]
[  0   0  +1 | 3*X ]

Now we can write these in the standard equation forms and solve, going from the bottom up, using substitution:
• +1(sB) = 3*X, therefore sB = 3*X
• -1(pB) + 2(sB) = X, therefore -1(pB) + 2(3*X) = X, therefore pB = 5*X
• -1(rB) + 1(pB) = X, therefore rB = 4*X

At this point we don't really need to know what X is. We don't yet know the ratio for Player A, but we do know that the ratio for Player B is 3 Scissors to 5 Paper to 4 Rock. Since rB+pB+sB = 1, this means:

rB = 4/12
pB = 5/12
sB = 3/12

We can use the same technique with the second set of equations to figure out the optimal ratio for Player A. Again, the payoff table is:

        rA   pA   sA
RB       0   -1   +1
PB      +1    0   -1
SB      -2   +1    0

This becomes the following matrix. Since RB=PB=SB, let's call these all a new variable Y (we don't use X, to avoid confusion with the previous X – remember that the payoff for one player may be different from the other's here):

[  0  -1  +1 | Y ]
[ +1   0  -1 | Y ]
[ -2  +1   0 | Y ]

Let's swap the bottom and top rows this time:

[ -2  +1   0 | Y ]
[ +1   0  -1 | Y ]
[  0  -1  +1 | Y ]

To eliminate the +1 in the center row, we have to multiply the center row by 2 before adding it to the top row (or we could multiply the top row by 1/2, but I find it easier to multiply by whole numbers than fractions):

   -2    +1     0   | Y
+  +1*2   0*2  -1*2 | Y*2
=   0    +1    -2   | Y*3

Our matrix is now:

[ -2  +1   0 | Y   ]
[  0  +1  -2 | Y*3 ]
[  0  -1  +1 | Y   ]

Adding the second and third rows to eliminate the -1 in the bottom row, we get:

[ -2  +1   0 | Y   ]
[  0  +1  -2 | Y*3 ]
[  0   0  -1 | Y*4 ]

Again working backwards and substituting:
• -1(sA) = Y*4, therefore sA = -Y*4
• pA – 2sA = Y*3, therefore pA = -Y*5
• -2rA + pA = Y, therefore -2rA = 6Y, therefore rA = -Y*3

Now, it might seem kind of strange that we get a bunch of negative numbers here when we got positive ones before. This is probably just a side effect of the fact that the average payoff for Player A is probably positive while Player B's is probably negative, but in either case it all factors out, because we just care about the relative ratio of Rock to Paper to Scissors. For Player A, this is 3 Rock to 5 Paper to 4 Scissors:

rA = 3/12    rB = 4/12
pA = 5/12    pB = 5/12
sA = 4/12    sB = 3/12

This is slightly different from Player B's optimal mix.

Now, we can use this to figure out the actual advantage for Player A. We know that RA=PA=SA and RB=PB=SB, and we already have a couple of equations from earlier that directly relate these payoffs to the probabilities:
• sA = -Y*4, therefore Y = -1/12
• rB = X*4, therefore X = +1/12

So this means the payoff for Player A is +1/12 and for Player B it's -1/12. This makes a lot of sense and acts as a sanity check: since this is still a zero-sum game, we know that the payoff for A must be equal to the negative payoff for B. In a symmetric game both would have to be zero, but this game is not symmetric.

How would we verify this? We could do it through actually making a 12×12 chart and doing all 144 combinations and counting them up using probability, or we could do a Monte Carlo simulation, or we could just plug these values into our existing equations. For me, that last one is the easiest.
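"Plugging the values into our existing equations" can be done mechanically: sum payoff × probability over all nine combinations of throws. A sketch (my own check, not part of the text):

```python
from fractions import Fraction

# Player A's payoff table (rows RA, PA, SA; columns rB, pB, sB),
# with Player A's Rock wins counting double:
A_PAYOFF = [[0, -1, 2],
            [1, 0, -1],
            [-1, 1, 0]]

# The optimal mixes derived above, in (Rock, Paper, Scissors) order:
a_mix = [Fraction(3, 12), Fraction(5, 12), Fraction(4, 12)]
b_mix = [Fraction(4, 12), Fraction(5, 12), Fraction(3, 12)]

# Expected payoff to Player A over all nine throw combinations:
value = sum(a_mix[i] * b_mix[j] * A_PAYOFF[i][j]
            for i in range(3) for j in range(3))
print(value)  # 1/12
```

Since the game is zero-sum, Player B's expected payoff is just the negative of this.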

It turns out that if both players play optimally, the advantage is surprisingly small: only one extra win out of every 12 games!

Solving Extended RPS

So far all of the relationships we've analyzed have had only three choices. Can we use the same technique with more? Yes, it just means we do the same thing, but more of it. Let's analyze the game Rock-Paper-Scissors-Lizard-Spock. In this game, Rock beats Scissors and Lizard; Paper beats Rock and Spock; Scissors beats Paper and Lizard; Lizard beats Spock and Paper; and Spock beats Scissors and Rock. Our payoff table is (with 'k' for Spock since there's already an 's' for Scissors, and 'z' for Lizard so it doesn't look like the number one):

        r    p    s    z    k
R       0   -1   +1   +1   -1
P      +1    0   -1   -1   +1
S      -1   +1    0   +1   -1
Z      -1   +1   -1    0   +1
K      +1   -1   +1   -1    0

We also know r+p+s+z+k=1, and R=P=S=Z=K=0. We could solve this by hand as well, but there's another way to do this using Excel, which makes things slightly easier sometimes. First, you would enter the above matrix in a 5×5 grid of cells somewhere. You'd also need to add another 1×5 column of all 1s (or any non-zero number, really) to represent the variable X (the payoff) to the right of your 5×5 grid. Then, select a new 1×5 column that's blank (just click and drag), and enter this formula in the formula bar:

=MMULT(MINVERSE(A1:E5),F1:F5)

For the MINVERSE parameter, put the top-left and lower-right cells of your 5×5 grid (I use A1:E5, as if the grid is in the extreme top left corner of your worksheet). For the final parameter (I use F1:F5 here), give the 1×5 column of all 1s. Finally, and this is important, press Ctrl+Shift+Enter when you're done typing in the formula (not just Enter). This propagates the formula to all five cells that you've highlighted and treats them as a unified array, which is necessary.

One warning is that this method does not always work; in particular, if there are no solutions or infinite solutions, it will give you #NUM! as the result instead of an actual number. In fact, if you enter in the payoff table above, it will give you this error; by setting one of the entries to something very slightly different (say, changing one of the +1s to +0.999999), you will generate a unique solution that is only off by a tiny fraction, so round it to the nearest few decimal places for the "real" answer. Another warning is that anyone who actually knows a lot about math will wince when you do this, because it's kind of cheating and you're really not supposed to solve a matrix like that.

Excel gives us a solution of 0.2 for each of the five variables, meaning that it is equally likely that the opponent will choose any of the five throws.
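If Excel isn't handy (or you'd rather not lean on the slightly-perturbed-matrix trick), the same answer falls out of ordinary Gaussian elimination: require every throw's payoff to equal the first throw's payoff, and add the probabilities-sum-to-one constraint, which sidesteps the singular-matrix problem entirely. A sketch in Python with exact fractions (my own code, not part of the text's method):

```python
from fractions import Fraction

# Rock-Paper-Scissors-Lizard-Spock payoffs (rows R, P, S, Z, K).
M = [[0, -1, 1, 1, -1],
     [1, 0, -1, -1, 1],
     [-1, 1, 0, 1, -1],
     [-1, 1, -1, 0, 1],
     [1, -1, 1, -1, 0]]
n = len(M)

# Equations: (payoff of throw i) - (payoff of throw 0) = 0 for i = 1..4,
# plus r + p + s + z + k = 1. This system is non-singular even though
# M itself is not (which is why MINVERSE fails on the raw table).
A = [[Fraction(M[i][j] - M[0][j]) for j in range(n)] for i in range(1, n)]
A.append([Fraction(1)] * n)
b = [Fraction(0)] * (n - 1) + [Fraction(1)]

# Gauss-Jordan elimination with exact fractions.
for col in range(n):
    pivot = next(r for r in range(col, n) if A[r][col] != 0)
    A[col], A[pivot] = A[pivot], A[col]
    b[col], b[pivot] = b[pivot], b[col]
    for r in range(n):
        if r != col and A[r][col] != 0:
            factor = A[r][col] / A[col][col]
            A[r] = [x - factor * y for x, y in zip(A[r], A[col])]
            b[r] -= factor * b[col]

solution = [b[i] / A[i][i] for i in range(n)]
print(solution)  # five equal probabilities of 1/5 each
```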
We can then verify that yes, in fact, R=P=S=Z=K=0, so it doesn't matter which throw we choose: any will do just as well as any other if the opponent plays randomly with equal chances of each throw.

Solving Extended RPS with Unequal Relationships

Not all intransitive mechanics are equally balanced. In some cases, even without weighted costs, some throws are just better than other throws. For example, let's consider the unbalanced game of Rock-Paper-Scissors-Dynamite. The idea is that with this fourth throw, Dynamite beats Rock (by explosion), and Scissors beats Dynamite (by cutting the wick). People will argue about which should win in a contest between Paper and Dynamite, but for our purposes let's say Dynamite beats Paper. In theory this makes Dynamite and Scissors seem like really good choices, because they each beat two of the three other throws. It also makes Rock and Paper seem like poor choices, because they each lose to two of the other three throws. What does the actual math say? Our payoff table looks like this:

      R    P    S    D
r     0   +1   -1   +1
p    -1    0   +1   +1
s    +1   -1    0   -1
d    -1   -1   +1    0

Before we go any further, we run into a problem: if you look closely, you'll see that Dynamite is better than or equal to Paper in every situation. That is, against every possible opposing throw, the entry in the D column is greater than or equal to the corresponding entry in the P column (and likewise, the opponent is always at least as well off playing d as p: every entry in the d row is worse for us, or equal). Both Paper and Dynamite lose to Scissors, both beat Rock, but against each other Dynamite wins. In other words, there is no logical reason to ever take Paper, because whenever you'd think about it, you would take Dynamite instead! In game theory terms, we say that Paper is dominated by Dynamite.

If we tried to solve this matrix mathematically like we did earlier, we would end up with some very strange answers, and we'd quickly find it was unsolvable (or that the answers made no sense, like a probability for r, p, s or d that was less than zero or greater than one). The reason it wouldn't work is that at some point we would make the assumption that R=P=S=D, but in this case that isn't true – the payoff for Paper must be less than the payoff for Dynamite, so it is an invalid assumption. To fix this, before proceeding, we must systematically eliminate all choices that are dominated. In other words, remove Paper as a choice. The new payoff table becomes:

      R    S    D
r     0   -1   +1
s    +1    0   -1
d    -1   +1    0
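The elimination step just performed is easy to mechanize. Here's a sketch of my own (the helper is not from any standard library), assuming a symmetric zero-sum game so that removing a dominated column also removes the matching row:

```python
import numpy as np

def eliminate_dominated(M, names):
    """Iterated elimination of weakly dominated strategies in a
    symmetric zero-sum game.  M[i][j] is the payoff to the column
    player when the row player picks strategy i and the column player
    picks strategy j.  Because the game is symmetric, a dominated
    column implies the matching row is dominated too, so both go."""
    names = list(names)
    M = np.array(M, dtype=float)
    changed = True
    while changed:
        changed = False
        n = len(names)
        for i in range(n):
            for j in range(n):
                if i != j and np.all(M[:, j] >= M[:, i]) and np.any(M[:, j] > M[:, i]):
                    keep = [k for k in range(n) if k != i]
                    M = M[np.ix_(keep, keep)]   # drop row and column i
                    del names[i]
                    changed = True
                    break
            if changed:
                break
    return M, names

# Rock-Paper-Scissors-Dynamite: Paper is weakly dominated by Dynamite.
rpsd = [[ 0, +1, -1, +1],
        [-1,  0, +1, +1],
        [+1, -1,  0, -1],
        [-1, -1, +1,  0]]
M, left = eliminate_dominated(rpsd, ["R", "P", "S", "D"])
print(left)   # ['R', 'S', 'D'] – Paper never gets played
```

The loop restarts after every removal, which is exactly the "keep checking until nothing else falls out" procedure described below.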

We check again to see if, after the first set of eliminations, any other strategies are now dominated (sometimes a row or column isn't strictly dominated by another until you cross out some other dominated choices, so you do have to perform this procedure repeatedly until you eliminate everything). Again, to check for dominated strategies, you must compare every pair of rows to see if one dominates another, and then every pair of columns in the same way. Yes, this means a lot of comparisons if you give each player ten or twelve choices! In this case, eliminating Paper was all that was necessary, and in fact we're back to the same exact payoff table as the original Rock-Paper-Scissors, but with Paper "renamed" to Dynamite. And now you know, mathematically, why it never made sense to add Dynamite as a fourth throw.

Another Unequal Relationship

What if instead we created a new throw that wasn't weakly dominated, but that worked a little differently than normal? For example, something that was equivalent to Scissors except that it worked in reverse order, beating Rock but losing to Paper? Let's say… Construction Vehicle (C), which bulldozes (wins against) Rock, is given a citation by (loses against) Paper, and draws with Scissors because neither of the two can really interact much. Now our payoff table looks like this:

      R    P    S    C
r     0   +1   -1   +1
p    -1    0   +1   -1
s    +1   -1    0    0
c    -1   +1    0    0

Here, no single throw is strictly better than any other, so we start solving. We know r+p+s+c=1, and the payoffs R=P=S=C=0. Our matrix becomes:

[  0  +1  -1  +1 | 0 ]
[ -1   0  +1  -1 | 0 ]
[ +1  -1   0   0 | 0 ]
[ -1  +1   0   0 | 0 ]

Rearranging the rows to get non-zeros along the diagonal – here, simply by reversing the order from top to bottom – we get:

[ -1  +1   0   0 | 0 ]
[ +1  -1   0   0 | 0 ]
[ -1   0  +1  -1 | 0 ]
[  0  +1  -1  +1 | 0 ]

Zeroing out the first column by adding the first row to the second, and subtracting the third row from the first (storing the difference in the third row), we get:

[ -1  +1   0   0 | 0 ]
[  0   0   0   0 | 0 ]
[  0  +1  -1  +1 | 0 ]
[  0  +1  -1  +1 | 0 ]

Curious! The second row is all zeros (which gives us absolutely no useful information, as it's just telling us that zero equals zero), and the bottom two rows are exactly the same as one another (which means the last row is redundant and again tells us nothing extra). We are left with only two rows of useful information. In other words, we have two equations (three if you count r+p+s+c=1) and four unknowns. What this means is that there is actually more than one valid solution here, potentially an infinite number of solutions. We figure out the solutions by hand:

• r-p=0, therefore r=p
• -p+s-c=0, therefore c=s-p

Substituting into r+p+s+c=1, we get p+p+s+(s-p)=1, therefore p+2s=1, therefore p=1-2s (and therefore, r=1-2s). Substituting back into c=s-p, we get c=s-1+2s, therefore c=3s-1. We have thus managed to put all three other variables in terms of s:

• p=1-2s
• r=1-2s
• c=3s-1

So it would seem at first that there are in fact an infinite number of solutions: choose any value for s, and that will give you the corresponding values for p, r, and c.

But we can narrow down the ranges even further. How? By remembering that all of these variables are probabilities, meaning they must all be in the range of 0 (if they never happen) to 1 (if they always happen); probabilities can never be less than zero or greater than 1. This lets us limit the range of s. From the equation c=3s-1, we know that s must be at least 1/3 (otherwise c would be negative) and s can be at most 2/3 (otherwise c would be greater than 100%). Looking instead at p and r, we know s can range from 0 up to 1/2 (any higher and p and r would go negative). Combining the two ranges, s must be between 1/3 and 1/2.

At the lower boundary condition (s=1/3), we find that p=1/3, r=1/3 and c=0, which is a valid strategy. At the upper boundary (s=1/2), we find p=0, r=0 and c=1/2, so that is a valid solution as well. And we could also opt for any strategy in between, say s=2/5, which gives p=1/5, r=1/5 and c=1/5. For our purposes, we can say that any of these is as good as any other, although I'm sure professional game theorists could philosophically argue the case for certain values over others. Are any of these strategies "better" than the others, such that a single one would win more than the others? That unfortunately requires a bit more game theory than I wanted to get into today, but I can tell you the answer is "it depends" – based on certain assumptions about how rational your opponents are, whether the players are capable of making occasional mistakes when implementing their strategy, and how much the players know about how their opponents play, among other things.

This is interesting: it shows us that no matter what, Scissors is an indispensable part of all ideal strategies, being used somewhere between a third and half of the time. Overall, though, we could say that Construction Vehicle is probably not a good addition to the core game of Rock-Paper-Scissors, as it allows one winning strategy where the throw of C can be completely ignored, and another winning strategy where both P and R are ignored – making us wonder why we're wasting development resources on implementing two or three throws that may never even see play once the players are sufficiently skilled!

Solving the Game of Malkav

So far we've systematically done away with each of our basic assumptions: that a game has a symmetric payoff, that it's zero-sum, that there are exactly three choices. There's one other thing that we haven't covered in the two-player case, and that's what happens if the players have a different selection of choices – not just an asymmetric payoff, but an asymmetric game. If we rely on there being exactly as many throws for one player as the other, what happens when one player has, say, six different throws when their opponent has only five? It would seem such a problem would be unsolvable for a unique solution (there are six unknowns and only five equations, right?), but in fact it turns out we can use a more powerful technique to solve such a game uniquely.

Let us consider a card called "Game of Malkav" from an obscure CCG that most of you have probably never heard of. It works like this: all players secretly and simultaneously choose a number. The player who played this card chooses between 1 and 6, while all other players choose between 1 and 5. Each player gains as much life as the number they choose… unless another player chose a number exactly one less, in which case they lose that much life instead. So for example, if you choose 5, you gain 5 life… unless any other player chose 4, in which case you lose 5 life (and whoever chose 4 gains 4, unless someone else also chose 3). This can get pretty complicated with more players, so let's simply consider the two-player case.

Let's also make the simplifying assumption that the game is zero-sum: that you gaining 1 life is equivalent in value to your opponent losing 1 life. (I realize this is not necessarily valid – it depends on the deck you're playing, and it will vary based on relative life totals, among other things – but at least it's a starting point for understanding what this card is actually worth.)

We might wonder: what is the expected payoff of playing this card, overall? Does the additional option of playing 6 help, when your opponent can only play up to 5? What is the best strategy, and what is the expected end result?
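Before answering, it helps to see where the numbers in the payoff table come from. Under the zero-sum assumption, the whole table can be generated mechanically from the card's rules; a quick sketch (the function name is mine):

```python
def malkav_payoff(p_choice, o_choice):
    """Net life swing for the card's player P under the zero-sum
    assumption: each side gains its chosen number, unless the other
    side chose exactly one less, in which case it loses that much."""
    p_gain = -p_choice if o_choice == p_choice - 1 else p_choice
    o_gain = -o_choice if p_choice == o_choice - 1 else o_choice
    return p_gain - o_gain

# Rows: opponent's pick O1-O5; columns: the player's pick P1-P6.
table = [[malkav_payoff(p, o) for p in range(1, 7)] for o in range(1, 6)]
for o, row in enumerate(table, start=1):
    print(f"O{o}", row)   # row O1 prints as: O1 [0, -3, 2, 3, 4, 5]
```

Each entry is simply "P's gain minus O's gain," which is why the numbers grow as the chosen values grow.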

In short: is the card worth playing… and if so, how do you decide what to choose when you play it? As usual, we start with a payoff table. Let's call the choices P1-P6 (for the Player who played the card) and O1-O5 (for the Opponent):

      P1   P2   P3   P4   P5   P6
O1     0   -3   +2   +3   +4   +5
O2    +3    0   -5   +2   +3   +4
O3    -2   +5    0   -7   +2   +3
O4    -3   -2   +7    0   -9   +2
O5    -4   -3   -2   +9    0  -11

We could try to solve this, with 6 equations and 5 unknowns, but we will quickly find that the numbers get very hairy very fast… and also that it ends up being unsolvable, for reasons that you will find if you try. Usually when a matrix is unsolvable there is redundancy… except in this case, no rows cancel, and instead you end up with at least two equations that contradict each other. And yet there do not appear to be any dominated picks for either player. So there must actually be some dominated strategies here… it's just that they aren't immediately obvious, because there is a set of rows or columns that is collectively dominated by another set, which is much harder to find just by looking.

How do we find them? We start by finding the best move for each player, if they knew what the opponent was doing ahead of time, and then we continue by reacting to each reaction. For example, if the opponent knows we will throw P1, their best move is O5 (giving them a net +4 and us a net -4). If the player knows the opponent will choose O5, their best move is P4; any other choice ends up being strictly inferior (even if you expect your opponent to play O5, there is no reason you would prefer P5 – your best response is not P5, but rather P4). Continuing:

• Against P4, the best move is O3. Against O3, the best response is P2.
• Against P2, there are two equally good moves, O1 and O5, so we consider both options.
• Against O5, the best move is P4, as before (and we continue around in the intransitive sequence O5->P4->O3->P2->O5 indefinitely).
• Against O1, the best response is P6 (this is the one point where it is advantageous to play P6 – that is, where you are expecting a positive payoff). Against P6, the best response is O5, which again brings us into the intransitive sequence O5->P4->O3->P2->O1->P6->O5.

What if we start at a different place, say by initially throwing P3? Then the opponent's best counter is O2, and our best answer to that is P6, which then leads us into the O5->P4->O3->P2->O1->P6->O5 loop. If we start with P5, the best counter is O4, which gets the response P3, which we just covered. And what if we start with O1, O3, O5, P2, P4 or P6? All of them are already accounted for in earlier sequences, so there's nothing more to analyze.

Looking at these sequences, we see that no matter what we start with, eventually after repeated play only a small subset of moves actually end up being part of the intransitive nature of this game, because they form two intransitive loops (O5/P4/O3/P2, and O5/P4/O3/P2/O1/P6). Thus, the only choices ever used by either player are O1, O3, O5, P2, P4 and P6. Basically, by using this technique to find intransitive loops, you can often reduce a larger number of choices to a smaller set of viable ones… or at worst, you can prove that all of the larger set are in fact viable.

Occasionally you will find a game (Prisoner's Dilemma being a famous example, if you've heard of that) where there are one or more locations in the table that are equally advantageous for both players, so that after repeated play we expect all players to be drawn to those locations; game theorists call these Nash Equilibriums, after the mathematician who first wrote about them.

Knowing the two intransitive loops, and being aware that this game is not symmetric, we know that O1=O3=O5 and P2=P4=P6 – but we do not know if they are all equal to zero, or if one is the negative of the other. (Presumably P2 is positive and O1 is negative, since we would expect the person playing this card to have an advantage.) Since the only viable choices are O1, O3, O5, P2, P4 and P6, we can reduce the table to the set of meaningful values:

      P2   P4   P6
O1    -3   +3   +5
O3    +5   -7   +3
O5    -3   +9  -11

From there we solve, the same as the earlier problems. We construct a matrix, using X to stand for the payoff for P2, P4 and P6 (one row for each equation "the payoff for this throw equals X"):

[ -3  +5  -3 | X ]
[ +3  -7  +9 | X ]
[ +5  +3 -11 | X ]

This can be reduced to triangular form and then solved. Solving that matrix gets you the probabilities of O1, O3 and O5; but in order to learn the probabilities of choosing P2, P4 and P6, you have to flip the matrix across the diagonal so that the Os are all on the left and the Ps are on the top (this is called a transpose). In this case we'd also need to make all the numbers negative, since such a matrix is from Player O's perspective and therefore has the opposite payoffs:

[ +3  -3  -5 | Y ]
[ -5  +7  -3 | Y ]
[ +3  -9 +11 | Y ]

This, too, can be solved normally. Feel free to try it yourself! I give the answer below.

If you're curious, the final answers are roughly:

• P2:P4:P6 = 49% : 37% : 14%
• O1:O3:O5 = 35% : 41% : 24%
• Expected payoff to Player P (shown as "X" above): 0.31. The payoff for Player O (shown as "Y") is the negative of X, -0.31, since this is zero-sum.

In other words, in the two-player case of this game, when both players play optimally, the player who initiated the card gets ahead by an average of less than one-third of a life point. So while we confirm that playing the card (and having the extra option of choosing 6) is in fact an advantage, it turns out to be a pretty small one. Not that you need to care: on the other hand, the possibility of sudden large swings may make it worthwhile in actual play (or maybe not), depending on the deck you're playing. And of course, the game gets much more complicated in multi-player situations that we haven't considered here.

Solving Three-Player RPS

So far we've covered just about every possible case for a two-player game, and you can combine the different methods as needed for just about any application. But a lot of these games involve more than just a single head-to-head: they may involve teams, or free-for-all environments. Can we extend this kind of analysis to multiple players?
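Before moving on, here's a sketch that checks the Game of Malkav numbers just given, using numpy (the helper is mine): append the "probabilities sum to 1" constraint, treat the game value as an extra unknown, and let the solver do the triangular-form reduction.

```python
import numpy as np

# Reduced Game of Malkav: rows O1/O3/O5, columns P2/P4/P6,
# entries are payoffs to player P.
A = np.array([[-3.,  3.,   5.],
              [ 5., -7.,   3.],
              [-3.,  9., -11.]])

def optimal_mix(M):
    """Column player's mix x and game value v such that M @ x = v for
    every row (the row player is indifferent), with sum(x) = 1."""
    n = M.shape[1]
    top = np.hstack([M, -np.ones((M.shape[0], 1))])
    bottom = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
    sol = np.linalg.solve(np.vstack([top, bottom]),
                          np.array([0.] * M.shape[0] + [1.]))
    return sol[:n], sol[n]

p_mix, x = optimal_mix(A)        # P2, P4, P6 and the value X
o_mix, y = optimal_mix(-A.T)     # O1, O3, O5 and the value Y
print(np.round(p_mix, 2), round(x, 2))   # [0.49 0.37 0.14] 0.31
print(np.round(o_mix, 2), round(y, 2))   # [0.34 0.41 0.24] -0.31
```

The exact values are P2:P4:P6 = 43:32:12 (out of 87), O1:O3:O5 = 10:12:7 (out of 29), and X = 9/29 ≈ 0.31 – matching the rounded percentages above.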

Teams are straightforward if there are only two teams: just treat each team as a single "player" for analysis purposes. Free-for-all is a little harder, because you have to manage multiple opponents. Three-player games are obnoxious but still quite possible to solve, as we'll see; four-player games are probably the upper limit of what I'd ever attempt by hand using any of the methods I've mentioned today, and as we'll see, the complexity tends to explode with each successive player. If you have a six-player free-for-all intransitive game, where each player has a different set of options and a massive payoff matrix that gives payoffs to each player for each combination… well, it can be done, probably requiring the aid of a computer and a professional game theorist, but at this point you wouldn't want to. For a complicated system like that, you're better off playtesting… or more likely, simplifying your mechanics! One thing that game theorists have learned is that the more complex the game, the longer it tends to take human players in a lab to converge on the optimal strategies… which means that for a highly complex game, playtesting will give you a better idea of how the game actually plays "in the field" than doing the math to prove optimal solutions, because the players probably won't find the optimal solutions anyway.

Let's take a simple multi-player case: three-player Rock-Paper-Scissors. We define the rules like this: if all players make the same throw, or if all three players choose different throws, we call it a draw. If two players make the same throw and the third player chooses a different one ("odd man out"), then whoever throws the winning throw gets a point from each loser. So if two players throw Rock and the third throws Scissors, each of the Rock players gets +1 point and the unfortunate Scissors player loses 2 points. Or if it's reversed – one player throwing Rock while two throw Scissors – the one Rock player gets +2 points while the other two players lose 1 point each. (The idea behind these numbers is to keep the game zero-sum, for simplicity, but you could use this method to solve for any other scoring mechanism.)

We know because of symmetry that the answer to this basic game is 1:1:1, so let's throw in the same wrinkle as before: wins with Rock count double (which also means, for our purposes, that losses with Scissors count double). In the two-player case we found the solution of Rock=Scissors=1/4, Paper=1/2. Does this change at all in the three-player case, since there are now two opponents, which makes it even more dangerous to throw Scissors (and possibly even more profitable to throw Rock)?

The trick we need to use here to make this solvable is to look at the problem from a single player's perspective, and treat all the opponents collectively as a single opponent. We end up with a payoff table that looks like this:

       R    P    S
rr     0   +2   -4
rp    -1   +1    0
rs    +2    0   -2
pp    -2    0   +2
ps     0   -1   +1
ss    +4   -2    0

You might say: wait a minute – there are three variables here but six unknowns, two 'r' and two 'p' and two 's', one for each opponent, which means this isn't uniquely solvable. But the good news is that this game is symmetric, so the probability of any player throwing Rock, Paper or Scissors is the same as that of the other players, so we actually can solve it. (Note that I haven't mentioned which of the two opponents is which; it doesn't matter, because the game is symmetric.)

One thing to be careful of: there are actually nine possibilities for the opponents, not six, but some of them are duplicated. The actual table is like this:

       R    P    S
rr     0   +2   -4
rp    -1   +1    0
pr    -1   +1    0
rs    +2    0   -2
sr    +2    0   -2
pp    -2    0   +2
ps     0   -1   +1
sp     0   -1   +1
ss    +4   -2    0

All this means is that when using the original matrix and writing it out in longhand form, we have to remember to multiply rp, rs and ps by 2 each, since there are two ways to get each of them (rp and pr, for example). The probabilities of the two opponents' throws are taken together and multiplied (recall that we multiply probabilities when we need two independent things to happen at the same time).

This payoff table doesn't present so well in matrix form, since we're dealing with products of two variables rather than one. One way to handle it would be to actually split this into three mini-matrices, one for each of the first opponent's choices, comparing each of those to the second opponent's choice… then solving each matrix individually, and combining the three solutions into one at the end. That's a lot of work, so let's try to solve it algebraically instead, writing it out in longhand form and seeing if we can isolate anything by combining like terms:

• Payoff for R = -2rp+4rs-2pp+4ss = 0
• Payoff for P = 2rr+2rp-2ps-2ss = 0
• Payoff for S = -4rr-4rs+2pp+2ps = 0
• r+p+s=1 (as usual)

The "=0" at the end is because we know this game is symmetric and zero-sum, so each throw's expected payoff must be zero. Where do you start with something like this? A useful starting place is usually to use r+p+s=1 to eliminate one of the variables by putting it in terms of the others. Eliminating Rock (r=1-s-p) and substituting, then multiplying everything out and combining terms, we get:

• -2ps-2p+4s = 0
• -2p-4s+2 = 0
• -2pp-2ps+8p+4s-4 = 0

The middle equation above makes our lives much easier, as it's linear and we can solve for p in terms of s:

• p = 1-2s

Substituting that into the other two equations gives us the same result in both (which lets us know we're probably on the right track, since the equations don't contradict each other):

• 4ss+6s-2 = 0, or dividing through by 2: 2ss+3s-1 = 0

Here we do have to use the dreaded Quadratic Formula (you know: "minus b, plus or minus the square root of b squared minus 4ac, all divided by 2a"). We find s=(-3+/-sqrt(17))/4… that is, s is about 28.1%, or about -178%. Are both of these valid solutions? Clearly not: r, p and s are all probabilities, so they must lie within the bounds of 0 to 1, and the negative root strays outside those bounds. That leaves a single valid solution: s≈28.1%, and from p=1-2s we find p≈43.8% and r≈28.1% – roughly r:p:s = 0.28 : 0.44 : 0.28.

It turns out that having multiple players does have an effect on the "rock wins count double" problem, but it might not be the result we expected: with three players, the solution is actually closer to 1:1:1 than the 1:2:1 of the two-player game! Perhaps it's because the likelihood of drawing – with one player choosing Rock, one choosing Paper and one choosing Scissors – makes Scissors less risky than it would be in a two-player game: even if one opponent chooses Rock, the other might choose Paper and turn your double-loss into a draw.
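We can sanity-check that mix by brute force over all nine opponent combinations (a quick sketch; the dictionary layout is mine). Each throw's expected payoff should be zero when everyone plays the optimal mix, which works out exactly to r = s = (sqrt(17)-3)/4 ≈ 0.281 and p = 1-2s ≈ 0.438:

```python
# Three-player RPS where Rock wins (and Scissors losses) count double.
# Keys: the two opponents' throws (order collapsed); values: payoffs
# for our R, P, S respectively.
payoff = {
    ('r','r'): ( 0, +2, -4), ('r','p'): (-1, +1,  0), ('r','s'): (+2,  0, -2),
    ('p','p'): (-2,  0, +2), ('p','s'): ( 0, -1, +1), ('s','s'): (+4, -2,  0),
}

def expected_payoffs(mix):
    """Expected payoff of throwing R, P, S when both opponents
    independently use the mix {'r': .., 'p': .., 's': ..}."""
    totals = [0.0, 0.0, 0.0]
    for a in 'rps':
        for b in 'rps':
            row = payoff.get((a, b)) or payoff[(b, a)]
            for i in range(3):
                totals[i] += mix[a] * mix[b] * row[i]
    return totals

s = (17 ** 0.5 - 3) / 4
mix = {'r': s, 'p': 1 - 2 * s, 's': s}
print(expected_payoffs(mix))   # all three effectively zero
```

If you plug in any other ratio, at least one throw comes out with a positive expected payoff, meaning a pure counter-strategy would beat that mix.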

Summary

This week we looked at how to evaluate intransitive mechanics using math. It's probably the most complicated thing we've done, as it brings together the cost curves of transitive mechanics, probability, and statistics – which is why I'm doing it at the very end of the course, only after covering those! To solve these, you go through this process:

• Make a payoff table.
• Eliminate all dominated choices from both players (by comparing all combinations of rows and columns, and seeing if any pair contains one row or column that is strictly better than or equal to another). Keep doing that until all remaining choices are viable.
• Find all intransitive "loops" by finding the best opposing response to each player's initial choice.
• Calculate the payoffs of each choice for one of the players, setting the payoffs equal to the same variable X. In a symmetric game, X is zero, so just set all the payoffs to zero instead.
• Add one more equation: that the probabilities of all choices sum to 1.
• Using algebraic substitution, solve for as many variables as you can.
• If you can find a unique value for each choice that is between 0 and 1, those are the optimal probabilities with which you should choose each throw, assuming optimal play. This is your solution.
• For games with more than two players each making a simultaneous choice, choose one player's payoffs as your point of reference, and treat all other players as a single combined opponent. For asymmetric games, you'll need to do this individually for each player.

If you manage to learn the value of X, it tells you the expected gain (or loss) for that player. In a zero-sum game, X for one player will be the negative of the X for the other player. Summing all players' X values tells you if the game is zero-sum (X1+X2+…=0), positive-sum (>0) or negative-sum (<0).

The math gets much harder for each player you add over two: with two players all equations are strictly linear; with three players you have to solve quadratic equations; with four players there are cubic equations; with five players you see quartic equations; and so on.

I should also point out that the field of game theory is huge, and it covers a wide variety of games we haven't covered here. In particular, it's also possible to analyze games where players choose sequentially rather than simultaneously, and also games where players are able to negotiate ahead of time – making pleas or threats, coordinating their movements and so on (as might be found in positive-sum games, where two players can trade or otherwise cooperate to get ahead of their opponents). These are beyond the scope of this course, but if you're interested, I'll give a couple of references at the end.

If You're Working on a Game Now…

Think about your game and whether it features any intransitive mechanics. If you do have any intransitive mechanics in your game, find the one that is the most prominent, and analyze it as we did today. Of the choices you offer the player, are any of them dominant or dominated choices? What is the expected ratio of how frequently the player should choose each of the available options, assuming optimal play? Is it what you expected? Is it what you want?

If not, ask yourself if there are any opportunities or reasons to take some transitive mechanics and convert them to intransitive. For example, if you're working on an RPG, maybe instead of just having a sequence of weapons where each is strictly better than the previous, there's an opportunity at one point in the game to offer the player a choice of several weapons that are all equally good overall, but where each is better than another in different situations.

Homework

For practice, feel free to start by doing the math by hand to confirm all of the problems I've solved here today, to get the hang of it – using Excel, or any other means you have at your disposal. When you're comfortable, here's a game derived from a mini-game I once saw in one of the Suikoden series of RPGs (I forget which one).

The actual game there used 13 cards, but for simplicity I'm going to use a 5-card deck for this problem. Here are the rules:

• Players: 2.
• Setup: Each player takes five cards, numbered 1 through 5. A third stack of cards numbered 1 through 5 is shuffled and placed face-down as a draw pile.
• Progression of play: At the beginning of each round, a card from the draw pile is flipped face up; the round is worth a number of points equal to the face value of that card. Both players then choose one of their own cards and play simultaneously. Whoever played the higher card gets the points for that round; in case of a tie, no one gets the points. Both players set aside the cards they chose to play in that round; those cards may not be used again.
• Resolution: The game ends after all five rounds have been played, or after one player reaches 8 points. Whoever has the most points wins.

Essentially, the goal of this game is to guess what your opponent will play, and then play one higher than that (or, if you think your opponent is playing their 5, play your 1 on that).

If the opponent plays completely randomly (20% chance of playing each card), you come out far ahead by simply playing the number in your hand that matches the points each round is worth (so play your 3 if a 3 is flipped, play your 4 on a 4, and so on). You can demonstrate this in Excel by shuffling the opponent's hand so that they are playing randomly, and comparing that to the "point matching" strategy I've described here; you'll quickly find that point matching wins the vast majority of the time. (You could also compute the odds exhaustively if you wanted, as there are only 120 ways to rearrange 5 cards.)

Does that mean that "matching points" is the dominant strategy? Certainly not. If I know my opponent is playing this strategy, I can trounce them by playing one higher than matching on all cards, and playing my 1 card on the 5-point round: I'll lose 5 points, but I'll capture the other 10 points for the win. Does the "one higher" strategy dominate, then? No – if you play randomly, "matching points" beats you! And playing "two higher" will beat "one higher"… and "three higher" will beat "two higher", "four higher" will beat "three higher", and "matching points" beats "four higher" – an intransitive relationship, like rock-paper-scissors.

If you're not sure where to start, think of it this way: for any given play there are only five strategies: matching, one higher, two higher, three higher, or four higher. You may shift strategies from round to round, and each of those choices may help or hurt you depending on what your opponent does; but with no other information, for the first round at least, you only have five choices. Figure out the payoff table for following each strategy across all five cards. You would start with this payoff table (after all, since you only have five cards each):

            M   M+1  M+2  M+3  M+4
matching     0   +5   -3   -9  -13
match+1     -5    0   +7   +1   -3
match+2     +3   -7    0   +9   +5
match+3     +9   -1   -9    0  +11
match+4    +13   +3   -5  -11    0

It is easy to see that there is no single dominant strategy. Since each strategy is just as good as any other if choosing between those five, you might think that means you can do no worse than choosing one of those strategies at random… except that, as we saw, if you play randomly, "matching points" beats you! So it is probably true that the optimal strategy is not 1:1:1:1:1, but rather some other ratio. Figure out what it is.
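To double-check a table like this (or to build the 13-card version), you can brute-force it. A sketch of mine, with the "shift by k" strategies wrapping 5 back around to 1:

```python
def wrap(c):
    """Card values cycle 1-5: playing 'one higher' than a 5 means a 1."""
    return (c - 1) % 5 + 1

def strategy_payoff(my_shift, opp_shift):
    """Total score margin over the five rounds (point cards 1-5) when I
    always play 'flipped value + my_shift' and the opponent always
    plays 'flipped value + opp_shift'."""
    total = 0
    for value in range(1, 6):
        mine, theirs = wrap(value + my_shift), wrap(value + opp_shift)
        if mine > theirs:
            total += value
        elif mine < theirs:
            total -= value
    return total

# Rows: opponent's shift 0-4; columns: my shift 0-4.
table = [[strategy_payoff(me, opp) for me in range(5)] for opp in range(5)]
for row in table:
    print(row)   # first row: [0, 5, -3, -9, -13]
```

Because each strategy plays every card exactly once, the margin doesn't depend on the order in which the point cards are flipped, which is why a single pass over the five values suffices.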

References

Here are a pair of references that I found helpful when putting together today's presentation.

"Game Architecture and Design" (Rollings & Morris), Chapters 3 and 5. This is where I first heard of the idea of using systems of equations to solve intransitive games. I've tried to take things a little farther today than the authors did in this book, but of course that means the book is a bit simpler and probably more accessible than what I've done here. And there is, you know, the whole rest of the book dealing with all sorts of other topics.

"Game Theory: a Critical Text" (Heap & Varoufakis). I found this to be a useful and fairly accessible introduction to Game Theory. I'm still in the middle of reading it, so I can't give it a definite stamp of personal approval at this time – but neither can I say anything bad about it, so take a look and decide for yourself. My one warning would be that, in the interest of brevity, the authors tend to define acronyms and then use them liberally in the remainder of the text. This makes it difficult to skip ahead, as you're likely to skip over a few key definitions and then run into sentences which have more unrecognizable acronyms than actual words!

Level 10: Final Boss

This Week

Welcome to the final week of the season. This week I didn't know ahead of time what to do, so I intentionally left it as an unknown on the syllabus as a catch-all for anything interesting that might have come up over the summer. As it turns out, there are four main topics I wanted to cover today, making this the longest post of all of them, so if you have limited time I suggest bookmarking this and coming back later.

First I'd like to talk a bit about economic systems in games, and how to balance a system where the players are the ones in control of it through manual wealth generation and trading. Then, I'll talk about some common multiplayer game balance problems that just didn't fit anywhere else in the previous nine weeks. Third, I'll get a bit technical and share a few tips and tricks in Excel. Lastly, I'll return to last summer with this whole concept of "fun" and how this whole topic of game balance fits into the bigger picture of game design, because for all of the depth that we've gone into, game balance still feels like a pretty narrow topic sometimes.

Economic Systems

What is an economic system? First, we use the word "economy" a lot, even in everyday life, so I should define it to be clear. I'll use the word "economy" to describe any in-game resource. That's a pretty broad term, as you could take it to mean that the pieces in Chess are a "piece economy" – and I'd argue that yes, they could be thought of that way; it's just not a particularly interesting economy, because the resources can't really be created, destroyed or transferred in any meaningful way. Most economies that we think of as such – in anything from board games to tabletop RPGs to MMOs – have one or more of these mechanics:

• Resource generation, where players craft or receive resources over time
• Resource trading, where players can transfer resources among themselves, usually involving some kind of negotiation or haggling
• Limited zero-sum resources, so that one player generating a resource for themselves reduces the available pool of resources for everyone else
• Resource destruction, where players either burn resources for some use in the game, or convert one type of resource to another

Any of those elements individually might be missing and we'd still think of it as an economy.

The good news is that in-game economies have a lot of little "design knobs" for us designers to change to modify the game experience, so we have a lot of options; I'll be going over those options today. The challenge is that, like the creation of level design tools, creating an economic system in your game is a third-order design activity, which can make it pretty challenging. You're not just creating a system that your players experience. You're creating a system that influences player behavior, but then the players themselves are creating another social system within your economic system, and it is the combination of the two that the players actually experience.

Supply and Demand

First, a brief lesson from Economics 101 that we need to be aware of is the law of supply and demand, which some of you have probably heard of. Still, as it turns out, a lot of human behavior can be predicted by it. For example, in Settlers of Catan, players are regularly trading resources between them, but the relative prices of each resource are always fluctuating based on what each individual player needs at any given time (and usually, all the players need different things, with different levels of desperation and different ability to pay higher prices).

To understand how this works, we assume the simplest case possible: an economy with one resource, a really large population of people who produce the resource and want to sell it, and another really large population of people who consume the resource and want to buy it.
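The four economy mechanics listed above can be sketched as operations on a shared ledger. This is only an illustration – the class and method names here are my own invention, not from any particular game:

```python
# A minimal sketch of the four economy mechanics (all names invented):
# generation, limited zero-sum mining, trading, and destruction.

class Economy:
    def __init__(self, common_pool=20):
        self.common_pool = common_pool      # limited zero-sum supply
        self.holdings = {}                  # player -> amount held

    def generate(self, player, amount):
        """Resource generation: players receive resources over time."""
        self.holdings[player] = self.holdings.get(player, 0) + amount

    def mine(self, player, amount):
        """Limited zero-sum: one player's gain shrinks the shared pool."""
        taken = min(amount, self.common_pool)
        self.common_pool -= taken
        self.holdings[player] = self.holdings.get(player, 0) + taken

    def trade(self, giver, receiver, amount):
        """Resource trading: transfer between players."""
        assert self.holdings.get(giver, 0) >= amount
        self.holdings[giver] -= amount
        self.holdings[receiver] = self.holdings.get(receiver, 0) + amount

    def burn(self, player, amount):
        """Resource destruction: spend resources for some in-game effect."""
        assert self.holdings.get(player, 0) >= amount
        self.holdings[player] -= amount

eco = Economy()
eco.generate("alice", 5)
eco.mine("bob", 8)            # common pool drops from 20 to 12
eco.trade("alice", "bob", 2)
eco.burn("bob", 4)
print(eco.holdings, eco.common_pool)  # {'alice': 3, 'bob': 6} 12
```

Note that removing any one of these methods still leaves something recognizable as an economy, as described above.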

We'll also assume for our purposes that any single unit of the resource is identical to any other, so consumers don't have to worry about choosing between different "brands" or anything.

The sellers each have a minimum price at which they're willing to part with their goods, for whatever reason. Maybe some of them have lower production costs or lower costs of living than others, so they can accept less of a price and still stay in business. Maybe others have a more expensive storefront, or they're just greedy, so they demand a higher minimum price. So, we can draw a supply curve on a graph that says that for any given price (on the x-axis), a certain number or percentage of sellers are willing to sell at that price (on the y-axis). Basically, the only thing you need to know about supply curves is that as the price increases, the supply increases: if ten people would sell their good at $5, then at $5.01 you know that at least those ten sellers would still accept (if they sold at $5 then they would clearly sell at $5.01), and you might have some more sellers that finally break down and say, okay, for that extra penny we're in. Maybe at $1, only two sellers in the world can part with their goods at that price, but at $5 maybe there are ten sellers, at $20 you've got a thousand sellers, and eventually if you go up to $100 every single seller would be willing to sell at that price.

On the other side, the consumers all have a maximum price that they're willing (or able) to pay. And we can draw a demand curve on the same graph that shows, for any given price, how many people are willing to buy at that price. Unlike the supply curve, the demand curve is always decreasing: if ten people would buy a good at $5, then at $5.01 you might keep all ten people if you're lucky, or some of them might drop out and say that's too rich for their blood, but you certainly aren't going to find anyone who wouldn't buy at a lower price but would buy at a more expensive price.

Now, of course, in the real world these assumptions aren't always true. Some sellers might not be willing to sell at exorbitantly high prices because they'd consider that unethical, and they'd rather sell for less (or go out of business) than bleed their customers dry; and more teenagers would rather buy $50 shoes than $20 shoes, because price has social cred. But for our purposes, we can assume that most of the time in our games, supply curves will increase and demand curves will decrease as the price gets more expensive.

And here's the cool part: wherever the two curves cross will generally turn out to be the actual market price that the players all somehow collectively agree to. Even if the players don't know the curves, the market price will go there as if by magic. It won't happen instantly if the players have incomplete information, but it does happen pretty fast, because players who sell at below the current market price will start seeing other people selling at higher prices (because they have to) and say, hey, if they can sell for more than I should be able to also! And likewise, if a consumer pays a lot for something and then sees the guy sitting next to them who paid half what they did for the same resource, they're going to demand to pay a whole lot less next time.

This can be interesting in online games that have resource markets. If you play an online game where players can sell or trade in-game items for in-game money, see if either the developer or a fansite maintains a historical list of selling prices (not unlike a ticker symbol in the stock market). If so, you'll notice that the prices change over time slightly; you can see this with other games that have any kind of resource buying and selling. So you might wonder what the deal is: why do prices fluctuate? The answer is that the supply and demand are changing slightly over time. Supply changes as players craft items and put them on sale; these things aren't perfectly efficient, so you get unequal amounts of each item being produced and consumed over time. And demand also changes, because at any given time a different set of players is going to be online shopping for any given item.

At any rate, that points us to another interesting thing about economies: the fewer the players, the more we'll tend to see prices fluctuate, because the player population isn't infinite, and with fewer players a single player controls more and more of the production or consumption. This is why the prices you'll see for one Clay in the Catan games change a lot from game to game (or even within a single game), relative to the price of some piece of epic loot in World of Warcraft.
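To make the curve-crossing concrete, here is a small sketch with invented seller minimums and buyer maximums: supply counts the sellers whose minimum price is at or below a given price, demand counts the buyers whose maximum is at or above it, and the market price settles roughly where the two counts meet.

```python
# Sketch of the supply and demand curves above (all numbers invented).

seller_minimums = [1, 1, 5, 5, 5, 8, 12, 20]  # each seller's minimum price
buyer_maximums = [3, 4, 6, 7, 9, 10, 15, 25]  # each buyer's max willingness

def supply(price):
    """Sellers willing to sell: those whose minimum is at or below price."""
    return sum(1 for m in seller_minimums if m <= price)

def demand(price):
    """Buyers willing to buy: those whose maximum is at or above price."""
    return sum(1 for m in buyer_maximums if m >= price)

# Scan candidate prices; the market price is where the curves are closest.
market_price = min(range(1, 26), key=lambda p: abs(supply(p) - demand(p)))
print(market_price, supply(market_price), demand(market_price))  # 7 5 5
```

With this invented data, five sellers and five buyers meet at $7; changing either list shifts the crossing point, which is exactly the "knob" a designer turns.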

Now, this isn't really something you can control directly as the game designer, but at least you can predict it: you can expect to see more drastic price fluctuations with fewer players. It also means if you're designing a trading board game for 3 to 6 players, you might decide to add some extra rules at the lower player counts to account for that, if a stable market is important to the functioning of your game.

Multiple resources

Things get more interesting when we have multiple goods, because the demand curves can affect one another. For example, suppose you have two resources, but one can be substituted for another – maybe one gives you +50 health, and the other gives you +5 mana which you can use as a healing spell to get +50 health. So there's two different items with similar uses, and if one is really expensive and one is really cheap, you can just buy the cheap one. Even if the two aren't perfect substitutes, players may be willing to accept an imperfect substitute if the price is sufficiently lower than the market value for the thing they actually want, and the price difference between what people will pay for the good and what they'll pay for the substitute tells you how efficient that substitute is (that is, how "perfect" it substitutes for the original).

On the flip side, you can also have multiple goods where the demand for one increases demand for the other, because they work better if you buy them all as a set (this is sort of the opposite of substitutes). For example, in games where collecting a complete set of matching gear gives your character a stat bonus, or where you can turn in one of each resource for a bonus, a greater demand for one resource pulls up the demand for all of the others… and once a player has some of the resources in the set, their demand for the others will increase even more, because they're already part of the way there.

By creating resources that are meant to be perfect or imperfect substitutes, or several resources that naturally "go together" with each other, you can change the demand (and therefore the market price) of each of them. So you can add these kinds of mechanics in order to influence the price.

Marginal pricing

As we discussed a long time ago with numeric systems, sometimes demand is a function of how much of a good you already have. If you have none of a particular resource, the first one might be a big deal for you, but if you have a thousand of that resource then one more isn't as meaningful to you, so demand may actually be on a decreasing curve based on how many of the thing you already have. Or maybe, if you can use lots of resources more efficiently to get larger bonuses, it might be that collecting one resource means your demand for more of that resource increases. If you give increasing returns for each additional good that a player possesses, you'll tend to see the game quickly organize into players going for monopolies of individual goods, as once a player has the majority of a good in the game they're going to want to buy the rest of them. The same is true on the supply side, where producing lots of a given resource might be more or less expensive per-unit than producing smaller amounts; if you have decreasing returns for players if they collect a lot of one good, then adding a decreasing cost for producing the good might make a lot of sense if you want the price of that good to be a little more stable.

Scarcity

I probably don't need to tell you this, but if the total goods are limited, that increases demand. You see this exploited all the time in marketing, when a company wants you to believe that they only have limited quantities of something, so that you'll buy now (even at a higher price) because you don't want to miss your chance. So you can really change the feeling of a game just by changing whether a given resource is limited or infinite.
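Backing up to the marginal pricing idea for a moment: the decreasing demand curve can be sketched as a function of how much the player already holds. The halving decay rate here is invented purely for illustration:

```python
# Sketch of decreasing marginal demand (decay rate invented): each unit
# already owned reduces a player's willingness to pay for the next one.

def willingness_to_pay(base_price, already_owned, decay=0.5):
    """Willingness to pay for one more unit, halving per unit held."""
    return base_price * (decay ** already_owned)

print(willingness_to_pay(100, 0))   # 100.0 -- the first unit is a big deal
print(willingness_to_pay(100, 3))   # 12.5
print(willingness_to_pay(100, 10))  # roughly 0.1 -- one more barely matters
```

Flipping `decay` above 1.0 models the increasing-returns case instead, where each unit held makes the next one worth *more* – the monopoly-chasing behavior described above.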

As an example, consider a first-person-view shooting video game where you have limited ammunition. First, imagine it is strictly limited: you get what you find, but that's it. A game like that feels more like a survival-horror game, where the player only uses their ammo cautiously, because they never know when they'll find extra or when they'll run out. Compare to a game where you have enemies that respawn in each area, random item drops, and stores where you can sell the random drops and buy as much extra ammo as you need. In a game like that, a player is going to be a lot more willing to experiment with different weapons, because they know they'll get all of their ammo back when they reach the next ammo shop, which makes the game feel more like a typical FPS. Now compare that with a game where you have completely unlimited ammo, so it's not even a resource or an economy anymore, where you can expect the player to be shooting more or less constantly, like some of the more recent action-oriented FPSs. None of these methods is "right" or "wrong," but they all give very different player experiences. My point is just that you increase demand for a good (and decrease the desire to actually consume it now, because you might need it later) the more limited it is.

Also, if the resources of your game drive players towards a victory condition, making the resource limited is a great way to control game length – essentially, to place an upper limit on game length. For example, in most RTS games, the board has a limited number of places where players can mine a limited amount of resources, which they then use to create units and structures on the map. Since the core resources that are required to produce units are themselves limited, this gives the game a natural "time limit" of sorts: eventually players will run out, and once it runs out the players will be unable to produce any more units. By adjusting the amounts of resources on the map, you can control when this happens. If the players drain the board dry of resources in the first 5 minutes, you're going to have pretty short games… but if it takes an hour to deplete even the starting resources near your start location, then the players will probably come to a resolution through military force before resource depletion forces the issue, and the fact that they're limited at all is just there to avoid an infinite stalemate.

Open and closed economies

Economies are systems, and in systems, we say a system is "open" if it can be influenced by things from outside the system itself, and it is "closed" if the system is completely self-contained. Most game economies are closed systems: you can generate or spend money within the game, but that's it. Closed systems are a lot easier to manage from a design standpoint, because we have complete control as designers over the system; we know how the system works, and we can predict how changes in the system will affect the game. With multiplayer games in a closed economy, you also want to be very careful with strictly limited goods, because there is sometimes the possibility that a single player will collect all of a good, essentially preventing anyone else from using it; you should decide as a designer if that should be possible – if it's desirable – and if not, what you can do to prevent it. In some cases this fixes itself: if resources do no good until a player actually uses them (and using them puts them back in the public supply), then the player who gets a monopoly on the good has incentive to spend them, which in turn removes the monopoly.

Open economies are a lot harder, because we don't necessarily have control over the system anymore, and an open economy has different design considerations than a closed economy. A simple example of an open economy in a game is in Poker, if additional player buy-ins are allowed. If players can bring as much money as they want to the table, then if skill is equal, a sufficiently rich player could have an unfair advantage: they could just keep buying more chips until the luck in the game turns their way. (This is why additional buy-ins are usually restricted or disallowed in tournament play.)

Interestingly, some people get very uncomfortable if you try to change a closed system to an open one: next time you play Monopoly, try offering one of your opponents a real-world cash dollar in exchange for 500 of their Monopoly dollars as a trade, and see what happens – at least one other player will probably become very upset!
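Returning to the RTS example above: the resource-depletion "time limit" is simple arithmetic – total mineable resources divided by the total harvest rate. All the numbers below are invented for illustration:

```python
# Sketch of the RTS game-length knob described above (rates invented):
# how long until the map is mined dry at a constant harvest rate.

def depletion_time_minutes(total_resources, harvesters, rate_per_harvester):
    """Minutes until all resources on the map are exhausted."""
    return total_resources / (harvesters * rate_per_harvester)

# A map with 12,000 total resources and 8 workers gathering 50 per minute:
print(depletion_time_minutes(12_000, 8, 50))  # 30.0 minutes
```

Doubling the resources on the map doubles that upper bound on game length; this is the kind of knob you turn when playtests run too short or too long.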

One final example of an open economy is in any game, most commonly MMOs, where players can trade or "gift" resources within the game, because in any of those cases you can be sure a secondary economy will emerge where players will exchange real-world money for virtual stuff. Just google "World of Warcraft Gold" and you'll probably find a few hundred websites where you can either purchase Gold for real-world cash, or sell Gold to them and get paid in cash. In any case, this is something that will happen, so you need to decide how to deal with that, and developers have to be careful with exactly what the player can and can't buy: if the player can purchase an advantage over their opponents, that can make the game very unbalanced very quickly, especially in games that are competitive by nature.

Another place where this can be a problem is CCGs, where a player spending more money can buy more cards and have a greater collection. To solve this balance problem, ideally, for the game to be balanced, we would want larger collections to give players more options but not more power, which is why I think rarity shouldn't be a factor in the cost curve of such a game.

There are a few options you can consider if you're designing an online game like this with any kind of trade mechanic:

• You could just say that trading items for cash is against your Terms of Service, and that any player found to have done so will have their account terminated. Unfortunately, it can be really hard to track this, especially in games like Diablo where there isn't really an in-game trading mechanism and instead the players just drop stuff on the ground and then go pick it up. Even if it's easy to track (because trades are centralized somewhere), this is mostly a problem because it's a huge support headache: you get all kinds of players complaining to you that their account was banned, and just sending them a form email with the TOS still takes time, which can be significant for a large game. Trying to enforce sanctions against scammers is an unwinnable game of whack-a-mole.

• You could say that an open economy is okay, but you don't support it, so if someone takes your money and doesn't give you the goods, it's the player's problem and not the developer's. Unfortunately, it is still the developer's problem, because you will receive all kinds of customer support emails from players claiming they were scammed, and whether you fix it or not you still have to deal with the email volume. If you don't fix it, then you have to accept you're going to lose some customers and generate some community badwill. If you do fix it, then accept that players will think of you as a "safety net," which actually makes them more likely to get scammed, since they'll trust other people by assuming that if the other person isn't honest, they'll just send an email to support to get it fixed.

• You can formalize trading within your game, including the ability to accept cash payments. The good news for this is that players have no excuses; my understanding is that when Sony Online did this for some of their games, the huge win for them was something like a 40% reduction in customer support costs. The bad news is that you will want to contact a lawyer on this, to make sure you don't accidentally run afoul of any national banking laws, since you are now storing players' money. You again want to make sure that whatever players can trade in game does not unbalance the game if a single player uses cash to buy lots of in-game stuff. One common pattern to avoid this is to place restrictions on items: for example, maybe you can purchase a really cool suit of armor, but you have to be at least Level 25 to wear it.

You'll also want to consider whether players are allowed to sell their entire character, password and all. For an MMO where each player has an individual account on your server that isn't linked to anything else, selling a whole character doesn't unbalance the game. For Facebook games this is less of an issue, because a Facebook account links to all the games and it's not so easy for a player to give that away.

Some designers intentionally unbalance their game in this way: you set up an in-game economy that essentially has a minimum bar for money to spend if you want to be competitive, assuming that if they create a financial incentive to gain a gameplay advantage, players will pay huge amounts. To be fair, if the game itself isn't very compelling in its core mechanics, then this might be the only thing you can fall back on to make money… but if you set out to do this from the start, I would call it lazy design. And if more money always wins, the higher your initial required buy-in is, the fewer people who will be willing to pay (and thus the smaller your player base).

A better method, at least if you want to maximize your player base, is to create a game that's actually worth playing for anyone, and then offer to trade money for time (so, maybe you get your next unlock after another couple of hours of gameplay, and the gameplay is fun enough that you can do that without feeling like the game is arbitrarily forcing you to grind… but if you want to skip ahead by paying a couple bucks, we'll let you do that). In this way, money doesn't give any automatic gameplay advantage; it just speeds up the progression that's already there. This is a typical pattern for free-to-play MMOs and Facebook games. (It's less of an issue in games like FarmVille where there isn't really much competition anyway, no matter how big the stat bonuses are for someone else.)

Lastly, if you really don't want people to buy in-game goods for cash, you should ask yourself why your trading system that you designed and built even allows it.

Inflation

Now, remember from before that the demand curve is based on each player's maximum willingness to pay for some resource. Normally we'd like to think of the demand curve as this fixed thing, maybe fluctuating slightly if a different set of players or situations happen to be online. But there are a few situations that can permanently shift the demand curve in one direction or another, and the most important for our purpose is when each player's maximum willingness to pay increases.

Why would you change the amount you're willing to pay? Mostly, because you can afford it: if you have more purchasing power, you are willing to pay more, after all. And in the real world we also have supply and demand curves: if I doubled your income overnight but Starbucks raised the price of its coffee from $5 to $6, and if you liked their coffee before, you would probably be willing to pay the new price.

How does this work in games? Consider a game with a positive-sum economy: that is, as we've discussed before, it is possible for me to generate wealth and goods without someone else losing them. The cash economy in the board game Monopoly is like this; so is the commodity economy in Catan, as is the gold economy in most MMOs. This means that over time, players get richer. With more total money in the economy (and especially, more total money per player on average), we see what is called inflation: the demand curve shifts to the right as more people are willing to pay higher prices, which then increases the market price of each good to compensate.

(On the bright side, inflation isn't a problem for the existing players: this doesn't affect the balance of the game for them, because their purchasing power and the prices rise together, and over time it should balance out.) However, it is a major problem for new players, who enter the game to find that they're earning one gold piece for every five hours of play, and anything worth having in the game costs millions of gold; and they can never really catch up, because even once they start earning more money, inflation will just continue. So if you're running an MMO where players can enter and exit the game freely, inflation is a big long-term problem you need to think about.

In Monopoly, the main problem is that the economy is positive-sum but the object of the game is to bankrupt your opponents: by the time you're in the late game and willing to trade vast quantities of stuff for what you need, you're at the point where you're so close to winning that no one else is willing to trade with you anyway. Here we see that one possible solution to this is to change the victory condition to "be the first player to get $2500" or something like that.

In MMOs, there are two ways to fix this: reduce the positive-sum nature of the economy, or add negative-sum elements to counteract the positive-sum ones.
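A toy model of the inflation loop above (all rates invented): with positive-sum faucets only, average money per player grows without bound, which is what shifts the demand curve right; draining money back out at a matching rate holds it steady.

```python
# Toy model (rates invented): average gold per player over time, given a
# daily positive-sum "faucet" and a daily negative-sum drain.

def average_wealth(days, faucet_per_day, sink_per_day):
    """Average gold per player after `days` at constant rates."""
    return (faucet_per_day - sink_per_day) * days

print(average_wealth(365, faucet_per_day=50, sink_per_day=0))   # 18250
print(average_wealth(365, faucet_per_day=50, sink_per_day=45))  # 1825
print(average_wealth(365, faucet_per_day=50, sink_per_day=50))  # 0
```

In the first case, a new player who starts at zero gold faces prices set by veterans holding 18,250 each; in the last, the drains exactly cancel the faucets and the money supply stays flat.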

Negative-sum elements are sometimes called "money sinks": that is, some kind of mechanism that permanently removes money from the player economy. Money sinks take many forms:

1. Any money paid to NPC shopkeepers for anything, especially if that something is a consumable item that the player uses and then it's gone for good.
2. Any money players have to pay as maintenance and upkeep – in World of Warcraft, for example, having to pay gold to repair your weapon and armor periodically.
3. Losing some of your money (or items or stats or other things that cost money to replace) when you die in the game.
4. While I don't know of any games that do this, it works in real life: have an "adventurer's tax" that all players have to pay as a periodic percentage of their wealth. This not only gives an incentive to spend, but it also penalizes the players who are most at fault for the inflation.
5. Offering limited quantities of high-status items, which can remove large amounts of cash from the economy when a few players buy them, especially if those items are purely cosmetic in nature and not something that gives a gameplay advantage.

The trick is balancing the two so that on average, it cancels out. A good way to know this is to actually take metrics on the total sum of money in the game, and the average money per person, and track that over time to see if it's increasing or decreasing.

To reduce the positive-sum nature of the economy is a bit harder, because players are used to going out there, killing monsters and getting treasure drops. If you give no rewards, players will wonder why they're bothering to kill monsters at all. If you make the monsters limited (so they don't respawn), then the world will become depopulated of monsters very quickly. In theory you could do something like this:

• Monsters drop treasure but not gold, and players can't sell or trade the treasure that's dropped.
• Players receive gold from completing quests, but only the first time each quest.
• Players can use the gold to buy special items in shops, so essentially it is like the players have a choice of what special advantages to buy for their character.
• Players can't trade gold between themselves, so the gold they have at any given point in the game is limited.

Another alternative would be to actually redistribute the wealth: so, instead of just removing money from the economy, you could transfer some money from the richest players and distribute it among the poorest. That on its own would be zero-sum and wouldn't necessarily fix the inflation problem, but it would at least give the newer players a chance to catch up and increase their wealth over time.

One other, final solution here is the occasional server reset, when everyone loses everything and has to start over. This doesn't solve the inflation problem – a new player coming in at the end of a cycle has no chance of catching up – but it does at least mean that if they wait for everything to reset, they'll have as good a chance as anyone else after the reset.

Trading

Some games use trading and bartering mechanics extensively within their economies, especially within a closed economy. Trading can be a very interesting mechanic if players have a reason to trade; usually that reason is that you have multiple goods, and each good is more valuable to some players than others. In Settlers of Catan, sheep are nearly useless to you if you want to build cities, but they're great for settlements. In World of Warcraft, a piece of gear that can't be equipped by your character class isn't very useful to you, but to another player it might make their current equipment a little better. In Monopoly, a single color property isn't that valuable, but it becomes much more powerful if you own the other matching ones. By giving each player an assortment of things that are better for someone else than for them, you give players a reason to trade resources. Trading mechanics usually serve as a negative-feedback loop.

Players are generally more willing to offer favorable trades to those who are behind, while they expect to get a better deal from someone who is ahead (or else they won't trade at all).

In short, there are a lot of ways to include trading in your game; it isn't as simple as just saying "players can trade"… but this is a good thing, because it gives you a lot of design control over the player experience. Here are a few options to consider:

• Can players make deals for future actions as part of the trade ("I'll give you X now for Y now and Z later")? If so, players can "buy on credit," so to speak. Trades with future considerations tend to be more complicated, but they also give the players a lot more power to strike interesting deals.

• If future deals are allowed, are future deals binding, or can players renege? Disallowing future deals makes trades simpler, and there's also less opportunity for one weak player to make a bad trade that hands the game to someone else. If future deals are non-binding, players will tend to be a lot more cautious and paranoid about making them. Think about whether you want players to be inherently mistrustful and suspicious of each other, or whether you want to give players every incentive to find ways of cooperating.

• Can players only trade certain resources but not others? For example, in Catan you can trade resource cards but not victory points or progress cards; in Monopoly you can trade anything except developed properties; in some other games you can trade anything and everything. Resources that are tradable are of course a lot more fluid than others. Some resources may be so powerful (like Victory Points) that no one in their right mind would want to trade them, so simply making them untradeable stops the players from ruining the game by making bad trades.

• Can players only trade at certain times? In Catan you can only trade with the active player before they build; in Bohnanza there is a trading phase as part of each player's turn; in Monopoly you can trade with anyone at any time. Whatever you choose, be very clear about exactly when players can and can't trade.

• If players can trade at any time, consider if they can trade at "instant speed" in response to a game event, because sometimes the ability to react to game events can become unbalanced. For example, in Monopoly you could theoretically avoid Income Tax by trading all of your stuff to another player, then taking it all back after landing on the space, and you could offer the other player a lesser amount (say, 5%) in exchange for the service of providing a tax shelter.

• If trading events in the game are infrequent (say, you can only trade every few turns or something), expect trading phases to take longer, as players have had more time to amass tradable resources, so they will probably have a lot of deals to make. If this is a problem, consider adding a timer where players only have so much time to make deals within the trading phase.

• Does the game require all trades to be even (e.g. one card for one card), or are uneven trades allowed (Catan can have uneven numbers of cards, but at least one per side; other games might allow a complete "gift" of a trade)? Requiring even trades places restrictions on what players can do and will reduce the number of trades made, but it will also cause trading to move along faster, because there's less room for haggling. I could even imagine a game where uneven trades are enforced: if you trade at all, someone must get the shaft.

• Are trades limited in quantity, or unlimited? A specific problem here is the potential for the "kingmaker" problem, where one player realizes they can't win, but the top two players are in a close game, and the losing player can choose to "gift" all of their stuff to one of the top two players to allow one of them to win. Sometimes social pressure prevents people from doing something like this, but you want to be very careful in tournament situations and other "official" games where the economic incentive of prizes might trump good sportsmanship.
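Several of the questions above boil down to a legality check on a proposed trade. Here is a hypothetical rule set, Catan-flavored but with the specifics invented for illustration: uneven trades are allowed, each side must give at least one card, and victory points are flagged untradeable.

```python
# Hypothetical trade-legality rules (invented for illustration):
# uneven counts allowed, no one-sided gifts, some resources untradeable.

UNTRADEABLE = {"victory_point"}

def trade_is_legal(offer, request):
    """offer/request are lists of resource names, one entry per card."""
    if not offer or not request:                  # no complete "gifts"
        return False
    if UNTRADEABLE & (set(offer) | set(request)):
        return False                              # VPs can't change hands
    return True                                   # uneven counts are fine

print(trade_is_legal(["wood", "wood"], ["sheep"]))           # True
print(trade_is_legal(["wood"], []))                          # False (gift)
print(trade_is_legal(["victory_point"], ["wood", "brick"]))  # False
```

Each design answer from the list above maps to one line here: allowing gifts means deleting the first check, requiring even trades means adding `len(offer) == len(request)`, and so on.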

(One more note on the kingmaker "gift": watch out for "official" games where the economic incentive of prizes might trump good sportsmanship. I actually played in a tournament game once where top prize was $20, second prize was $10, and I was in a position to decide who got what… so I auctioned off my pieces to the highest bidder.)

• Are trades direct, or indirect? Usually a trade just happens: I give you X, and you give me Y. But it's also possible to have some kind of "trade tax," where (for example) 10% of a gift or trade is removed and given to the bank. This seems strange – why offer trading as a mechanic at all, if you're then going to disincentivize it? – but in some games trading may be so powerful (if two players form a trading coalition for their mutual benefit, allowing them both to pull ahead of all other players, for example) that you might need to apply some restrictions to limit trades, just to prevent them from dominating the rest of the game.

• Is there a way for any player to force a trade with another player against their will? Trades usually require both players to agree on the terms, but you can include mechanisms that allow one player to force a trade on another under certain conditions. For instance, in a set collection game, you might allow a player, once per game, to force-trade a more-valuable single item for a less-valuable one from an opponent.

Auctions

Auction mechanics are a special case of trading: sometimes one player auctions off their stuff to the other players, and in other cases the "bank" creates an item out of thin air and it is auctioned off to the highest-bidding player. Auctions often serve as a self-balancing mechanic, in that the players are ultimately deciding on the price of how much something is worth, so if you don't know what to cost something, you can put it up for auction and let the players decide. (Leaning on this for everything, though, is lazy design.)

Auctions are a very pure form of "willingness to pay," because each player has to decide what they're actually willing to pay, so that they can make an appropriate bid. A lot of times there are meta-considerations: not just "how much do I want this for myself," but also "how much do I not want one of my opponents to get it, because it would give them too much power," or even "I don't want this, but I want an opponent to pay more for it, so I'll bid up the price and take the chance that I won't get stuck with it at the end."

Note that with an effect that is always worth the same amount, the "auction" is meaningless once players figure out how much it's worth: they'll just bid what it's worth and be done with it. Auctions work the best when the actual cost is variable, different between players, and situational, so that figuring out how much an item is worth changes from game to game, and the players are actually making interesting choices each time.

An interesting point with auctions: normally you would assume that if the auction goes to the highest bidder, the item up for auction sells for the highest willingness to pay among all of the bidders – that's certainly what the person auctioning the item wants, since their goal is to get the highest price. But in reality, the actual auction price is usually somewhere between the highest and second-highest willingness to pay, and in fact it's usually closer to the second-highest: if everyone refuses to bid beyond their own maximum willingness to pay, the person with the highest willingness will purchase the item for one unit more than the second-highest willingness. This makes the auction inefficient (in the sense that you'd ideally want the item to go for the highest price), but as we'll see, that is a problem with most auctions – sometimes you end up selling for much lower, although that depends on the auction type.
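The claim that the price lands near the second-highest willingness to pay can be sketched in a few lines. This toy model (with made-up numbers, not from any real game) assumes an open auction in which every bidder drops out exactly at their own maximum:

```python
# Toy model: in an open auction where every bidder quits at their own
# maximum willingness to pay, the winner pays one unit more than the
# SECOND-highest willingness -- not their own maximum.
def english_auction(willingness):
    """willingness[i] is bidder i's maximum price; returns (winner, price)."""
    ranked = sorted(range(len(willingness)), key=lambda i: willingness[i], reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    price = willingness[runner_up] + 1   # bidding stops once the runner-up is priced out
    return winner, price

winner, price = english_auction([12, 30, 18, 25])
print(winner, price)  # 1 26 -- bidder 1 wins, paying 26 rather than their full 30
```

The seller would love to collect the full 30, but the bidding only has to climb high enough to knock out the second-place bidder.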

Just as there are many kinds of trading, there are also many kinds of auctions. Here are a few examples:

• Open auction. A single item is up for bid, any player can call a higher bid at any time, and when no other bids happen, someone says "going once, going twice, sold," and the highest bid wins. This is the type most people think of when they think of auctions.

• Fixed price auction. In turn order, each player is offered the option to purchase the item at a set price, or decline; it goes around until someone accepts, or everyone declines. This gives an advantage to the first player, who gets the option to buy (or not) before anyone else – and if it's offered at less than the first player's willingness to pay, they should accept, since it is a fixed-price auction for them and they don't have to worry about being outbid. How efficient this auction is depends on how well the fixed price is chosen.

• Circle auction. In turn order, each player can either make a bid (higher than the previous one) or pass. It goes around once, with the final player deciding whether to bid one unit higher than the current highest bid, or let the other player take it. This gives an advantage to the last player, so they may be able to offer less than their top willingness to pay.

• Silent auction. Here, everyone secretly and simultaneously chooses their bid, all reveal at once, and the highest bid wins. There may be some bluffing involved, as players are not only trying to figure out their own maximum willingness to pay, but also other players' willingness – especially if players can "read" each others' faces in real-time to try to figure out who is interested and who isn't. You may bid lower than your maximum willingness to pay because you expect other players' bids to be lower, so you expect to bid low and still win. You need to include some mechanism of resolving ties, since sometimes two or more players will choose the same highest bid. The auction game Fist of Dragonstones had a really interesting variant of this.

• Dutch auction. You have an item that starts for bid at a high price, and there's some kind of timer that counts down the price at a fixed rate (say, dropping by $1 per second, or something). The first player to accept at the current price wins. These are rare in the States, as they require some kind of special equipment. In theory, this means that as soon as the price hits the top player's maximum willingness to pay, they should accept; but there may be some interesting tension if they're willing to wait (and possibly lose out on the item) in an attempt to get a better price.

• Some auctions are what are called negative auctions, because they work in reverse: instead of something good happening to the highest bidder, something bad happens to the lowest bidder, so players are bidding for the right to not have something bad happen to them. Even if a player doesn't particularly want any given item, they may be willing to pay a small amount in order to avoid getting stuck with nothing – so sometimes it can be dangerous to not bid as well.

• Sometimes an entire set of items is auctioned off at the same time, in draft form: the top bid simply gets first pick, then the second-highest bidder, and so on. The lowest bidder gets the one thing no one else wanted… or sometimes they get nothing at all.

• Another variation: the top bidder takes something from the second-highest bidder (or from the lowest bidder, or the bidder can choose to take from any of their opponents). This can often have some intransitive qualities to it, and if the top bidder takes from the bottom bidder, that gives players an incentive to bid high even if they don't want anything.

Whatever the format, you can also adjust the incentives:

• If you want to give players an incentive to save their money by bidding zero, giving the last-place bidder a "free" item is a good way to do that – but of course, if multiple players bid zero, you'll need some way of breaking the tie.

• If it's important to have negative feedback on auction wins, so that a single player shouldn't win too many auctions in a row, giving a bonus to everyone who didn't win (or even just the bottom bidder) in the next auction is a way to do that.
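The Dutch auction is easy to model if you assume every player accepts as soon as the falling price reaches their maximum willingness to pay (real players, as noted above, may gamble and wait). A toy sketch with made-up numbers:

```python
# Toy Dutch auction: price starts high and drops each tick; the first
# bidder whose maximum willingness to pay meets the current price takes
# the item. Assumes naive bidders who never wait for a better price.
def dutch_auction(start_price, drop_per_tick, willingness):
    price = start_price
    while price > 0:
        for bidder, max_pay in enumerate(willingness):
            if max_pay >= price:         # this bidder accepts at the current price
                return bidder, price
        price -= drop_per_tick           # timer ticks down, price falls
    return None, 0                       # nobody wanted it at any price

print(dutch_auction(100, 5, [40, 72, 65]))  # (1, 70)
```

Notice that under this naive assumption the item sells at (nearly) the highest willingness to pay – which is exactly why the interesting tension comes from bidders who are willing to wait.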

Even once you decide who gets what, there are several ways to define who pays their bid:

• The most common, for a single-item auction, is that the top bidder pays their bid, and all other players spend nothing.

• The top bidder can win the auction but only pay an amount equal to the second-highest bid; they get to keep the extra in their own pocket. Game theory tells us, through some math I won't repeat here, that in a silent auction with these rules, the best strategy is to bid your maximum willingness to pay.

• If the top two players pay their bid (but the top bidder gets the item and the second-highest bidder gets nothing), that makes it dangerous to bid if you're not hoping to win – meaning that if you bid for the auction at all, you wanted to be sure you won and didn't come in second place! (If only the top player gets anything, then everyone else is in second place, after all.) If you do something like this with an open auction, things can get out of hand very quickly, as each of the top two bidders is better off paying the marginal cost of outbidding their opponent than losing their current stake. For example, if you auction off a dollar bill in this way and (say) the top bid is 99 cents and the next highest is 98 cents, the second-highest bidder has an incentive to bid a dollar (which lets them break even rather than lose 98 cents)… which then gives incentive to the 99-cent bidder to paradoxically bid $1.01 (because in such a situation they're only losing one cent rather than 99 cents) – and if both players follow this logic, they could be outbidding each other indefinitely!

• You may want to have all players pay their bid, win or lose. In a series of such auctions, a player may choose to either go "all in" on one or two auctions to guarantee winning them, or else they may spread their money out and try to take a lot of things for cheap when no one else bids. This is most common in silent auctions, where players are never sure of exactly what their opponents are bidding.

• If everyone pays their bid except the lowest bidder, that actually gives players an incentive to bid really high or really low, turning the auction into an intransitive mechanic where you either want to bid higher than anyone else, or low enough that you don't lose anything; making medium bids suddenly becomes very dangerous. It also makes low bids appear safer than they are, as there's always the chance someone will protect you from losing your bid by bidding zero themselves.

• If only the top and bottom bidders pay, the extremes carry the risk, which again warps players' bids away from their true willingness to pay.

• In some cases, you may want to force players to bid a certain minimum, so there is always at least something at stake to lose; otherwise, if zero bids are possible, a player could bid zero and pay no penalty.

Even if you know who has to pay what bid, you have to decide what happens to the money from the auction:

• Usually it's just paid "to the bank" – that is, it's removed from the economy entirely, which over many auctions can lead to deflation.

• The winning bid may also be paid to one or more other players, in a number of ways; maybe the bid is divided and evenly split among all the other players, making the auction partially or completely zero-sum.

• It could be paid into some kind of holding block that collects auction money, and is then redistributed to one or more players later in the game when some condition is met. The board game Lascaux has an interesting bid mechanic along these lines: each player, in turn order, pays a chip to stay in the auction, and on their turn a player can choose instead to drop out of the auction and take all of the chips the players have collectively paid so far; the auction continues with the remaining players. Thus it is up to each player to decide whether it's a good enough time to drop out (and gain enough auction chips to win more auctions later), or whether it's worth staying in for one more go-round – hoping everyone else will stay in, thus increasing your take when you drop out – or even continuing to stay in with the hopes of winning the auction.
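The runaway escalation in the pay-your-bid dollar example can be simulated directly. This toy model assumes two purely loss-minimizing bidders; the 150-cent cap is an arbitrary point at which we stop the simulation, since left alone the logic never ends:

```python
# Toy model of the dollar-bill escalation, where both of the top two
# bidders must pay. Each "losing" bidder compares walking away (losing
# their whole bid) with outbidding (paying slightly more, but winning
# the 100-cent bill), so bids climb far past the bill's actual value.
def dollar_auction(value=100, increment=1, cap=150):
    bids = [98, 99]                                        # opening bids, in cents
    while max(bids) < cap:
        loser = 0 if bids[0] < bids[1] else 1
        cost_to_quit = bids[loser]                         # forfeit the current bid
        cost_to_win = bids[1 - loser] + increment - value  # net cost after winning
        if cost_to_win < cost_to_quit:
            bids[loser] = bids[1 - loser] + increment      # outbidding looks "cheaper"
        else:
            break
    return bids

print(dollar_auction())  # both bids end up well above the 100-cent value
```

At every step, outbidding really is the locally rational move – which is exactly why the auction as a whole is a trap.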

Lastly, no matter what kind of auction there is, you have to decide what happens if no one bids:

• What usually happens is that the item is thrown out, and the game continues from there as if the auction never happened.

• As an alternative, the auction could have additional incentives added. If one resource is being auctioned off and no one wants it, a second resource could be added and the set auctioned… and then if no one wants that, add a third resource, and so on, until someone finally thinks it's worth it.

• Or, it could be that one of the players gets the item for free (or at minimal cost). If that player is known in advance to everyone, that's often an incentive to open the bidding: when the players know that the default state is one of their opponents getting a bonus, it gives them an incentive to bid just to prevent someone else from getting an item for free. If players don't know who it is (say, a random player gets the item for free), then players may be more likely to not bid, as they have just as good a chance as anyone else.

Needless to say, there are a lot of considerations when setting up an in-game economy! Like most things, there are no right or wrong answers here, but hopefully I've at least given you a few different options to consider.

Solving common problems in multiplayer

This next section didn't seem to fit anywhere else in the course, so I'm mentioning it here; if it seems out of place, that's why. In multiplayer free-for-all games where there can only be one winner, there are a few problems that come up pretty frequently. They are things that usually aren't much fun, so you want to be very careful of them.

Turtling

One problem, especially in war games and other games where players attack each other directly, is that if you get in a fight with another player – even if you win – it still weakens both of you relative to everyone else. The wise player reacts to this by doing their best to not get in any fights, instead building up their defenses to make themselves a less tempting target… and then, when all the other players get into fights with each other, they swoop in and mop up the pieces while everyone else is in a weakened state. The game balance problem is that attacking – you know, actually playing the game – is not the optimal strategy. The system is essentially rewarding the players for not interacting with each other, and if the interaction is the fun part of the game, then you can hopefully see why this is something that needs fixing.

The most direct solution is to reward or incentivize aggression. A simple example is the board game RISK, where attackers and defenders both lose armies, so you'd normally want to just not attack. The game goes to great lengths to avoid turtling by giving incentives to attack: more territories controlled means you get more armies next turn if you hold onto them, same for continent bonuses… and let's not forget the cards you can turn in for armies, where you only get a card on a turn in which you attack.

Another solution is to force the issue by making it essentially impossible to not attack. Plague and Pestilence and Family Business are both light card games where you draw 1, then play 1, and you must play one card each turn (choosing a target opponent where necessary). A few cards are defensive in nature, but most cards hurt opponents, so before too long you're going to be forced to attack someone else – it's simply not possible to avoid making enemies.

Kill the leader and Sandbagging

One common problem in games where players can directly attack each other is that everyone, by default, will gang up on the leader, especially when it's very clear who is in the lead. On the one hand, this can serve as a useful negative feedback loop in your game, making sure no one gets too far ahead; whether that's a balance or an imbalance depends on the game. On the other hand, players tend to overshoot (so the leader isn't just kept in check, they're totally destroyed), and it ends up feeling like a punishment to be doing well.
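The negative-feedback effect of everyone piling onto the leader can be illustrated with a toy model (all numbers arbitrary, not from any real game): four players of unequal strength gain points each round, and optionally everyone hits whoever is currently ahead.

```python
# Toy model of kill-the-leader as a negative feedback loop. Four players
# of different strength gain points every round; with gang_up enabled,
# all three opponents then hit whoever is currently in the lead.
SKILL = [4, 5, 6, 7]

def play(rounds, gang_up):
    scores = [0, 0, 0, 0]
    for _ in range(rounds):
        for p in range(4):
            scores[p] += SKILL[p]            # normal progress
        if gang_up:
            leader = scores.index(max(scores))
            scores[leader] -= 9              # three opponents hit the leader for 3 each
    return scores

print(play(20, gang_up=False))  # [80, 100, 120, 140] -- the strongest player runs away
print(play(20, gang_up=True))   # scores stay tightly bunched instead
```

With the gang-up rule, the lead keeps rotating and the final scores cluster together – the loop keeps anyone from pulling ahead, but note that the "best" player ends up barely better off than the worst, which is exactly the punishment-for-doing-well feeling described above.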

Now, in RISK it is certainly arguable that having everyone attack the leader is a good strategy in some ways… but on the other hand, the game also gives you an incentive to attack weaker players, because if you eliminate a player from the game you get their cards, which can give you a big army bonus.

In response to kill-the-leader, a second dynamic emerges, which I've seen called sandbagging: if a player is doing well enough that they're in danger of taking the lead, they will intentionally play suboptimally in order to not make themselves a target. The idea is that if it's dangerous to be the leader, then you want to be in second place. As with turtling, the problem here is that players aren't really playing the game you designed; they're working around it.

The good news is that a lot of things have to happen in combination for this to be a problem, and you can break the chain of events anywhere to fix it:

• Players need a mechanism to join forces and "gang up" on a single player. In a foot race, attacking the leader is impossible, so you don't see any kill-the-leader strategies in marathons. In an FPS multiplayer deathmatch, players can attack each other, but the action is moving so fast that it's hard for players to work together (or really, to do anything other than shoot at whoever's nearby). So, if you make it difficult or impossible for players to form coalitions or to coordinate strategies, the problem goes away.

• Or, even if players can coordinate, they need to be able to figure out who the leader is. Lots of Eurogames have players keep their Victory Points secret for this reason: if your game uses hidden scoring, or if the end goal can be reached a lot of different ways so that it's unclear who is closest, players won't know who to go after. Some Eurogames also have just a few defined times in the game where players score points, with each successive scoring opportunity worth more than the last, so in the middle of a round it's not always clear who's in the lead… and players know that even the person who got the most points in the first scoring round has only a minor advantage at best going into the final scoring round.

• Or, even if players can coordinate and they know who to attack, the game's systems can make attacking the leader an unfavorable strategy, or can offer other, better strategies.

Keep in mind that kill-the-leader is a negative feedback loop. If you choose to remove it, and you were relying on that negative feedback to keep the game balanced, the game might now have an unchecked positive feedback loop that naturally helps the leader – so consider adding another form of negative feedback to compensate for the removal of this one. Alternatively, the "textbook solution" is to add a compensating positive feedback loop that helps the leader defend against the attacks; if you want a dynamic where the game starts equal but eventually turns into one-against-many, this might be the way to go.

Kingmaking

A related problem is when one player is too far behind to win, but they are in a position to decide which of two other players wins. Sometimes this happens directly: in a game with trading and negotiation, the player who's behind might just make favorable trades to one of the leading players in order to hand them the game. Sometimes it happens indirectly, where the player who's behind has to make one of two moves as part of the game, and it is clear to everyone that one move makes one player win, and the other move causes another player to win. This is undesirable because it's anticlimactic: the winner didn't actually win because of superior skill, but instead because one of the losing players liked them better, and both of the leaders feel like the win wasn't really deserved. In a game with heavy diplomacy (like the board game Diplomacy), this might be tolerable – the game is all about convincing other people to do what you want, after all – but in most games, the game designer generally wants to avoid this situation.

As with kill-the-leader, there are a lot of things that have to happen for kingmaking to be a problem, and you can eliminate any of them:

• The players have to know their standing: who is winning, who is losing, and what actions will cause one player to win over another. If no player knows who is winning, then players have no incentive to help out a specific opponent.

• The player in last place has to know that they can't win, and that all they can do is help someone else to win. If every player believes they have a chance to win, there's no reason to give away the game to someone else – especially if the game already has built-in opportunities for players to catch up.

• The person in last place needs some way to affect the outcome. If the person in last place has no mechanism to help anyone else, then kingmaking is impossible, so you can reduce or eliminate ways for players to affect each other.

Player elimination

A lot of two-player games are all about eliminating your opponent's forces, so it makes sense that multi-player games follow this pattern as well. The problem is that when one player is eliminated and everyone else is still playing, that losing player has to sit and wait for the game to end, and sitting around not playing the game is not very fun.

With games of very short length, this is not usually a problem. If the entire game lasts two minutes and you're eliminated with 60 seconds left to go, who cares? Sit around and wait for the next game to start. Likewise, if player elimination doesn't happen until late in the game, this is not a problem: if players in a two-hour game start dropping around the 1-hour-50-minute mark, relatively speaking it won't feel like a long time to wait until the game ends and the next one can begin. It's when players can be eliminated early and then have to sit around and wait forever that you run into problems. There are a few mechanics that can deal with this:

• You can change the nature of your player elimination, perhaps disincentivizing players from eliminating their opponents, or removing elimination entirely. The board game Twilight Imperium makes it exceedingly dangerous to attack your opponents, because a war can leave you exposed; thus, players tend to not attack until they feel confident that they can come out ahead, so the only time a player will actually try to eliminate another is when they feel they're strong enough to eliminate everyone and win the game – which doesn't necessarily happen until late game.

• You can also change the victory condition, so that the game isn't about elimination at all. If the goal is to earn 10 Victory Points, then players can be so busy collecting VP that they aren't as concerned with eliminating the opposition. The card game Illuminati has a mechanism for players to be eliminated, but the victory condition is to collect enough cards (not to eliminate your opponents), so players are not eliminated all that often.

• One interesting solution is to force the game to end when the first player is eliminated: instead of the victory being decided as last-player-standing, victory goes to the player in best standing (by some criteria) when the first player drops out. This creates some tense alliances as one player nears elimination: the player in the lead wants that player eliminated, while everyone else actually wants to help that losing player stay in the game! The card game Hearts works this way. The video game Gauntlet IV (for the Sega Genesis) did something similar in its multiplayer battle mode, where as soon as one player was eliminated, a 60-second countdown timer started, and the round would end even if several players were still alive.

• You can also give the eliminated players something to do in the game after they're gone. Perhaps there are some NPCs in the game that are normally moved according to certain rules or algorithms, but you can give control of those to the eliminated players (my game group added this as a house rule in the board game Wiz-War, where an eliminated player would take control of all monsters on the board). Cosmic Encounter included rules for a seventh player, beyond the six that the game normally supports.

That seventh player in Cosmic Encounter gets "kibitzing" mechanics: they can wander around, look at people's hands, and give the other players information… and they have a secret goal to try to get a specific other player to win, so while they are giving away information, they also may be lying. And in Mafia/Werewolf and other variants, eliminated players can watch the drama unfold, so even though they can't interact, the game is fun to observe; most players don't mind taking on the "spectator" role.

Excel

Every game designer really needs to learn Excel at some point. Some of you probably already use it regularly; if you don't, you should learn your way around it, so consider this a brief introduction to how Excel works and how to use it in game design, with a few tricks from my own experience thrown in. For those of you who are already Excel experts, I beg your patience, and hope I can show you at least one or two little features that you didn't know before. (Note: I'm assuming Excel 2003 for PC, so the exact key combinations I list below may vary for you if you're using a different version or platform.)

Excel is a spreadsheet program, which means absolutely nothing to you if you aren't a financial analyst, so an easier way of thinking about Excel is that it's a program that lets you store data in a list or a grid. In fact, you can use it to keep things like a grocery list or to-do list in a column, with a separate column to keep track of whether each item is done or not, and that's a perfectly valid use (I've worked with plenty of spreadsheets that are nothing more than that – for example, a list of art or sound assets in a video game and a list of their current status).

Data in Excel is stored in a series of rows and columns: each row has a number, each column has a letter, and a single entry is in a given row and column. Any single entry location is called a cell (as in "cell phone" or "terrorist cell"), and is referred to by its column and row (like "A1" or "B19"). You can navigate between cells with the arrow keys or by clicking with the mouse.

Entering data into cells

In general, each cell can hold one of three things: a number, written text, or a computed formula. Numbers are pretty simple: just type a number in the formula bar at the top and then hit Enter (or click the little green checkmark if you prefer). Text is also simple: just type the text you want in the same way, although as you'll see, getting a lot of text to display in one of those tiny cells isn't trivial. What if you want to include text that looks like a number or formula, but you want Excel to treat it as text? Start the entry with a single apostrophe (') and then type anything you want, and Excel will get the message that you want it to treat that cell as text.

For a formula, start the entry with an equals sign (=) and follow with whatever you want computed. Most of the time you just want simple arithmetic, which you can do with the +, -, * and / characters. For example, typing =1+2 and hitting Enter will display 3 in the cell. You can also reference other cells: =A1*2 will take the contents of cell A1, multiply by 2, and put the result in whatever cell holds the formula. And the really awesome part is that if you change the value in A1, any formulas that reference it will change automatically – which is the main thing Excel does that saves you so much time.

Adding comments

Suppose you want to leave a note to yourself about something in one of the cells. One way to do this is just to put another text field next to it, but there are times when that's not practical. For those cases you can instead use the Comment feature to add a comment to the cell, which shows up as a little red triangle in the corner of that cell. Mousing over the cell reveals the comment.
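The way formulas update automatically can be sketched outside Excel, too. This little Python model is not how Excel is actually implemented – it's just an illustration of the idea: formulas are stored as rules rather than values, and are recomputed on every read, so changing A1 changes the result of =A1*2.

```python
# Tiny imitation of spreadsheet recalculation: values live in one dict,
# formulas in another, and reads of a formula cell recompute from the
# current values every time.
cells = {"A1": 10}
formulas = {"B1": lambda get: get("A1") * 2}   # B1 holds "=A1*2"

def get(name):
    if name in formulas:
        return formulas[name](get)   # recompute from the current values
    return cells[name]

print(get("B1"))   # 20
cells["A1"] = 7    # change the referenced cell...
print(get("B1"))   # 14 -- the formula's result updates automatically
```

This "values never go stale" property is the whole reason to prefer a formula over typing in the computed number by hand.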

Moving data around

Cut, copy and paste work pretty much as you'd expect them to. You can click and drag (or hold Shift while moving around with the arrow keys) to select a rectangular block of cells… or click on one of the row or column headings to select everything in that row or column… or click on the corner between the row and column headings (or hit Ctrl-A) to select everything. By holding Ctrl down and clicking on individual cells, you can even select several cells that aren't next to each other.

Now, if you paste a cell containing a formula a whole bunch of times, a funny thing happens: you'll notice that any cells that are referenced in the formula keep changing. For example, if you've got a formula in cell B1 that references A1, and you copy B1 and paste into D5, you'll notice the new formula references C5 instead. That's because, by default, all cell references are relative in position to the original. When you reference A1 in your formula in B1, Excel isn't actually thinking "the cell named A1"… it's thinking "the cell just to the left of me in the same row." So when you copy and paste the formula somewhere else, it'll start referencing the cell just to the left in the same row, wherever it lands. If you paste it into a cell in column A (where there's nothing to the left, because you're already all the way on the left), you'll see a result that's an error – #REF! – which means you're referencing a cell that doesn't exist.

If you want to force Excel to treat a reference as absolute, so that it references a specific cell no matter where you copy or paste to, there are two ways to do it. First is to use a dollar sign ($) before the column letter or row number or both in the formula, which tells Excel to treat either the row or column (or both) as an absolute position. For our earlier example, if you wanted every copy-and-paste to look at A1, you could use $A$1 instead. Why do you need to type the dollar sign twice in this example? Because it means you can treat the row as a relative reference while the column is absolute, or vice versa. There are times when you might want to do this, which I'm sure you will discover as you use Excel.

There's another way to reference a specific, named cell in an absolute way: you can give any cell a name. The name is just the letter and number of the cell by default; it's displayed in the top left part of the Excel window, and to change it, just click on that name and then type in whatever you want. Then you can reference that name anywhere else in the spreadsheet, and it'll be an absolute reference to that named cell, even if you insert new rows that change the actual letter-and-number name of the cell you're referencing. This is mostly useful if you're using several cells in a bunch of complicated formulas, and it's hard to keep straight in your head which cell is which value when you're writing the formulas.

Sorting

Sometimes you'll want to sort the data in a worksheet. A common use of Excel is to keep track of a bunch of objects, one per row, where each attribute is listed in a separate column. For example, maybe on a large game project you'll have an Excel file that lists all of the enemies in a game, with the name in one column, hit points in another, damage in another, and so on. Maybe you want to sort by name just so you have an easy-to-look-up master list, or maybe you want to sort by hit points to see the largest or smallest values.

This is pretty easy to do. First, select all the cells you want to sort, if you haven't already. Go to the Data menu and choose Sort. Then tell it which column to sort by, and whether to sort ascending or descending. If you've got two entries in that column that are the same, you can give it a second column as a tiebreaker, and a third column as a second tiebreaker if you want (otherwise it'll just preserve the existing order when it sorts). There's also an option for ignoring the header row, so if you have a header with column descriptions at the very top and you don't want that sorted… well, you can just not select it when sorting, but sometimes it's easier to select the whole spreadsheet and click the button to ignore the header row. If you accidentally screw up when sorting, don't panic – just hit Undo.

Sometimes you realize you need to insert a few rows or columns somewhere, in between others. The nice thing about this is that Excel updates all absolute and relative references just the way you'd want it to, so you should never have to change a value or formula or anything just because you inserted a row.
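The relative-versus-absolute behavior can be sketched as a small function. This is an illustration of the rule, not Excel's actual internals, and for brevity it only handles single-letter columns:

```python
import re

# Sketch of the relative/absolute rule: a plain reference like A1 is
# really "an offset from the formula's own cell," while a $ pins the
# column and/or row in place when the formula is copied.
def shift_reference(ref, d_col, d_row):
    col_abs, col, row_abs, row = re.fullmatch(r"(\$?)([A-Z])(\$?)(\d+)", ref).groups()
    if not col_abs:
        col = chr(ord(col) + d_col)      # relative column follows the paste
    if not row_abs:
        row = str(int(row) + d_row)      # relative row follows the paste
    return f"{col_abs}{col}{row_abs}{row}"

# Copying a formula from B1 to D5 shifts it +2 columns and +4 rows:
print(shift_reference("A1", 2, 4))    # C5   (fully relative)
print(shift_reference("$A$1", 2, 4))  # $A$1 (fully absolute)
print(shift_reference("A$1", 2, 4))   # C$1  (row pinned, column follows)
```

The mixed forms ($A1 and A$1) are exactly why the dollar sign appears twice in $A$1: each half of the reference is pinned independently.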

and “insert row” or “insert column” is one of the menu choices. each number in its own cell. then any parameters the function might take in (this varies by function). and let’s say you have 200 game objects. You could create the 200 numbers by formula on a scratch area somewhere. If there are several parameters. then hit Ctrl+D to take the top value and propagate it down to all the others (Fill Down). they are separated by commas (. For something simple like counting up by 1. Now. then Fill Down that formula to create the numbers all the way down to 200. You can also Fill Up or Fill Left if you want. My checkbook. select the three cells. you want a number. and it’ll do it for you. Luckily. That would work. but it’s overkill here. For something more complicated you probably won’t get the result you’re looking for. then hit Ctrl+D and Ctrl+R in any order. If you need to remove a row or column it works similarly. Click on that. Here’s a neat little trick: just create two or three cells and put the numbers 1. say. Using your data Sometimes you’ve got a formula you want copied and pasted into a lot of cells all at once. So in one column. but whenever you start reordering or sorting rows. you want to place the numbers 1 through 200. let’s say it’s cell A2.you inserted a row. If you want to fill down and right. putting the number 1 in the first cell. then a close-parenthesis. You could use a formula. and that number is going to start at 1 and then count upwards from there. then copy. And then you just delete the formulas that you don’t need anymore. And anyway. there’s an easy way to do this: Fill. 2. but those don’t have hotkeys. Functions are always written in capital letters. You can also insert them from the Insert menu. you don’t really want a formula in those cells anyway. like. You can embed functions inside other ones. but you can at least have fun trying and seeing what Excel thinks you’re thinking. Excel is perfectly okay with that. 
Functions

Excel comes with a lot of built-in functions that you can use in your formulas. Functions are always written in capital letters, followed by an open-parenthesis, then any parameters the function might take in (this varies by function), then a close-parenthesis. If there are several parameters, they are separated by commas (,). You can embed functions inside other ones, so one of the parameters of a function might actually be the result of another function; Excel is perfectly okay with that.

Probably the single function I use more than any other is SUM(), which takes any number of parameters and adds them together. So if you wanted to sum all of the cells from A5 to A8, you could say =A5+A6+A7+A8, or you could say =SUM(A5,A6,A7,A8), or you could say =SUM(A5:A8). The last one is the most useful: use a colon between two cells to tell Excel that you want the range of all cells in between those. You can even do this with a rectangular block of cells by giving the top-left and bottom-right corners: =SUM(A5:C8) will add up all twelve cells in that 3×4 block.

The second most useful function for me is IF, which takes in three parameters. The first is a condition that's evaluated to either a true or false value. The second parameter is evaluated and returned if the condition is true; the third parameter is evaluated and returned if the condition is false. For example, you could say =IF(A1>0,1,5), which means that if A1 is greater than zero, this cell's value is 1; otherwise it's 5. The third parameter is optional, by the way; if you leave it out and the condition is false, the cell will just appear blank instead.

One of the common things I use with IF is the function ISBLANK(), which takes a cell and returns true if the cell is blank, or false if it isn't. So you can use this, for example, if you're using one column as a checklist and you want to set another column to a certain value if something hasn't been checked. If you're making a checklist and want to know how many items have (or haven't) been checked off, there's also the function COUNTBLANK(), which takes a range of cells as its one parameter and returns the number of cells that are blank.

FLOOR() and CEILING() will take a number and round it down or up to the nearest whole number value, or you can use ROUND() which will round it normally. Just to be confusing, each of these takes a second parameter, but it works a little differently between them. The second parameter for ROUND() tells you the number of digits after the decimal point to include: if the second parameter is zero (which you normally want), it rounds to the nearest whole number; if it's 1, it rounds to the nearest tenth; if it's 2, the nearest hundredth; if it's 3, the nearest thousandth; and so on. FLOOR() and CEILING() both require a second parameter: the multiple to round to. For most cases you want this to be 1, since you want it rounding to the nearest whole number, but if you want it to round up or down to the nearest 5, or the nearest 0.1, or whatever, then use that as your second parameter instead.

For random mechanics, look back in the week where we talked about pseudorandomness to see my favorite three functions for that: RAND(), which takes no parameters at all and returns a pseudorandom number from 0 to 1 (it might possibly be zero, but never one); RANK(); and VLOOKUP(). As I already mentioned back in Week 6, changing any cell or pressing F9 causes Excel to reroll all randoms, and together these are useful for when you need to take a list and shuffle it randomly.
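To make the behavior of these functions concrete, here are rough Python equivalents. This is a sketch of the semantics described above, not Excel itself, and the helper names are mine:

```python
import math

def excel_sum(*values):                    # like SUM(...)
    return sum(values)

def excel_if(cond, if_true, if_false=""):  # like IF(...); blank if 3rd omitted
    return if_true if cond else if_false

def excel_floor(x, significance):          # like FLOOR(x, multiple)
    return math.floor(x / significance) * significance

def excel_ceiling(x, significance):        # like CEILING(x, multiple)
    return math.ceil(x / significance) * significance

print(excel_sum(5, 6, 7, 8))     # =SUM(A5:A8) if those cells held 5, 6, 7, 8
print(excel_if(3 > 0, 1, 5))     # =IF(A1>0,1,5) with A1 = 3
print(excel_floor(12, 5))        # round down to the nearest multiple of 5
print(excel_ceiling(0.13, 0.1))  # round up to the nearest 0.1
```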
Multiple worksheets

By default, a new Excel file has three worksheet tabs, shown at the bottom left. You can rename these to something more interesting than "Sheet1" by just double-clicking the tab, typing a name, and hitting Enter. You can add new worksheets or delete them by right-clicking on the worksheet tab, or from the Insert menu. You can also reorder them by clicking and dragging, if you want a different sheet to be on the left or in the middle. And if you find yourself going back and forth between two tabs a lot, Ctrl+PgUp and Ctrl+PgDn provide a convenient way to switch between tabs without clicking down there all the time.

The reason to create multiple worksheets is mostly for organizational purposes: if you've got a bunch of different but related systems, it's easier sometimes to put each one in its own worksheet, rather than having to scroll around all over the place to find what you're looking for on a single worksheet.

You can actually reference cells in other worksheets in a formula. The easiest way to do it is to type in the formula until you get to the place where you'd type in the cell name, then use your mouse to click on the other worksheet, then actually click on the cell or cells you want to reference.

One thing to point out here is that it's easy to do this by accident, where you're entering something in a cell and don't realize you haven't finished, and then you click on another cell to see what's there and instead it starts adding that cell to your formula. If that happens, or you otherwise feel like you're lost and not sure how to get out of entering half of a formula, just hit the red X button to the left of the formula bar and it'll undo any typing you did just now.

Making things look pretty

Lastly, even without the graphs, there are a few things you can do to make your worksheets look a little bit nicer. Aside from making things look more professional, it also makes it look more like you know what you're doing.

The most obvious thing you can do is mess with the color scheme. You can change the text color and background color of any cell. First select what you want to change – a single cell, a block of cells, an entire row or column, or even the whole worksheet – then right-click (or go to the Format menu) and select Format Cells. You can also make a cell Bolded or Italicized, left or right justified, all the other things you're used to doing in Word; the buttons are on a toolbar in the upper right (at least they are on my machine; if not, just add the Formatting toolbar and it's all on there). Personally, I find it useful to use background color to differentiate between cells that are just text headings (no color), cells where the user is supposed to change values around to see what effect they have on the rest of the game (yellow), and cells that are computed values or formulas that should not be changed (gray), and then Bolding anything really important. Maybe those conventions aren't for you; play around with the formatting options and you'll see what I mean.

You also have a huge range of possible ways to display numbers and text. The very first tab of the Format Cells dialog lets you say if this is text or a number, and what kind.

Graphing

One last thing that I find really useful is the ability to create a graph – great for looking graphically at your game objects and how they relate to each other, as happens a lot when analyzing metrics. Select two or more rows or columns, then select the Insert menu, then Chart. Select XY(Scatter) and then the subtype that shows curvy lines, go through the wizard to select whatever options you want, then click Finish and you'll have your chart.

Another thing you should know is that by default, the charts Excel makes are… umm… really ugly. Luckily, every element of the graph is clickable and selectable on its own, and generally if you want to change something, just right-click on it and select Format, or else double-click it. From there, you'll have a ton of options at your disposal. Just about everything you can imagine to make the display better, you can do: adding vertical and not just horizontal lines on the graph, adding labels on the X and Y axis, changing the ranges of the axes and labeling them, changing the background and foreground colors of everything… it's all there somewhere. Just be aware that each individual element – the gridlines, the legend, the background, the graphed lines, the axes – are all treated separately; if you graph several data series, each one is just a separate curve. So if you don't see a display option, it probably just means you have the wrong thing selected.

One thing you'll often want to do with graphs is to add a trendline; if you're trying to fit a curve to your data, you'll probably want to add these right away. To do that, right-click on any single data point on the graph, and select Add Trendline. You'll have to tell it whether the trendline should be linear, or polynomial, or exponential, or what. On the Options tab of the trendline wizard, you can also have it display the equation on the chart so you can actually see what the best-fit curve is, and also display the R-squared value (which is just a measure of how close the fitted curve is to the actual data). R-squared of 1 means the curve is a perfect fit; R-squared of 0 means it may as well be random… although in practice, even random data will have an R-squared value of more than zero (sometimes significantly more).
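If you're curious what that R-squared number actually measures, here's a small Python sketch that fits a least-squares line and computes R-squared by hand (the function is my own throwaway, just to illustrate the idea):

```python
# R-squared compares the fitted line's residual error to the total variance
# of the data: 1 - (sum of squared residuals) / (total sum of squares).
def r_squared_linear(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Least-squares slope and intercept for the best-fit line.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

print(r_squared_linear([1, 2, 3, 4], [3, 5, 7, 9]))   # perfectly linear: 1.0
print(r_squared_linear([1, 2, 3, 4], [3, 6, 6, 10]))  # noisy: less than 1
```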

For example, you can display a number as currency (with or without a currency symbol like a dollar sign or something else), or a decimal (to any number of places), that sort of thing. On the Alignment tab are three important features:

• Orientation lets you display the text at an angle, even sideways, which can make your column headings readable if you want the columns themselves to be narrow.

• Word Wrap does exactly what you think it does. Just be warned that Excel will gleefully expand the row height to fit all of the text, so if the column is narrow and you've got a paragraph in there, it'll probably be part of a word per line and the whole mess will be unreadable; you'll want to adjust column width before doing that. (Speaking of which – you can adjust the widths of columns and heights of rows just by clicking and dragging between two rows or columns, or right-clicking and selecting Column Width or Row Height.)

• Then there's a curious little option called Merge Cells. To use it, select multiple cells, then Format Cells, then click the Merge Cells option and click OK. You'll see that all the cells you selected are now a single giant uber-cell. I usually use this for cosmetic reasons: like if you've got a list of game objects and each column is some attribute, and you've got a ton of attributes but you want to group them together… say, you have some offensive attributes and some defensive ones. You could create a second header row over the individual headers, merge the cells over each group, and have a single cell that says (for example) "defensive attributes", so that the text is actually readable.

If you want to draw a square around certain blocks of data in order to group them together visually, another button in the Formatting toolbar lets you select a border. Just select a rectangle of cells, then click on that and select the border that looks like a square, and it'll put edges around it. The only thing I'll warn you is that when you copy and paste cells with borders, those are copied and pasted too under normal conditions, so don't add borders until you're done messing around (or if you have to, just remove the borders, move the cells around, then add them back in… or use Paste Special to only paste formulas and not formatting). Incidentally, Excel actually has a way to do this kind of grouping automatically in certain circumstances, called Pivot Tables, which lets you convert Excel's pure grid form into something else, but that's a pretty advanced thing that I'll leave to you to learn on your own through Google if you reach a point where you need it.

One thing you'll find sometimes is that you have a set of computed cells, and while you need them to be around because you're referencing them, you don't actually need to look at the cells themselves – they're all just intermediate values. One way to take care of this is to stick them in their own scratch worksheet, but an easier way is to stick them in their own row or column and then Hide the row or column. To do that, just right-click on the row or column heading, and there's a Hide option. Select that and the row or column will disappear, but you'll see a thick line between the previous and next columns – a little visual signal to you that something else is still in there that you just can't see. To display it again, select the rows or columns on either side, then right-click and select Unhide.

Another thing that I sometimes find useful, particularly when using a spreadsheet to balance game objects, is conditional formatting. First select one or more cells, then go to the Format menu and select Conditional Formatting. You first give it a condition which is either true or false. If it's true, you can give it a format: using a different font or text color, font effects like bold or italic, adding borders or background colors, that sort of thing. If the condition isn't true, then the formatting isn't changed. When in the conditional formatting dialog, there's an "Add" button at the bottom where you can add up to two other conditions. These are not cumulative: the first condition is always evaluated first (and its formatting is used if the condition is satisfied); if not, then it'll try the second condition, and if not that, it'll try the third condition.

As an example of how I use this: if I'm making a game with a cost curve, I might have a single column that adds up the numeric benefits minus costs of each game object. Since I want the benefits and costs to be equivalent, this should be zero for a balanced object (according to my cost curve) – positive if it's overpowered and negative if it's underpowered. In that column, I might use conditional formatting to turn the background of a cell a bright green color if benefits minus costs is greater than zero, or red if it's less than zero, so I can immediately get a visual status of how many objects are still not balanced right.
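The rule I set up in that dialog amounts to a three-way branch. In plain Python it would look something like this (the object names and numbers are invented for illustration):

```python
# Pick a highlight color from benefits-minus-costs, mirroring the
# conditional formatting rules described above.
def balance_color(benefits_minus_costs):
    if benefits_minus_costs > 0:
        return "green"   # overpowered: benefits exceed costs
    if benefits_minus_costs < 0:
        return "red"     # underpowered: costs exceed benefits
    return None          # balanced: leave the cell's formatting alone

deltas = {"Squire": 0, "Dragon": 3, "Peasant": -2}  # made-up game objects
print({name: balance_color(d) for name, d in deltas.items()})
```

Like Excel's dialog, only the first matching condition applies; the order of the checks is what makes the rules non-cumulative.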

Lastly, in a lot of basic spreadsheets you've got a header row along the top and a header column on the left, and the entire rest of the worksheet is data. Sometimes that data doesn't fit on a single page, but as you scroll down you forget which column is which. Suppose you want the top row or two to stay put, always displayed no matter how far you scroll down, so you can always see the headings. To do that, select the first row below your header row, then go to the Window menu and select Freeze Panes. You'll see a little line appear just about where you'd selected, and you'll see it stay even if you scroll down. If you want instead to keep the left columns in place, select the column just to the right, then Freeze Panes. If you want to keep the leftmost columns and topmost rows in place, select a single cell and Freeze Panes, and everything above or to the left of that cell is locked in place now. To undo this – for example if you selected the wrong row by accident – go to the Window menu and select Unfreeze Panes (and then try again).

About that whole "Fun" thing…

At this point we've covered just about every topic I can think of that relates to game balance, so I want to take some time to reflect on where balance fits in to the larger field of game design. Admittedly, the ultimate goal of game design depends on the specific game, but in what I'd say is the majority of cases, the goal of the game designer is to create a fun experience for the players. How does balance fit in with this? When I was a younger designer, I wanted to believe the two were synonymous: a fun game is a balanced game, and a balanced game is a fun game. I'm not too proud to say that I was very wrong about this, and these counterexamples changed my mind. I encountered two games in particular that were fun in spite of being unbalanced.

The first was a card game that I learned as Landlord (although it has many other names, some more vulgar than others; the best known is probably a variant called The Great Dalmuti). This is a deliberately unbalanced game. Each player sits in a different position, with the positions forming a definite progression from best to worst, and players in different seats do have different levels of power. As I learned it, players in the best position give their worst cards to those in the worst position at the start of the round, and the worst-position players give their best cards to those in the best position, so the odds are strongly in favor of the people at the top. The game never ends; you just keep playing hand after hand until you're tired of it. At the end of each hand, players reorder themselves based on how they did in the round, so the top player takes top seat next round. This is a natural positive feedback loop: the people at the top have so many advantages that they're likely to stay there, while the people at the bottom have so many disadvantages that they're likely to stay there as well. In college my friends and I would sometimes play this for hours at a time, so we were clearly having a good time in spite of the game being obviously unbalanced. What's going on here?

I think there are two reasons. One is that as soon as you learn the rules, it is immediately obvious to you that the game is not fair, and in fact that the unfairness is the whole point. Since this game sets the expectation of unfairness up front, by choosing to play at all you have already decided that you are willing to explore an unbalanced system. It's not that fairness and balance are always undesirable; it's that when players expect a fair and balanced game and then get one that isn't, the game doesn't meet their expectations.

Another reason why this game doesn't fail is that it has a strong roleplaying dynamic, which sounds strange because this isn't an RPG… but at the same time, some aspect of roleplaying happens naturally in most groups. The players at the top are having fun because "it's good to be king." The players at the bottom are also having fun, because there's a thrill of fighting against the odds.

Since the game replicates a system that we recognize in everyday life, where we see the "haves" and "have-nots," being able to play in and explore this system from the magic circle of a game has a strong appeal. Striking a blow for the Little Guy in an unfair system is part of that: every now and then one of the players at the bottom ends up doing really well and suddenly toppling the throne, and that's exciting (or one of the guys on top crashes and falls to the bottom, offering schadenfreude for the rest of the players). For me it's equally exciting to dig my way out from the bottom, slowly and patiently, over many hands, eventually reaching the top (sort of like a metaphor for hard work and retirement, I guess).

The second game I played that convinced me that there's more to life than balance is the (grammatically-incorrectly titled) board game Betrayal at House on the Hill. This game is highly unbalanced. Each time you play you get a random scenario which has a different set of victory conditions, but most of them strongly favor some players over others, and most don't scale very well with number of players (that is, most scenarios are much easier to win or lose if there are 3 players or 6 players, so the game is often decided as a function of how many players there are and which random scenario you get). The game has a strong random element that makes it likely one or more players will have a very strong advantage or disadvantage, and in most scenarios it's even possible to have early player elimination. Not that it has to do with balance, but the first edition of this game also has a ton of printing errors, making it seem like it wasn't playtested nearly enough. (In fact, I understand that it was playtested extensively, but the playtesters were having such a fun time playing that they didn't bother to notice or report the errors they encountered.)

In spite of the imbalances, the randomness and the printing errors, the game itself is pretty fun if you play in the right group. The reason is that no matter what happens, somewhere along the line you'll probably see something that feels highly unlikely, like one player finding a whole bunch of useful items all at once, or a player rolling uncharacteristically well or poorly on dice at a key point, or drawing just the right card you need at just the right time, or a player finding out the hard way what that new mysterious token on the board does. Mostly it's that because of the random nature of the game, in nearly every game some kind of crazy thing happens that's fun to talk about after the fact. The game is very good at creating a story of the experience, and the stories are interesting. Partly this has to do with all of the flavor text in the game, on the cards and in the scenarios… but that just sets the haunted-house environment to put the players in the right frame of mind. And as a general game structure, working together as a team to explore an unfamiliar and dangerous place, then having one of your kind betray the others and shifting to a one-against-many situation, and winning or losing as a coordinated team… that turns out to be unique enough to be interesting. Players are willing to overlook the flaws because the core gameplay is compelling.

Player expectation is another thing that is a huge factor in Betrayal, just as it was in Landlord. In games where players expect a fair contest, balance is very important; but in a game like The Great Dalmuti, which is patently unfair, players expect it to be unbalanced, so they accept it easily. When I introduce people to the game of Betrayal, I always say up front that the game is not remotely balanced, and in the right frame of mind players have a great time anyway. But I've seen some players that didn't know anything about the game and were just told, "oh, this is a fun game," and they couldn't get over the fact that there were so many problems with it. I do think it would be a better game if it were more balanced, but my point is that it is possible for a game design to succeed without it.

This is also why completely unbalanced, overpowered cards in a Trading Card Game (especially if they're rare) are seen as a bad thing: for a head-to-head tabletop card game, players expect the game to provide a fair match, so they want the cards to be balanced. But in a single-player card-battle game using the same mechanics, they can be a lot of fun: in the case of the single-player game, the core of the game is about character growth and progression, so getting more powerful cards as the game progresses is part of the expectation. And incidentally, one of the reasons a lot of players hate the "rubber-banding" negative feedback in racing games, where an AI-controlled car suddenly gets an impossible burst of speed when it's too far behind you, is that it feels unfair because real-life racing doesn't work that way.

So, at the end of the day, I think that what game balance does is that it makes your game fair, and that matters because fairness helps people to enjoy the experience more – which is at least true in most games. In fact, the only games I can think of where you don't want balance are those where the core gameplay is specifically built around playing with the concept of fairness and unfairness. Just like everything in game design, it's all about understanding the design goals: what it is you want the player to experience. But if you want them to experience a fair game, then that is the function of balance.

If You're Working on a Game Now…

You're probably already using Excel in that case, so there's not much I can have you do to exercise those skills that you're not already doing. Instead, first, play the game a bit and use your intuition to analyze the balance. If your game has a free-for-all multiplayer structure, ask yourself if any of the problems mentioned in this post (turtling, sandbagging, kill-the-leader, kingmaking, early elimination) might be present, and then decide what (if anything) to do about them. If your game has an economic system: Can players trade? Are there auctions? What effect would it have on the game if you added or removed these? Are there alternatives to the way your economic system works now that you hadn't considered?

Homework

Since it's the end of the course, there are two ways to approach this. One is to say: no "homework" at all, because hey, the course is over! But that would be lazy design on my part. Instead, let me set you a longer challenge that brings together everything we've talked about here: make a game, and then apply all the lessons of game balance that you can. Spend a month on it, maybe more if it ends up being interesting, and then you'll have something you can add to your game design portfolio. Some suggestions:

• Design an expansion set to an existing TCG. I usually tell students to stay away from projects like this, because TCGs have a huge amount of content, so let's keep it limited here: a small set, maybe 50 to 80 cards. First, use your knowledge of the existing game to derive a cost curve. Put the curve into an Excel spreadsheet and use it to create and balance the individual cards. Create one or two new mechanics that you need to figure out a cost for on the curve, and playtest on your own to figure out how much the new mechanic is actually worth. Adjust your cost curve (and cards) accordingly.

• Or, if you've got a bit more time and want to do a little bit of systems design work as well as balance, design the base set for an original trading-card game. Make the game self-contained, 100 cards or less, and have players build their deck or hand during play instead of collecting; consider games like Dominion or Roma or Ascension, which behave like TCGs but require no collecting. As soon as you're done with the core mechanics, work on the balance: first, make a cost curve for the game, of course, then put that into Excel and use it to create and balance the set. Playtest the game with friends and challenge them to "break" the game by finding exploits and optimal strategies, then analyze what they find and repeat the process.
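Both card-game suggestions lean on a cost curve, and the bookkeeping is easy to prototype in code as well as in a spreadsheet. A toy sketch – the linear curve here (cost = power + toughness − 1) and the cards are entirely made up for illustration; a real project would derive its curve from the actual game:

```python
# Compare each card's printed cost to what the (invented) cost curve says
# it should cost. Positive delta = overpowered / too cheap.
def curve_cost(power, toughness):
    return power + toughness - 1   # hypothetical linear cost curve

def balance_delta(card):
    return curve_cost(card["power"], card["toughness"]) - card["cost"]

cards = [
    {"name": "Squire", "power": 1, "toughness": 2, "cost": 2},
    {"name": "Ogre",   "power": 4, "toughness": 4, "cost": 5},
]
for card in cards:
    print(card["name"], balance_delta(card))  # Squire 0 (balanced), Ogre +2
```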

• Or, find a turn-based or real-time strategy game on computer that includes some kind of mod tools. First, analyze it: are certain units or strategies or objects too good or too weak? Look around for online message boards to see if other players feel the same, or if you were just using different strategies. Once you've identified one or more imbalances, analyze the game mathematically using every tool at your disposal, to figure out exactly what numbers need to change, and by how much, and propose rules changes to fix it. Mod the game, and playtest to see if the problem is fixed.

• For a more intense project, use the mod tools to wipe out an entire part of the gameplay, and start over designing a new one from scratch. For example, maybe you can design a brand-new set of technology upgrades for Civilization, or a new set of units for Starcraft. Then work on the balance, and playtest and iterate on your design.

• If you like RTS games but prefer something on tabletop, instead design a miniatures game. First, look at the mechanics of some existing miniatures games, which tend to be fairly complicated. Where can you simplify the combat mechanics, just to keep your workload manageable? Try to reduce the game down to a simple set of movement and attack mechanics with perhaps a small handful of special abilities. Then, create a cost curve for all attributes and abilities of the playing pieces, and use Excel to create and balance a set of unit types. As a challenge to yourself and to keep the scope of this under control, keep this small: maybe 5 to 10 different units. You'll find it difficult enough to balance them even with just that few. Consider adding some intransitive relationships between the units, to make sure that no single strategy is strictly better than another. Most miniatures games are expensive (you have to buy and paint a lot of miniatures, after all), so challenge yourself to keep it cheap: print out a set of cheap components, using piles of cardboard squares that you assemble and cut out on your own. With such cheap components, you could even add economic and production elements of the RTS genre if you'd like. If you end up really liking the game, you can make another set of units of the same size for a new "faction" and try to balance the second set with the first set. (For a smaller challenge you can, of course, just create an expansion set to an existing miniatures game that you already play, and use the existing art if you want.)
• Or, if you prefer tabletop RPGs, design your own original combat system, either for an existing RPG as a replacement, or for an original RPG set in an original game world. First, analyze the combat (or conflict resolution) system of your favorite game to find imbalances. Then work on balancing your new system, and playtest the system with your regular tabletop RPG group if you have one (if you don't have one, you might consider selecting a different project instead). As with other projects, set a page limit: a maximum of ten pages of rules descriptions, and it should all fit on a one-page summary.

References

I found the following two blog posts useful to reference when writing about auctions and multiplayer mechanics, respectively:

http://jergames.blogspot.com/2006/10/learn-to-love-board-games-again100.html#auctions

http://pulsiphergamedesign.blogspot.com/2007/11/design-problems-to-watch-for-in-multi.html
