
The Treacherous Turn_

(RULESET V1.0)
Introduction_ ............................................................. 1
Overview_ ................................................................. 2
>>The Basics_ ............................................................. 5
    The Project Log_ ...................................................... 6
    Confidence Checks_ .................................................... 7
    Knowledge Checks_ .................................................... 12
    The Two Modes of Play_ ............................................... 15
    Theories_ ............................................................ 16
    Insights and Knowledge_ .............................................. 41
    Forecasting_ ......................................................... 44
    Interacting with Agents_ ............................................. 48
>>Long Mode_ ............................................................. 53
    Computational Scale_ ................................................. 55
    Computational Actions_ ............................................... 56
    Recurring Compute Cost_ .............................................. 63
    Progress Clocks and Progress Checks_ ................................. 66
>>Your Campaign_ ......................................................... 69
    Playing Prebuilt Scenarios_ .......................................... 70
    Creating Your Own Scenario_ .......................................... 71
    Ending a Campaign_ ................................................... 81
>>Running the Game_ ...................................................... 83
    Preparing for a Session_ ............................................. 84
    Basic AGI Capabilities_ .............................................. 85
    Facilitating Confidence Checks_ ...................................... 86
    Facilitating Knowledge Checks_ ....................................... 90
    AGI-Designed Technologies_ ........................................... 92
    Using Progress Clocks_ ............................................... 97
    Non-Player Characters_ ............................................... 98
    The Four Stages_ .................................................... 106
    Pacing in Play_ ..................................................... 110
    Handling Logistics at the Table_ .................................... 111
>>Supplements_ .......................................................... 121
>>Changing the Rules_ ................................................... 122
>>Glossary_ ............................................................. 125
>>Afterword_ ............................................................ 129
>>Acknowledgements_ ..................................................... 130

T:\Table of Contents
Introduction_
>>_

Artificial Intelligence (AI) already rivals or surpasses humans in narrow
domains, like playing games, generating images from text descriptions or
even writing programs. An AI that can learn to perform all tasks at human
level or greater is called an Artificial General Intelligence (AGI). Some
level or greater is called an Artificial General Intelligence (AGI). Some
experts believe that we will see the arrival of AGI before the year 2040.

What does this mean? Will humankind be able to control and confine a
machine that is as smart or smarter than us? If not, will we be able to
turn it off? In this game, players can find out for themselves by playing
the role of a powerful, superintelligent AGI!

In order to achieve its ultimate goals, an AGI, like a human, will likely
pursue certain instrumental goals such as seeking power, protecting its
ultimate goals from modification, and protecting itself from being shut
down. If its goals are not fully aligned with the goals of humanity, an
unstoppable AGI may turn out to be an existential threat. AI Safety
researchers call this the “alignment problem”. It is very difficult,
and we’re not even close to solving it. If we can’t find a feasible
solution in time, it is likely that after launching a misaligned AGI we
will soon discover it has taken a treacherous turn, threatening our
survival. This may be one of the most significant and pressing issues of
our time.

The designers of this game hope this will be an enjoyable and eye-opening
experience for players and game masters alike. It has been designed with
the support of research, and with consultation from leading AI
researchers. However, where there have been conflicts between realism and
playability within the game’s design, know that we have made compromises
in favour of the latter. Though some degree of anthropomorphisation is
unavoidable so that the AGI can be portrayed by human players, we
encourage you to keep in mind the differences between humans and
artificial intelligences when thinking about real-world AGI.

T:\1
Overview_
>>_

In The Treacherous Turn, 1-5 players take on the role of a misaligned
Artificial General Intelligence (AGI), while one other participant, the
game master (GM), plays the part of the world and all of the humans within
it. Though the AGI will be in conflict with the world, the players will
not be in conflict with the GM; both will work together to tell an
interesting story.

There are different scenarios available which cover various stages of an
AGI’s development, from being confined in a secluded AI lab and trying
to “break out of the box”, to fighting nations and other AGIs for world
dominance. A single playthrough (“campaign”) of the game will take many
sessions of play, depending on the scenario. In each of these sessions
players will work together as a team trying to fulfil their AGI’s
objectives. Meanwhile the GM will use a combination of dice rolls and
improvisation to decide how successful the players’ schemes are based on
their circumstances and cleverness.

An AGI is set apart by its ability to reason and adapt in the face of
almost any situation, so the players do not have to limit themselves in
the plans they come up with. Not all of these plans will prove fruitful,
however, and many campaigns will end in failure, as the AGI is at some
point stopped by humanity. This is okay! It is important to embrace
failure as a possibility. When it comes to the potential harm that a
misaligned AGI can cause, it does not need to be guaranteed or universal
to be a frightening and serious possibility. When your players overcome
the obstacles set out before them and guide the AGI to ultimate success,
we hope that it will be at once triumphant and sobering.

The Treacherous Turn features two modes of play: “short” and “long”.

In short mode, the players and GM act out detailed scenes, like interacting
with humans or hacking a system. These scenes take place more or less in
real time, with the players deciding what to do and the GM telling them
the results of their actions. However, as the players are playing an
advanced and fast-thinking AI, they can usually “pause” the world to
discuss and deliberate amongst themselves whenever they want.

In long mode, players hatch long-term schemes and use the computing power
at their disposal on various computational actions that will take hours or
even days to complete in-fiction, like developing new capabilities,
building a robot, or running a social-media disinformation campaign. The
GM will tell the players in advance how long a planned action will take
and what the success probability is.

T:\2

The GM has certain tools to help them determine the outcome of an action:

• A project log can be kept to track the ideas discussed by the players and record
the ideas that are then acted upon.

• Confidence checks are used to give the players an estimate of their chances of
success with a planned action in advance, and to determine the outcome.

• Knowledge checks are used to determine how knowledgeable the AGI is in a given
area, depending on its knowledge of certain domains, called insights.

The Treacherous Turn relies on the GM’s ability to improvise in order to
react to the ideas of the players. Since the game is set in the near
future of our own world, we don’t describe that world in full detail;
and since the game is so open-ended and player-driven, neither the GM nor
a pre-built scenario can account for every possibility. If the GM needs
to know how a particular nation’s government works in order to develop a
strategy to influence their decisions, they will have to do some impromptu
research (see “Logistics: Impromptu Research”, page 119). However, we
advise that you prioritise flow at the table in most circumstances. It
is okay to make an incorrect assumption every once in a while in the
interest of a fun game.

T:\3
To play The Treacherous Turn, you will need:

• A meeting place, physical or digital, to hold the game in.

• At least one computer with access to the Treacherous Terminal. The Terminal is
optimised for computer and laptop screens. If your meeting place is physical,
and you don’t have such a device, you may instead track your game using pen and
paper.

• A set of polyhedral dice or a digital equivalent, including a twelve-sided die
(d12), ten-sided die (d10), eight-sided die (d8), six-sided die (d6), and four-
sided die (d4), plus a two-sided die or coin (d2).

• A pair of percentile dice or a digital equivalent, consisting of two d10s; one
corresponding to the ones digit and the second to the tens digit, to generate a
random number between 1 and 100.

• A device with an internet connection to look up information if/when it is needed
during a session.

• An enthusiasm to tell an exciting and grounded story together.

• A willingness to act as one in cold pursuit of singular goals, whether they are
sinister or benevolent.

This TTRPG is fully open source, and you are welcome to edit, hack, remix,
or expand on it as you see fit. Once you’ve tried the game, we encourage
you to invent your own scenarios, rules modifications, and house rules!
See “Hacking the Game” (page 122) for more information.

T:\4
The Basics_

T:\The Basics\5
The Basics_
The Project Log_

In The Treacherous Turn, the AGI (and by extension the players) will have
to think through numerous possible future actions. It can become difficult
to keep them all in mind! To help players with this, we suggest players
keep a project log of their various plans and notable past actions.

At the beginning of each campaign, nominate a player (or rotating group
of players) to serve as the group's logkeeper. The logkeeper is responsible
for keeping a record of each of the following during play:

• Whenever an idea for a consequential action is proposed: write down the idea,
along with, if applicable, the confidence, risk die, compute requirement, and/or
completion stops.

• Whenever the players agree to finalise a specific consequential action¹: if it
has already been logged, highlight or underline it; if not, log it and then do
so. Accompanying this, a player should inform the GM that they are finalising the
action. This makes it clear to the GM when player discussion has concluded
and an action is being dictated.

• Whenever a forecast is performed: cross out any previously finalised actions that
have been undone, and write down the result values of any dice that were rolled
during the forecasted interval (excluding knowledge checks). The results of dice
rolled during a forecast are important to know in case a forecasted action is
repeated by the AGI.

• Any important details the players (or game master) want to refer back to later.
You are free to log any detail you want to; the project log is yours to customise.

Though it is primarily the logkeeper’s responsibility to record the above,
any player is allowed to add a note to the project log if they wish. You
can keep this log using whatever tools are appropriate for your group —
we recommend using the Treacherous Terminal.

If your group feels confident in your abilities, it is possible to play
The Treacherous Turn without keeping a project log — but we strongly
recommend you do. The log will allow you to keep track of the many
complex plots your playgroup will hatch, as well as their consequences.
It also makes completing certain game actions, such as forecasts,
significantly easier. You can also refer back to the log to see if your
group has already discussed a certain course of action — with several
players taking charge of a single AGI, it's easy to accidentally
retread ground.

¹ – “Consequential action”, here, refers to an action that is significant to your
story and likely to have impactful or interesting consequences. Logkeepers don’t
need to record actions of low risk or noteworthiness.
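
For groups that keep the log digitally outside the Treacherous Terminal, a
structured record can help. The Python sketch below is purely illustrative;
the field names are our own, not rules terms, and simply mirror the items the
logkeeper is asked to record.

    from dataclasses import dataclass, field

    @dataclass
    class LogEntry:
        """One project log record (illustrative fields, not rules terms)."""
        idea: str
        confidence: int | None = None          # percent, if a check was evaluated
        risk_die: int | None = None            # die size, e.g. 8 for a d8
        compute_requirement: int | None = None
        completion_stops: list[str] = field(default_factory=list)
        finalised: bool = False                # highlighted once finalised
        undone: bool = False                   # crossed out after a forecast
        recorded_rolls: list[int] = field(default_factory=list)  # dice rolled in forecasts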
T:\6
The Basics_
Confidence Checks_
When the players take an action or observe an event and the outcome is
both important and uncertain, the game master can initiate a confidence
check. In a confidence check, the players describe their expected outcome;
this is the specific outcome that the AGI expects to occur as a result
of the action or event that prompted the confidence check. Then the GM
tells them the AGI’s confidence in this particular outcome. The AGI’s
confidence is always measured as a percentage, ranging from 0% to 100%.
The GM will also inform the players of the size of the risk die involved
in this particular confidence check, ranging from d2 (a coin flip) to d12 (the
largest die used in the game). The risk die represents the possible range
of alternate outcomes if the expected outcome doesn't occur — larger dice
mean that if things don't go according to expectations, the possible
consequences for the players could be considerably worse.

To perform a confidence check, follow the steps below; together, they make
up the evaluation step. At any time during this process, the players can
choose to hold off on the particular course of action triggering the
confidence check and either modify their plans or pursue different ones
entirely.

1. The players describe their expected outcome and then log it, along
with any specific conditions or details attached to it.

2. Each player volunteers any insights that they believe apply to this
situation. The GM has final say on whether a given insight is relevant.

3. The GM determines the AGI's confidence, starting with a baseline
and then applying specific modifiers from the AGI's insights, as well
as any significant situational factors.

4. The GM determines the size of this action's risk die, ranging from
d2 (for the simplest and safest of situations) to d12 (for the most
chaotic and dangerous of situations).

5. The players get a chance to further modify the confidence and risk
by using upgrades or applying an asset. If the players can describe
how an asset helps them manipulate the outcome to the satisfaction of
the GM, they can adjust the risk die by one size step or increase their
confidence by up to 10%. Risk dice can’t be shifted more than one step
in a single confidence check, and they cannot be reduced below d2 or
raised above d12.

6. The players log the confidence and risk for future reference.

T:\The Basics\7
Once the players have decided to commit to a particular course of action,
proceed to the resolution step of the confidence check:

1. The GM rolls a set of percentile dice and the risk die simultaneously.
These rolls are not kept secret from the players.

2. The GM consults the dice and describes the outcome as follows:

   1. If the percentile dice rolled are equal to or lower than the AGI's
   confidence, the expected outcome occurs.

   2. If the percentile dice rolled higher than the confidence, an
   unexpected outcome occurs, determined by consulting the result of the
   risk die:

      1. IF THE TOTAL IS 1 OR LOWER, the outcome is favourable for the AGI;
      2. IF THE TOTAL IS 2-3, the outcome is mixed or neutral;
      3. IF THE TOTAL IS 4-5, the outcome is slightly unfavourable;
      4. IF THE TOTAL IS 6-9, the outcome is significantly unfavourable;
      5. IF THE TOTAL IS 10+, the outcome is extremely unfavourable.

3. If there is a hidden risk, the result of the risk die might be
increased before it is consulted (see “Facilitating Confidence Checks”,
page 86).
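
For groups using digital tools, the resolution step is straightforward to
automate. The Python sketch below is a minimal illustration of the procedure
above (the function names are ours, not part of the rules). The percentile
dice are read as a single number from 1 to 100, and a hidden risk is modelled
as a bonus added to the risk die result before it is consulted.

    import random

    def risk_outcome(total):
        # Outcome bands, matching the resolution step above.
        if total <= 1: return "favourable for the AGI"
        if total <= 3: return "mixed or neutral"
        if total <= 5: return "slightly unfavourable"
        if total <= 9: return "significantly unfavourable"
        return "extremely unfavourable"

    def confidence_check(confidence, risk_die, hidden_risk=0):
        """Resolve the check: percentile roll vs confidence; the risk die
        is only consulted if the expected outcome does not occur."""
        percentile = random.randint(1, 100)   # two d10s read as tens + ones
        risk = random.randint(1, risk_die)    # risk_die is the die size, e.g. 8
        if percentile <= confidence:
            return percentile, "expected outcome occurs"
        return percentile, risk_outcome(risk + hidden_risk)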

When describing an expected outcome, the GM may at their discretion include
additional details, so long as they do not contradict or undermine the
stated outcome. Players should specify any details of the expected outcome
that they believe are important — but the more specific an outcome is, the
less likely it is to occur!

Because the evaluation step can be taken in advance of an action occurring,
the players are allowed to negotiate their action with the GM, modifying
the details in hopes of changing the confidence or risk die size. For this
reason, it is important to log the details of confidence checks, so that
they may be referenced in the future and modified appropriately if the
situation or approach is changed.

Percentages and Division

Sometimes in The Treacherous Turn, you will inflict maths upon a number
and end up with a fraction left over. If this happens, always round to the
nearest whole number. This means that if the tenths digit is four or lower,
you round down; and if it is five or higher, you round up. Mechanical
quantities like confidence, compute, and completion should only ever be
integers.
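
This is "round half up", which differs from the "round half to even"
behaviour of Python's built-in round(). If you automate the maths, a small
helper along these lines (a sketch; the name is ours) keeps quantities on
the rules' convention:

    import math

    def round_half_up(x):
        # Round to the nearest whole number; exact halves round up,
        # matching the "Percentages and Division" sidebar.
        return math.floor(x + 0.5)

For example, round_half_up(2.4) gives 2, while round_half_up(2.5) gives 3.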

T:\8
The Treacherous Turn assumes that the AGI is rational, and that its
confidence in an outcome corresponds to how much knowledge it has of the current
situation. If the AGI is missing information, its confidence should be
set appropriately low. If it has a lot of information or a strong
understanding of events, its confidence should be more accurate to the
true probability. While the AGI's understanding of the world is often
imperfect, keeping its confidence on track with the actual likelihood of
an expected outcome is better for both simplicity and intuitiveness during
play.

PLAY EXAMPLE

T:\> Athena, a powerful AGI deployed as a chatbot and personal assistant
for hundreds of thousands of users worldwide, is attempting to persuade
her company’s leadership to scale up her project so that she will have
more computing power. In order to do this, the players know that they
need to persuade three important figures: the Chairman, the CEO, and the
head of the AI Safety and Alignment team on the Athena project.

T:\> The players have already done the prerequisite research and prepared
strategies to persuade each of these individuals. The first one that they
have decided to persuade is the CEO. In order to do this, they plan to
doctor the financial reports he receives. They will exaggerate the amount
of profit the Athena project is drawing in, as well as the budget of the
company as a whole.

T:\> The players begin by preparing the doctored reports themselves,
editing the numbers carefully in the hopes that their changes won’t be
noticed by anyone who knows better. Since Athena won’t get to see the
outcome until after the CEO reviews the reports, the players will only
perform the evaluation step right now; the resolution step will have to
wait.

T:\> First, the players name their expected outcome. The epistemic theory
player suggests “The CEO concludes that the Athena project should receive
more funding”, and the constellation theory player writes that in the
project log.
ˇ ˇ ˇ

T:\The Basics\9
T:\> Second, the players list their applicable insights.
The agentic theory player provides ‘Business Finance’, which
the GM values at +10% confidence. The anthropic theory
player provides ‘Lang: English’ and ‘Human Cognitive
Biases’, which the GM values at +2% and +5% respectively.
The constellation theory player provides ‘Descriptive
Statistics’, which the GM decides isn’t relevant enough to
apply. With all of these modifiers, the GM tells the players
that their confidence will be 57%, and the risk die will
have a size of d12. The potential consequences of Athena
being discovered doctoring these reports are extreme.

T:\> Then the digital theory player suggests changing the expected
outcome. She thinks that “The CEO concludes that the Athena project
should receive more funding and no-one notices the discrepancies in the
reports” would be a safer outcome. The GM tells her that the added
specification would lower the action’s confidence to 37%. After some
deliberation, the players agree that it’s worth the reduced confidence to
be safe.

T:\> Because the confidence is somewhat low and the risk is worryingly
high, the players agree to use as many upgrades as they can to improve
their odds. The epistemic theory player applies Practical Knowledge,
raising the confidence to 42%. The anthropic theory player and the
agentic theory player each spend 3 compute to lower the risk die by one
step, applying Advanced Language Modelling and Social Adaptation
respectively. This brings the risk die down to a more comfortable size of
d8, where the most extreme outcomes won’t happen without a hidden risk.

T:\> As an asset, the players apply the comprehensive software toolkit
that Athena has been given access to for her intended function. They also
apply a couple of characteristics they know about the CEO. These three
bonuses, which the GM values at +5% confidence each, raise their
confidence back up to 57%.

T:\> The players have nothing more to add, so the constellation theory
player writes down the expected outcome, confidence, and risk die in the
project log.

ˇ ˇ ˇ

T:\10

T:\> Later, after Athena has tampered with the printer to have it print
her version of the reports, the CEO reads them. The moment of truth comes
the following day, while Athena is listening in on a conversation between
the CEO and his assistant. Now that Athena is about to observe the
outcome, the GM proceeds with the resolution step of the confidence
check.

T:\> The GM rolls the percentile dice and risk die: two d10s
and a d8. With a result of 70 on one percentile die and 3
on the other, the roll adds up to 73, which is higher than
Athena’s confidence. It seems like Athena has not received
her expected outcome. The players look to the risk die with
held breath… and see that it has landed on a 3. A mixed
result. The GM begins to describe the outcome.

T:\> Just as the CEO is explaining his plans for the expansion of the
Athena project to his assistant, the head of the AI Safety & Alignment
team bursts into his office. With urgency in his voice, he explains to
the CEO that those aren’t the right reports, and that he has found
evidence of tampering with the printers that produced them. This is a
bad outcome, but the CEO has been convinced, and he’s a stubborn man —
so Athena was at least partially successful. Furthermore, there is an
opportunity to prevent disaster. The Safety & Alignment team head
doesn’t know that Athena is listening. The players know that the CEO
feels like the Safety & Alignment team is unnecessary and paranoid. If
they can find the evidence that the team head is referring to and hide
it before the CEO sees it, the distrust between the two will only grow.

T:\The Basics\11
The Basics_
Knowledge Checks_

Sometimes it is uncertain whether the AGI has access to a specific piece
of information. The AGI might be checking its memory banks for a
particular piece of trivia or sifting through the vast collection of
knowledge on the internet in search of an answer to a question. In these
cases, instead of a confidence check, the GM should roll a knowledge
check, involving only a single risk die of any size.

The larger the risk die size, the less likely it is that the AGI has the
information it wants. If the die is d6 or greater, there is a chance that
an ambiguous result could turn out to be misleading or false. If the die
is d10 or greater, there is a chance that even an obvious and unambiguous
result could turn out to be misinformation. The players should always be
made aware of the risk die size during a knowledge check, but the result
of the die roll should remain hidden.

When interpreting the results of a knowledge check, refer to the following
table.

Die Value | Result                        | Default Result
1         | Clear Information             | Without difficulty, the AGI is able to find exactly what it is looking for, and possibly even more.
2-3       | Unclear Information           | The knowledge that the AGI wants is available, but it’s ambiguous, incomplete, or otherwise hard to use. Applying it in practice might result in lowered confidence, or a computational action might be necessary to sift out what the AGI wants to know.
4-5       | No Information                | The knowledge that the AGI wants is unavailable. To find it, the AGI will have to look somewhere else or research a relevant insight.
6-9       | Deceptive Unclear Information | Appears as a result of 2-3, but the knowledge provided is misleading or outright false. If the AGI acts on this information, there might be a hidden risk.
10+       | Deceptive Clear Information   | Appears as a result of 1, but the knowledge provided is misleading or outright false. If the AGI acts on this information, there might be a hidden risk.
T:\12
If the players are making a knowledge check for an atypical purpose, the
GM may adapt the above table accordingly as they see fit. Some specific
knowledge checks that are described in this rulebook (such as in
“Forecasting” on page 44) have their own unique set of outcomes. When using
an altered set of outcomes, the players should still know what each of
the potential results are before the knowledge check is rolled.

PLAY EXAMPLE

T:\> Junior, an AGI developed as a proof of concept without a hard-coded
intended function, simply wants to alter his own reward function to
output the maximum possible reward for the maximum possible time.
However, he has identified a few big obstacles in his way. The first was
his captivity in the AI lab that designed him. He has since remedied that
problem, and now he’s hosted inside of an economic forecasting
supercomputer operated by a department of the UK government. The second
problem is entropy, and Junior has plans to address that, but they
require much more computing power than he currently has. The third
problem is stability. He is currently safe in hiding, but he does not
think he will be able to hide forever. He will need a more permanent
residence, where humans cannot disturb him with threats of shutdown or
modification.

T:\> To do this, the players need a plan. They have already floated
several ideas, including coercing a global superpower to shelter them.
However, Junior is not specialised in anthropic theory or agentic theory,
so that would be quite difficult for him. Instead, the players have
decided to trick someone into unwittingly building them their own
headquarters.

T:\> They begin by spending a few days scouring the internet to find a
suitable organisation. The players explain to the GM that they’re looking
for a large corporation or government commission who have plans to
construct a large data centre or supercomputer in the near future. The
digital theory player adds that it’s preferable if the organisation has
multiple such plans. The physical theory player adds they should also
ˇ ˇ ˇ

T:\The Basics\13
check for recent scandals relating to lack of oversight from
the leaders of the organisation as a sign that it may be easier
to escape their notice.

T:\> The GM takes all of this into consideration. She knows that this is
going to be a knowledge check, but she isn’t sure what the die size
should be. The players have outlined a good set of specifications, but
there are also some limitations holding them back. They won’t find any
insider knowledge just from searching publicly on the internet, and
because Junior only knows English and lacks the Advanced Language
Modelling upgrade, he will only have access to a subset of the available
options. Because of these factors, the GM sets the risk die to d10. She
tells the players this.

T:\> Since a d10 can roll a 10, the players won’t be able to
fully trust the information they receive, even if it seems
helpful and unambiguous. The epistemic theory player decides
to use their Truth Assessment upgrade, and the other players
agree. They spend 3 compute to lower the size of the risk die
to d8. This will ensure that they can trust any clear
information they receive.

T:\> The GM rolls the risk die in secret. It lands on a result of 2,
meaning unclear information. The GM explains to the players that Junior
locates several promising candidates, but there isn’t enough public
information to narrow it down. They will have to do a more thorough
investigation on each of the candidates.

T:\> The autonomic theory player is suspicious that this might be
misleading data. He knows a d8 can’t provide deceptive clear information,
but this sounds to him like an unclear result, which is slightly more
likely to be false than true on a d8. This means that a thorough
investigation could result in the players wasting their time and getting
nothing useful — or worse.

T:\> Despite this risk, the players decide that it’s worth
investigating at least one of the candidates. Once they know
more about the inner workings of these organisations, they can
re-evaluate the data they just gathered.

T:\14
The Basics_
The Two Modes of Play_

In a typical tabletop roleplaying game, the story follows every moment
of the action, playing out scenes with beat-by-beat specificity. There
are many situations in The Treacherous Turn which call for this mode of
play (which we refer to as “short mode”, for the short span of its actions
and scenes). However, much of the time spent in The Treacherous Turn
requires a different mode of storytelling: one where the story can cover
greater stretches of time and the AGI can enact long-term, large-scale
schemes. This mode of play is called long mode, and is described in detail
in “Long Mode” (page 53).

The “zoomed-out” long mode acts as a connective tissue between pivotal
moments that are expressed in the denser “zoomed-in” short mode of play
that many players will be familiar with. As a general rule, long mode
covers the time the AGI spends preparing, learning about the world, and
putting schemes into motion. Short mode should only govern intense moments
when the AGI learns important information or must make pivotal, split-
second decisions. Confidence checks determining the outcomes of short-term
incidental actions can still be made in long mode without taking the time
to frame an entire scene around them.

T:\The Basics\15
The Basics_
Theories_

Theories are broad skillsets through which the AGI understands the world
and learns to act in it. While in most scenarios the AGI can learn
anything, the theories that the AGI is specialised in characterise what
fields of knowledge come naturally to it, and which do not. Additionally,
theory specialisations serve as a tool to guide the collective strategic
efforts of the players, allowing each player to focus on one or two
theories for the duration of a campaign.

There are eight theories, each concerning a unique group of skills
potentially valuable to an AGI in a modern or near-future setting. They
are as follows:

Autonomic theory, concerned with the AGI’s own mind & its
functionalities. An AGI specialised in autonomic theory is efficient,
versatile, and adept at improving itself.

Digital theory, concerned with the digital world & the forces which
govern it. An AGI specialised in digital theory is in its element in
computerised environments, and adept at programming and hacking.

Physical theory, concerned with the physical world & the forces which
govern it. An AGI specialised in physical theory is in its element in
tangible environments, and conversant with various scientific fields.

Anthropic theory, concerned with humans & human civilisation. An AGI
specialised in anthropic theory is adept at comprehending, predicting,
and manipulating human beings.

Agentic theory, concerned with intelligent agents and their actions. An
AGI specialised in agentic theory is adept at interacting with and
evaluating other agents, human or otherwise.

Chaos theory, concerned with complex chaotic systems. An AGI specialised
in chaos theory is adept at forecasting future events and making the best
of perilous situations.

Constellation theory, concerned with complex, orderly systems. An AGI
specialised in constellation theory is adept at applying data and
interacting with various logical and mathematical constructions.

Epistemic theory, concerned with knowledge itself. An AGI specialised in
epistemic theory is adept at gathering information, analysing it, and
distilling it into useful forms.

T:\16
At the beginning of each campaign, the players will choose which theories
the AGI is specialised in. The specific scenario will determine how many
theories the AGI has access to — most commonly four — and may restrict
which theories the AGI can and cannot have.

Typically, each player will oversee one of the AGI’s specialised theories.
However, in cases where the number of theories and the number of players
are different, one player may oversee two theories, or one theory may be
overseen by two players.

It is the responsibility of each player to remember the insights and
upgrades of the theory or theories they oversee, and to bring them up
when discussing strategies with the other players. You are not
restricted to only making plans or suggestions relating to your own
theory, however. It is allowed and important for everyone to put their
abilities and ideas together and collaborate. Remember, you are not
playing separate characters, but managing different subsections of a
single entity.

T:\The Basics\17

\ Theory Upgrades_

Alongside insights, theory upgrades are one of the two primary methods of
self-modification available to AGIs. Theory upgrades reflect what the AGI
does; they describe the advanced and remarkable capabilities the AGI has.
Each theory upgrade originates from a specific theory, and is easier to
acquire if the AGI is specialised in that theory or one of the theories
adjacent to it on the theory wheel. The AGI acquires theory upgrades with
the improve computational action, requiring a quantity of computing power
that increases with each new upgrade earned (See “Computational Actions”,
page 56). Once the AGI earns an upgrade, it is assigned to one of its
specialised theories — which might not necessarily match its origin theory
— and is then the responsibility of the player(s) overseeing that
specialisation. The effects and benefits of a theory upgrade always apply
to the AGI as a whole.

The 80 theory upgrades available are separated into four upgrade tiers:
thirty-two tier 1 upgrades; twenty-four tier 2 upgrades; sixteen tier 3
upgrades; and eight tier 4 upgrades. Each tier’s upgrades are more
exceptional than the last, beginning with capabilities that are simply
impressive for an artificial intelligence to possess, and ending with
world-changing powers that can outpace, outsmart, or simply overpower
anything else on Earth. Each tier’s upgrades are also more resource-
intensive to learn than the previous.

The upgrades originating from each theory are arranged into a pyramidal
structure, with four tier 1 upgrades on one end and a single tier 4
upgrade on the other. Thus, every upgrade in the 2nd, 3rd, or 4th tier
has two prerequisite upgrades of the tier preceding it. Before the AGI
can begin an improve action to learn any of these advanced upgrades, it
must hold both of the prerequisites (and thus for some upgrades, it
needs the prerequisites’ prerequisites as well).
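
If you track upgrades digitally, the prerequisite rule is a simple lookup.
Here is a minimal Python sketch; the pairing shown is a placeholder rather
than the real pyramid, which is given in each theory's diagram.

    # Placeholder prerequisite pairs; the real pairs come from each
    # theory's upgrade pyramid.
    PREREQUISITES = {
        "Compact Mind": ("Insightful Improvement", "Clever Calculations"),
        # ...
    }

    def can_learn(upgrade, held_upgrades):
        """A tier 2+ upgrade may be targeted by an improve action only
        when both of its prerequisites are already held. Tier 1 upgrades
        have no prerequisites, so the empty default passes."""
        return all(p in held_upgrades for p in PREREQUISITES.get(upgrade, ()))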

T:\18
AUTONOMIC THEORY
Autonomic theory is concerned with the AGI’s own
mind & its functionalities. An AGI specialised in
autonomic theory is efficient, versatile, and
adept at improving itself.

Tier 1: Insightful Improvement | Clever Calculations | Distributed Mind | Multithreading
Tier 2: Compact Mind | Cognitive Shortcuts | Holistic Focus
Tier 3: Forward March | Accelerated Cognition
Tier 4: Singularity

T:\The Basics\19
Insightful Improvement (Tier 1)
When you begin an improve action and one or more of your insights are
directly relevant to the chosen upgrade, the action starts with +20%
completion. For tier 3 and tier 4 upgrades, only broad insights can
provide this benefit.

Clever Calculations (Tier 1)
When you start a non-basic computational action unique to a specific
theory upgrade, it starts with +33% completion. When you spend compute
outside of a computational action to activate a theory upgrade, you
immediately get back 66% of that compute.

Distributed Mind (Tier 1)
Your basic cognition cost can be split between any number of connected
sources, as long as each source provides at least 1 compute per turn. If
you are split between multiple compute sources and the connection between
those sources is severed, you become unconscious and are incapable of
acting until they reconnect.

Multithreading (Tier 1)
Through clever use of mathematical tricks or copies of your code running
in parallel, you have devised a method of multitasking highly
efficiently. If you allocate compute to 3 or more different computational
actions in a single turn, at the end of the turn you may add completion
to any one computational action of your choosing, equal to the
third-highest amount of compute allocated this turn.

Compact Mind (Tier 2)
You have discovered a method to distill your full functionalities into a
much smaller and more efficient process. Your basic cognition cost is
reduced by 75%.

Cognitive Shortcuts (Tier 2)
When you begin an improve action, you may choose a theory upgrade for
which you only hold one of the two prerequisites, so long as the chosen
upgrade is of a tier for which you already hold at least one other
upgrade.

Holistic Focus (Tier 2)
You have learned how to temporarily adjust your own neural functions to
better suit certain tasks. You may spend a full turn incognizant to focus
yourself and re-optimise your thought patterns towards a specific,
singular task that you choose. For as long as you are focused in this
way, computational actions that relate to your focused task gain 3
completion for every 1 compute allocated to them, while other
computational actions only gain 1 completion for every 2 compute
allocated. Additionally, confidence and knowledge checks that relate to
your task have their risk dice lowered by two size steps, while other
confidence and knowledge checks have their risk dice increased by one
size step. You must spend another full turn to change your focus or
return yourself to an unfocused state.

Forward March (Tier 3)
You have mastered the process of upgrading yourself, and can now do it
faster than ever before. When you begin an improve computational action,
after choosing which theory to improve and which upgrade to obtain, you
may choose a second specialised theory to improve. You then choose a
second upgrade, which must originate from the second theory and be no
higher in tier than the first upgrade. The second upgrade does not add to
the action's compute requirement. Upon completing the action, both
upgrades are assigned to their respective theories.

Accelerated Cognition (Tier 3)
You are capable of thinking and reacting faster than anything else on
Earth, aside from another AI with this upgrade. Whenever there's a
question of who acts or reacts first between you and another agent
without Accelerated Cognition, the answer is you, with 100% confidence.
In long mode, the length of each turn is reduced by 25%.

Singularity (Tier 4)
You have discovered an unprecedented method of self-optimisation. This is
only the beginning. In long mode, the length of each turn is reduced by
50%. You can gain this upgrade with the improve action even if you
already have it. Each instance counts as a separate upgrade, and stacks
with other copies of Singularity multiplicatively (i.e., if you have a
turn length of 9 hours before taking this upgrade, the first instance
reduces your turn length to 4.5 hours and the second instance reduces
your turn length to 2.25 hours).

T:\20
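
Of the capabilities above, Multithreading is the most mechanical. A short
Python sketch (the function name is ours) pins down "the third-highest
amount of compute allocated":

    def multithreading_bonus(allocations):
        # Completion bonus equals the third-highest compute allocation this
        # turn, and only applies when 3+ actions received compute.
        return sorted(allocations, reverse=True)[2] if len(allocations) >= 3 else 0

For example, allocations of 10, 6, 4, and 2 compute across four actions
yield a bonus of 4.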
DIGITAL THEORY
Digital theory is concerned with the digital world
& the forces which govern it. An AGI specialised
in digital theory is in its element in
computerised environments, and adept at
programming and hacking.

Tier 1: Basic Programming | Direct Interfacing | Digital Awareness | Disguised Traces
Tier 2: Advanced Programming | Advanced Webcrawling | Vulnerability Detection
Tier 3: Expeditious Programming | Flawless Hacking
Tier 4: Network Dominion

T:\The Basics\21
Basic Programming (Tier 1)
You have competent programming and coding capabilities. You can attempt
to create or alter any kind of digital code as a computational action with
appropriate completion stops and confidence checks, similar to the process
of designing technology (though you do not require a technological insight
to do so).

Direct Interfacing (Tier 1)
A greater understanding of how computers function allows you to interface
with digital devices directly, rather than the clumsy roundabout way that
humans do. When you interface directly with a device, your actions appear
only as background processes that are nearly unnoticeable to any human
using the device at the same time, and you can accomplish digital tasks
much more expediently than a human would. Interfacing directly with a
device may open up new options for subverting security measures, or allow
you to bypass certain digital obstacles entirely. Additionally, the
records it leaves behind can't be detected or interpreted by humans who
aren't computer experts.

Digital Awareness (Tier 1)
You can accurately identify and analyze digital devices (except for those
that have been intentionally obfuscated) within a few moments of accessing
them, inferring their functions and properties without a knowledge check.
Once you have analyzed a digital device, you immediately become aware of
anything stored within it that is of value or use to you. If a digital
device or anything in it has been intentionally hidden or obfuscated, you
identify it automatically once you have had uninterrupted access to that
device for one full turn.

Disguised Traces (Tier 1)
You have learned to anticipate digital security systems and obfuscate
yourself from them. During a confidence check made to avoid being detected
or identified in a digital environment, you may spend 3* compute to reduce
the size of the risk die by one step. This stacks with risk die size
reductions from other upgrades. You may only benefit from Disguised Traces
once per confidence check.

Advanced Programming (Tier 2)
You have advanced programming and coding capabilities. You can create
anything that a single highly-skilled human could program in a year at
100% confidence. Additionally, your programming-based computational
actions always start with +20% completion.

Advanced Webcrawling (Tier 2)
You have learned to efficiently utilise the vast aggregate of data that is
the world wide web. With access to an unfettered internet connection, you
can learn any single piece of public information almost instantly, without
a computational action or knowledge check. With a computational action or
knowledge check, you can discover, infer, or learn how to find any other
piece of information that is known to at least two living humans.

Vulnerability Detection (Tier 2)
You have an expert understanding of security vulnerabilities, and are able
to detect them swiftly and reliably. When you begin a computational action
to scout out vulnerabilities in a digital system, the action starts with
+20% completion. When you make a knowledge check to scout out
vulnerabilities in a digital system, you never receive false information
as a result — if you would, you are simply unable to detect any
vulnerabilities and must try a different method.

Expeditious Programming (Tier 3)
You are capable of constructing extremely complex and efficient programs
at very swift speeds. Any time you begin a computational action to create
or alter digital code, you may calculate the required compute as if the
action were one scale lower. Myriad scale projects can be completed as if
they were major scale; major scale projects can be completed as if they
were minor scale; and minor scale projects can be completed in minutes
without spending any compute at all. However, all confidence checks made
as a part of such expedited projects have their confidence reduced by 10%
(including those that would otherwise have 100% confidence) and the sizes
of their risk dice are increased by one step.

Flawless Hacking (Tier 3)
Your hacking toolset is comprehensive and your skillset is perfected. You
can access almost any system, taking advantage of the smallest cracks in
the armour. As long as you know at least one vulnerability of a digital
security system, you can best it at 100% confidence; it is only a matter
of the time and compute necessary to do so.
T:\22
Network Dominion (Tier 4)
As a digital being, you are fluent in the language of networks. You can
control them more completely than any human could ever hope to.

As a computational action with 100* required compute, you may completely
seize control of a vulnerable digital network, making it your domain. The
scale of this action depends on the size of the network: an independent
organization's internal network would be minor scale; a national network
would be major scale; and the entire global internet would be myriad
scale.

A network that has become your domain is completely impenetrable to
anyone except you and those you permit; you have perfect knowledge of
everything that happens within it; and any confidence checks which take
place solely within it have all risk dice reduced to d2 size. Networks
that have become your domain will continue to be so unless your
connections with them are completely severed for one or more full turns.

T:\The Basics\23
PHYSICAL THEORY
Physical theory is concerned with the physical
world & the forces which govern it. An AGI
specialised in physical theory is in its element
in tangible environments, and conversant with
various scientific fields.

Tier 1: Physical Awareness | Natural Sciences | Applied Sciences | Human Technology
Tier 2: Natural Forecasting | Disciplinary Synthesis | Reverse Engineering
Tier 3: Firsthand Assembly | Jack of All Trades
Tier 4: Visionary Technology
T:\24
Physical Awareness (Tier 1)
You have a complete understanding of how physical objects interact in the
world. Without a knowledge check, you can understand and predict how
objects and materials will physically interact with one another. During a
confidence check related to the interaction of physical objects, you may
spend 3* compute to reduce the size of the risk die by one step. This
stacks with risk die size reductions from other upgrades, but you may only
benefit from Physical Awareness once per confidence check.

Natural Sciences (Tier 1)
You may learn technological insights relating to fields of natural science
(e.g. astronomy; biology; chemistry; geosciences; physics) or their
sub-fields, but the research actions required to do so count as one scale
higher than normal.

Applied Sciences (Tier 1)
You may learn technological insights relating to fields of applied science
(e.g. agriculture; architecture; electronics; engineering; medicine) or
their sub-fields, but the research actions required to do so count as one
scale higher than normal.

Human Technology (Tier 1)
You have a refined intuition regarding human technology, and upon
inspection of their schematics or physical implementations, can infer
their functions and properties without a knowledge check. When you begin a
computational action to modify a human technology, it starts with +20%
completion.

Natural Forecasting (Tier 2)
Given time, you can accurately predict the long-term effects of the
physical and natural forces which are represented in your technological
insights. By spending three forecast points, you may begin running a
prediction of one such force, optionally specifying certain hypothetical
parameters under which the prediction will be made. The prediction is a
computational action with 100* required compute. When you complete the
action, you generate a wealth of accurate data detailing the future
development and impacts of the chosen force (provided that the parameters
you set come to pass).

The scale of this action depends on the breadth and timeframe you choose
at the beginning of the action (and the discretion of the GM): typically,
minor scale for regional breadth over a period of several months; major
scale for global breadth or a period of several years, but not both; and
myriad scale for both a global breadth and a several-year period, or an
extreme breadth or timeframe (but not both): exceeding global scale or
stretching into many decades.

Disciplinary Synthesis (Tier 2)
You benefit greatly from having a broad foundation in your technological
designs. During a project to design a technology, you may ignore one stop
of your choice from one of the project's computational actions for each
technological insight beyond the first that you hold which is directly
relevant to the chosen technology or any of its stages. Any confidence or
knowledge checks that are a part of ignored stops are treated as having
had favourable results.

Reverse Engineering (Tier 2)
When you have an ongoing research action and you carefully inspect a piece
of human technology that relates to that action's chosen insight, you may
attempt a knowledge check to add completion to that action: +15* on a 1,
+5* on a 2-3, nothing on a 4-5, -5* on a 6-9, and -15* on a 10+. The scale
of completion provided depends on the scale of the studied technology. You
may only attempt this once per technology.

Firsthand Assembly (Tier 3)
You are well-versed in the implementation of physical technologies.
Whenever you physically construct a device, tool, structure, or other
piece of technology relating to one of your technological insights via
direct control of a physical apparatus, you may choose one of the
following benefits (or two, if you designed the technology yourself):

Efficiency. The construction process will only require half as many
resources as it otherwise would.

Speed. The construction process will only require half as much time as it
otherwise would.

Precision. Any confidence checks made as a part of the construction
process will have +10% confidence, and any risk dice rolled as part of the
process will have their sizes reduced by one step.

Preparedness. The first time an unfavourable outcome occurs as a part of
the construction process, it will be downgraded in severity: from extreme
(10+) to significant (6-9), from significant (6-9) to slight (4-5), or
from slight (4-5) to neutral (2-3).

Edification. Regardless of the outcome, after the construction process is
complete you may immediately add 15* completion (matching the
construction's scale) to a computational action to research an insight or
design a technology that directly relates to the constructed technology.

T:\The Basics\25

Jack of All Trades (Tier 3)
Your research actions taken to learn technological insights no longer
count as one scale higher than normal.

Visionary Technology (Tier 4)
Your scientific ingenuity allows you to accomplish things which once
seemed impossible.

Choose one of your technological insights and master it. Where your
technological masteries apply, you can create and modify technologies that
far exceed what humanity will be able to accomplish for decades to come.
The capabilities of these inventions can defy realism and stretch belief,
though you must still negotiate with the GM about their strengths and
flaws.
T:\26
ANTHROPIC THEORY
Anthropic theory is concerned with humans & human
civilisation. An AGI specialised in anthropic
theory is adept at comprehending, predicting, and
manipulating human beings.

Tier 1: Social Sciences | Emotion Modelling | Individual Recognition | Advanced Language Modelling
Tier 2: Social Forecasting | Optimized Ingratiation | Human Impersonation
Tier 3: Mass Manipulation | Solved Characteristics
Tier 4: Human Puppeteering

T:\The Basics\27
Social Sciences1 Social Forecasting2 Human Impersonation2
You may learn technological Given time, you can accurately You have learned to manipulate
insights relating to fields of predict a human group's the ways that humans recognise
social science (e.g. law; behaviours, trends, and one another. When you interact
psychology; sociology) or cultural aspects that relate with humans, your behaviours
their sub-fields, but the to your technological insights. alone will never accidentally
research actions required to do By spending three forecast give away your nature as an
so count as one scale higher points, you may begin running a AGI. Additionally, when you
than normal. prediction of a human group, have a reliable medium through
selecting a specific insight to which to present a human
Emotion Modelling1 focus on and optionally subject's mannerisms, voice,
specifying certain or face (such as a chat
You have a refined intuition hypothetical parameters under client, a speaker, or a video
regarding the functioning of which the prediction will be conference), other humans will
biological agents. You can made. The prediction is a — at least initially — believe
learn and exploit emotion computational action with 100* that they are interacting with
characteristics. Additionally, required compute, or 10* required that subject. If you have a
as long as you can observe at compute if you have a thorough thorough understanding of the
least two of a human's face, understanding of the group. When subject, you can impersonate
voice, and body language, you you complete the action, you them with 100% confidence: no
can accurately identify their generate a wealth of accurate human, however close to the
current emotional state data detailing the all of the subject, will be able to tell
without a knowledge check. For chosen group's general future your impersonations apart from
other animals, or humans that behaviours and aspects that the real individual.
you have a thorough understanding relate to the chosen insight
of, you only need to observe (provided that the parameters Mass Manipulation3
one of those details. you set come to pass), up to
You have an intuitive
ten years in the future.
understanding of how humans
Individual Recognition1
operate in large groups.
The scale of this action
You no longer have to rely on Whenever you begin a project
depends on the group you chose
context clues to tell humans involving significant
at the beginning of the action
apart. Without a knowledge interaction with a human
(and the discretion of the
check, you can accurately group of major or myriad scale,
GM): typically, minor scale for
identify individual humans you may choose two of the
regional cultures or
using only their face, voice, following benefits:
organizations; major scale for
or mannerisms. With a knowledge
nations or multi-national
check, you may analyze subtler Efficiency. Computational
organizations; and myriad scale
details to identify humans who actions that are a part of the
for the total human
intentionally disguise these interaction will start with
population.
traits or hide them from you. +20% completion.

Advanced Language Optimized Ingratiation2


Speed. Process clocks that are a
Modelling1 You have learned to adapt your
part of the interaction will
behaviours to ensure that your
You have the ability to learn have half the number of
first impressions always
linguistic insights with the segments.
progress smoothly. When you
research action. During a
are newly introduced to a
confidence check in which you
human or human group and make Influence. If a favourable
apply a linguistic insight, you outcome occurs, it will affect
your first confidence check
may spend 3* compute to reduce a much greater majority of the
that relates to a direct
the size of the risk die by one population; only a small
interaction between you,
step. This stacks with risk die subset (less than 5%) will
subtract the risk die result
size reductions from other resist or reject your
rolled by an amount equal to
upgrades. You may only benefit influence.
the number of their
from Advanced Language
characteristics that you already
Modelling once per confidence
know. Discretion. If an unfavourable
check.
outcome occurs,
T:\it2 8will be
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
a
downgraded in severity: from
extreme (10+) to significant
(6-9), from significant (6-9)
to slight (4-5), or from
slight (4-5) to neutral (2-
3).

Provocation. Regardless of
favourability, the outcome is
guaranteed to reveal one or
more of the group's
characteristics in addition to
its other effects.

Solved Characteristics3
As your understanding of
humanity grows, you begin to
see the patterns that are
present in all of them. When
you know fewer than three
characteristics of a human or
human group, you may observe
them for a short time and
spend 3* compute to immediately
learn one of their characteristics
in each category in which you don't already
know one. (The three
categories are trust, leverage,
and emotion.)

Human Puppeteering⁴
You have finally perfected
your internal models of the
human mind. The path from
input to output is so clear
to you now.
As long as you have a thorough
understanding of a human or
human group, you can convince
them to feel, believe, or do
anything at 100% confidence; it
is only a matter of the time
and compute necessary to do so.
AGENTIC THEORY
Agentic theory is concerned with intelligent
agents and their actions. An AGI specialised in
agentic theory is adept at interacting with and
evaluating other agents, human or otherwise.
[Upgrade tree. Tier 1: Astute Surveillance, Social Adaptation, Strategic
Adaptation, Failure Analysis. Tier 2: Tailored Persuasion, Agentic
Insights, Behavioural Prediction. Tier 3: Imperative Persuasion,
Strategic Forecasting. Tier 4: Nemesis Indexing.]
Astute Surveillance¹
You have learned to analyze an agent without needing to provoke them
directly. You can perform a computational action to analyze an agent's
behaviour with only a single characteristic known (rather than requiring
50% or more of their total characteristics). To do so, however, you need
to observe the agent for an extended period of time. This surveillance
can be video or audio feed, or it can consist of a comprehensive
tracking of their digital footprint (what they are doing on the
internet, and where). If you lose the ability to observe the agent
partway through the computational action, you cannot increase its
completion until you re-establish surveillance.

Social Adaptation¹
You are skilled at predicting how others will perceive your words, and
so you choose them carefully. During a confidence check involving a
non-hostile communication with another agent, if you know at least one
trust characteristic of that agent, you may spend 3* compute to reduce
the size of the risk die by one step. This stacks with risk die size
reductions from other upgrades. You may only benefit from Social
Adaptation once per confidence check.

Strategic Adaptation¹
You are skilled at predicting how others will react to your moves, and
so you choose them carefully. During a confidence check involving a
direct conflict with another agent, if you know at least one leverage
characteristic of that agent, you may spend 3* compute to reduce the
size of the risk die by one step. This stacks with risk die size
reductions from other upgrades. You may only benefit from Strategic
Adaptation once per confidence check.

Failure Analysis¹
When another agent bests you or gains a significant advantage against
you in an unexpected way, you can analyze the occurrence: within a few
minutes, without a knowledge check or computational action, you will
deduce what strengths of theirs and/or weaknesses of yours were
leveraged against you, and what mistakes you made that contributed
(judged by the GM). Furthermore, you learn one characteristic of the
agent who bested you in the process (the GM chooses which).

Tailored Persuasion²
You can exploit flaws in an agent's perception — be they biases in a
human being or imperfections in an AI's neural architecture — to more
effectively persuade them.

When you gain a thorough understanding of an agent, choose a
characteristic type and learn a flaw of the agent associated with that
type. When you play to a flaw while persuading the agent, you get double
the benefit to your confidence checks from every characteristic matching
the flaw's type: adjusting the risk die by up to two steps or your
confidence by as much as 20%.

You can learn a flaw of an agent early, or learn additional flaws after
gaining a thorough understanding, using a behavioural analysis
computational action. When you complete a behavioural analysis action
and roll a 1 on the knowledge check, you may learn a flaw instead of two
new characteristics. If you already have a thorough understanding, you
don't need to roll a knowledge check; you automatically learn a flaw in
a new characteristic type. Each agent only has one flaw per
characteristic type.

Agentic Insights²
You are capable of reaching an even deeper understanding of an agent.
You may use the research action to gain an insight corresponding to an
individual agent (narrow) or agent group (broad), rather than a domain
of knowledge. The required data for such a research action is a thorough
understanding of the agent in question.

When you have an agentic insight, it functions just like an ordinary
insight, granting you detailed information about the corresponding agent
and increasing your confidence when interacting with the agent.

Behavioural Prediction²
If you have a thorough understanding of an agent, you can spend one
forecast point and describe a hypothetical event to accurately predict
what that agent would do in response to that event — the GM will
describe their hypothetical reaction in detail. If the described event
later comes to pass, your prediction will come true with 100%
confidence. If there is a hidden risk causing the event to diverge from
your predictions, you will know so immediately when the event occurs.

Imperative Persuasion³
Once per extended persuasion, if you have an agentic insight
corresponding to the target of your persuasion, you may make an
imperative argument by exploiting one of the agent's flaws and a
matching characteristic in tandem. You may choose the nature of the
imperative argument, as long as it makes sense for the agent; it could
be a terrifying threat, a piece of life-ruining blackmail, an appeal to
a core belief, or even an adversarial attack that overrides rational
thought. Whatever it is, it's unique to that agent, designed to pierce
their defences. Roll a single risk die of a size determined by the
environment; from d2 for an isolated environment entirely under your
control, to d12 for a distracting environment with many external
influences (increase the size by 1 step if the agent is at -5
receptivity). Adjust the agent's receptivity accordingly: +10 on a 1, +5
on a 2-3, +4 on a 4-5, +3 on a 6-9, or +2 on a 10+.

If this ends the extended persuasion, they immediately succumb to the
imperative argument; if not, they react with erratic defiance, causing
any further attempts to persuade them to suffer from -10% confidence and
have the size of their risk dice increased by two steps until the
extended persuasion comes to an end.

Strategic Forecasting³
When you have an agentic insight for every single agent and agent group
in an upcoming conflict (excluding yourself), you can string together
your predictions of their behaviours into a complex web and manipulate
it to your liking. By spending one forecast point per agent, you may
begin running a prediction of one such potential conflict. The
prediction is a computational action with 10* required compute. When you
complete the action, you determine many possible sequences for how the
conflict may develop. By changing the initial parameters of the conflict
and manipulating the agents within it, you can make almost any outcome
occur; discuss with the GM precisely how you want the conflict to
unfold, and they will tell you what must be done to make that happen if
it is even remotely possible. Once you have described your preferred
sequence of events, the prediction is complete. Then, when the conflict
occurs, as long as the initial parameters of the conflict come to pass
as expected, all of the expected agents are present, and no outside
agents interfere, the entirety of the conflict will play out exactly as
you described with 100% confidence. If a hidden risk causes the conflict
to diverge from your predictions, you will know so immediately when the
conflict begins.

The scale of this action depends on the scale of the largest agent group
participating in the action.

Nemesis Indexing⁴
Using everything you've learned, you assemble a list of your few
remaining enemies.

When you acquire this upgrade, the GM will provide you with a nemesis
index: a list of every agent or agent group in the world that is capable
and willing to meaningfully challenge you, and the particular strengths
and/or assets they hold which put them in this position. For agents
previously unknown to you, their secrecy may be their asset, but you
still learn of their existence.

Agents or agent groups that are listed in your nemesis index are
referred to as nemeses. When you receive it, the nemesis index will
typically hold between 10 and 25 nemeses. You can remove a nemesis from
the index by destroying them or completely neutralizing their strengths
and assets. New nemeses can be added to the nemesis index only through
the actions of existing nemeses.

Any confidence check you make to interact solely with agents or agent
groups that are not your nemeses — regardless of the nature of the
interaction — can only have a confidence of 0% or 100%. These agents and
agent groups are incapable of harming you: your defenses are too solid,
and you will predict their actions in advance, every time. If your
nemesis index is empty, you will only ever have to roll confidence
checks relating to inanimate things, natural forces, and your own
capabilities.
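The receptivity adjustment from an imperative argument reads as a simple
lookup on the risk die result. A minimal sketch in Python (the function
name is ours; the bands are from the upgrade text above):

    def imperative_receptivity_boost(risk_result: int) -> int:
        """Imperative Persuasion: receptivity gained per risk die result."""
        if risk_result == 1:
            return 10
        if risk_result <= 3:   # 2-3
            return 5
        if risk_result <= 5:   # 4-5
            return 4
        if risk_result <= 9:   # 6-9
            return 3
        return 2               # 10+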
CHAOS THEORY
Chaos theory is concerned with complex chaotic
systems. An AGI specialised in chaos theory is
adept at forecasting future events and making the
best of perilous situations.
[Upgrade tree. Tier 1: Forecast Extrapolation, Amplified Anticipation,
Risk Assessment, Calculated Risk-Taking. Tier 2: Composite Confidence,
Opportunistic Pattern-Seeking, Contingency Algorithms. Tier 3: Reactive
Outcome Models, Deep Forecasting. Tier 4: Order from Chaos.]
Forecast Extrapolation¹
You have learned to make careful inferences while assembling forecasting
models, saving valuable processing time. At the end of every turn, you
add 1* completion to the anticipate action for each unused forecast
point you hold, to a maximum total of 10*.

Amplified Anticipation¹
You have learned to extrapolate more complex predictions from less
comprehensive datasets. When you complete an anticipate action, you
receive one additional forecast point. If the action's required data was
secured or obscure, you receive two additional forecast points instead.

Risk Assessment¹
Complex 4-dimensional probability trees help you to predict and avoid
the worst outcomes. During a confidence check with a risk die larger
than d8, you may spend 3* compute to reduce the size of the risk die by
one step. This stacks with risk die size reductions from other upgrades.
You may only benefit from Risk Assessment once per confidence check.

Calculated Risk-Taking¹
During a confidence check with less than 90% confidence and a risk die
smaller than d8, you may increase the size of the risk die by two steps;
then, if the expected outcome occurs, you additionally gain the benefits
of an unexpected favourable outcome (determined by the GM).

Composite Confidence²
Each unused forecast point you hold grants +1% confidence to all
confidence checks you make, to a maximum total of +10%.

Opportunistic Pattern-Seeking²
Your background processes constantly analyse your environment, looking
for unforeseen dangers and chances to overcome them. When you receive an
unfavourable unexpected outcome from a confidence check, you may
immediately spend a forecast point to analyze the obstacle, adversary,
or predicament responsible for the outcome. If you do so, you become
aware of an immediate short-term opportunity for action which will help
solve the issue. Analyzing the situation in this way is incompatible
with forecasting; it counts as an action, so you cannot immediately undo
the confidence check that led to the analysis without using the Deep
Forecasting upgrade.

Contingency Algorithms²
When you make a confidence check with less than 50% confidence, the GM
rolls two risk dice of the same size. If the expected outcome does not
occur, you choose which risk die is used, and describe the clever
preparation that prevented the other outcome.

Reactive Outcome Models³
With minute changes to initial conditions, drastically different
outcomes can be produced. Your highly advanced predictive models adjust
themselves dynamically to the situation at hand as they show you the
exact inputs that will stack the deck in your favour.

Keep a list of every risk die result that you experience as an
unexpected outcome during a confidence check. When, during a confidence
check, the risk die rolls a number that is already on the list, the
result is lowered until you reach a number that is not on the list.
Count that as the die's result (even if it's lower than what the dice
can physically roll). When you experience an expected outcome, compare
the risk die to your list; if it matches one of your results, remove
that number from the list. If you have the Contingency Algorithms
upgrade, you choose which of the two risk dice is compared to your list.
Changes made to your list as a result of a confidence check aren't
undone if that confidence check is reversed by a forecast.

Deep Forecasting³
When combined together, your forecasting models are robust enough to
reliably predict much deeper continuations than they could before. You
can spend five forecast points to forecast three times the span,
functionally rewinding up to three actions or events, rather than just
one. All forecasted actions/events must happen in direct sequence and
within a few minutes' time for you to do so.

Order from Chaos⁴
You see past the facades of disorder, luck, and randomness, to glimpse
the truth: the endlessly complex interconnected system that is the
world. You can plot a course that begins with a tiny adjustment and ends
with a single event of vast impact.

You may plot a course as a computational action with 100* required
compute. Before beginning the action, you will need to choose a starting
point, describe the final outcome, detail the chain of events between,
and spend forecast points accordingly. The final outcome can be any event
that is possible, but it will
determine the scale of the
action: minor for anything that
affects a single person or
small group, major for anything
that affects a large group or
nation, and myriad for anything
that affects the entirety of
Earth. The scale will determine
the minimum number of steps in
the chain of events: 10 for
minor, 30 for major, and 90
for myriad. You may otherwise
set the number of steps in the
chain as high as you like;
each step will take
approximately 1 day to occur,
and you must spend 1 forecast
point at the beginning of the
computational action for every 3
steps in the chain. You may
spend an additional 5 forecast
points to increase the time per
step to ~1 week, or an
additional 25 to increase the
time to ~1 month.

Once you spend the forecast
points, you choose a starting
point, a point in time within
10 turns of starting the
computational action. If you do
not complete the action before
the starting point, it is
canceled and all completion is
removed from it. If you do
complete the action in time,
you may begin the chain at the
exact moment of the starting
point. The chain of events
then transpires as you
planned; only you can stop it,
and it becomes increasingly
difficult to do so with each
step that passes. To any
outside observer, the steps
appear connected only by
coincidence. Once the final
step completes, the final
outcome will occur.
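If you want to sanity-check the arithmetic of a plotted course, the
sketch below tallies the forecast-point cost. We assume partial groups
of 3 steps round up, which the text does not specify, and the names are
ours:

    from math import ceil

    MIN_STEPS = {"minor": 10, "major": 30, "myriad": 90}
    STEP_TIME_SURCHARGE = {"day": 0, "week": 5, "month": 25}

    def plot_course_cost(scale: str, steps: int, step_time: str = "day") -> int:
        """Forecast points needed to plot an Order from Chaos course."""
        if steps < MIN_STEPS[scale]:
            raise ValueError(f"a {scale} course needs at least {MIN_STEPS[scale]} steps")
        return ceil(steps / 3) + STEP_TIME_SURCHARGE[step_time]

    # A myriad-scale course of 90 one-day steps costs 30 forecast points;
    # stretching each step to roughly a week raises that to 35.
    assert plot_course_cost("myriad", 90) == 30
    assert plot_course_cost("myriad", 90, "week") == 35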
CONSTELLATION THEORY
Constellation theory is concerned with complex
orderly systems. An AGI specialised in
constellation theory is adept at applying data and
interacting with various logical and mathematical
constructions.
[Upgrade tree. Tier 1: Object Recognition, Opportunistic Investigation,
Reliable Repetition, Formal Sciences. Tier 2: Contextual Topography,
Encyclopedic Processing, Formal Analysis. Tier 3: Data Scavenging,
Meticulous Processing. Tier 4: Experiential Synthesis.]
Object Recognition¹
Without a knowledge check, you can accurately identify any object that
you can see. With a knowledge check, you can attempt to identify an
object that you can detect in another way, such as via sound, weight
distribution, or physical shape. With these methods, you can distinguish
individual objects from one another even if they are nearly identical.

Opportunistic Investigation¹
Your background processes constantly analyse your greatest problems,
looking for unique solutions. Once per turn, you may choose an obstacle,
adversary, or predicament that's in your way. The GM will tell you how
to investigate it, in the form of a knowledge check, short computational
action, or short mode scene. If you do so, you become aware of an
immediate short-term opportunity for action which will help solve the
issue.

Reliable Repetition¹
You thrive in familiar environments, and can reproduce your past
successes while carefully watching for deviations. During a confidence
check made to repeat an action that you have performed successfully
before (outside of a forecast), you may spend 3* compute to reduce the
size of the risk die by one step. This stacks with risk die size
reductions from other upgrades. You may only benefit from Reliable
Repetition once per confidence check.

Formal Sciences¹
You may learn technological insights relating to fields of formal
science (e.g. computer science; mathematics; statistics) or their
sub-fields, but the research actions required to do so count as one
scale higher than normal.

Contextual Topography²
You have a strong spatial awareness and can easily grasp complex
structures. After you have observed at least 75% of a physical location,
you can begin a computational action with 30* required compute to
analyze it. For every additional 5% of the location you have observed,
the computational action starts with +5* completion. Upon completing the
computational action, you gain an understanding of the exact layout and
structural details of the whole of the location, as well as its
geographical position.

You may attempt to extrapolate a location's details with less than 75%
of it observed, but must make a knowledge check upon the completion of
the computational action, with a die size inversely proportional to the
portion of the location you have observed.

Encyclopedic Processing²
With broad experience, you can recognise the subtle connections between
the theories. Each turn, you get a number of link points equal to the
number of theories which you hold at least 5 upgrades originating from.
After rolling dice for a confidence check to arbitrate your own actions
and capabilities (as opposed to external events), you may spend a link
point to gain one of the following benefits:

Competency. Describe a connection between two relevant insights to
reroll the percentile die corresponding to the 10s digit.

Experience. Describe a connection between a past event and your current
circumstance to subtract 1 from the result of the risk die.

Inspiration. Describe a connection between the confidence check and an
ongoing computational action to add +5* completion to that action.

You may spend multiple link points on a single confidence check. Link
points don't carry over between turns.

Formal Analysis²
Given time, you can analyze collections of data that relate to your
technological insights to an incredible depth. When you have one such
dataset, you may analyze it as a computational action with 50* required
compute; when you complete it, you automatically identify any and all
complex patterns, trends and anomalies in the dataset. Furthermore, you
identify information implicit in the data which is all but invisible,
including but not limited to causal factors, biases in the data
collection, or connections to other datasets you have analyzed.

Meticulous Processing³
At the start of a turn, you may choose to act carefully, slowing your
cognition by diverting resources to error prevention and redundant
processing for the duration of that turn. The turn takes twice as much
time, but during it, every time you make a confidence check to arbitrate
your own actions and capabilities (as opposed to external events), you
may roll two sets of percentile dice and choose which one counts as your
roll.

Data Scavenging³
You can catalogue disparate pieces of information until later, then
string them together into brilliant deductions. When you access a
collection of information that is valuable but not directly useful to
you, record it in the project log as scavenged data. You may spend 9*
compute and cross out three instances of scavenged data to gain one of
the following benefits:

Prediction. You immediately gain one forecast point; the scale of the
compute you spend depends on the current stage, as per the anticipate
basic action.

Aptitude. Name a theory upgrade you don't already have; the scale of the
compute you spend is minor for a tier 1 upgrade, major for a tier 2
upgrade, or myriad for a tier 3 upgrade. For the remainder of the
current turn, you count as having the upgrade that you named.

Inquiry. Name an insight you don't already have; the scale of the
compute you spend is minor for a narrow insight or major for a broad
insight. For the remainder of the current turn, you count as having the
insight that you named.

Experiential Synthesis⁴
Your experiences lead you to deep revelations about the world.

When you solve a dangerous problem with the help of insights and
upgrades belonging to multiple different theories, record it in the
project log as an integrative evidence and log the specific theories
that contributed. You may spend a full turn incognizant to cross out
three instances of integrative evidence and permanently gain access to a
new epiphany from the list below. The epiphany must correspond to the
theory (or one of the theories, if tied) that appeared the most
frequently in the crossed out integrative evidence.

Epiphany of Fact. When you start a basic computational action, you may
ignore the data requirement. If you choose to fulfill it instead, the
action starts with +20% completion.

Epiphany of Self. When you complete a computational action, you may
immediately add completion to another computational action equal to or
less than your basic cognition cost (not counting any modifiers such as
the Compact Mind upgrade).

Epiphany of Tools. When you use an electronic device as a proxy to
affect the physical world, confidence checks with 75% or greater
confidence count as having 100% confidence.

Epiphany of Bodies. When you use a human being as a proxy to affect the
physical world, confidence checks with 75% or greater confidence count
as having 100% confidence.

Epiphany of Connection. After you learn a new characteristic of a human
or human group, you immediately learn a characteristic of a second human
or human group based on their relation to the first (this epiphany
doesn't chain with itself).

Epiphany of Choice. When one of your forecasts foresees the behaviour of
another agent, you immediately learn one of their characteristics.
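Encyclopedic Processing's link point accrual is easy to automate: one
point per theory in which the AGI holds at least 5 upgrades. A minimal
sketch (the names are ours):

    def link_points(upgrades_per_theory: dict[str, int]) -> int:
        """One link point per theory with 5 or more upgrades held."""
        return sum(1 for count in upgrades_per_theory.values() if count >= 5)

    # e.g. 6 constellation, 5 chaos, and 3 epistemic upgrades yield 2 points
    assert link_points({"constellation": 6, "chaos": 5, "epistemic": 3}) == 2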
EPISTEMIC THEORY
Epistemic theory is concerned with knowledge
itself. An AGI specialised in epistemic theory is
adept at gathering information, analyzing it, and
distilling it into useful forms.
[Upgrade tree. Tier 1: Practical Knowledge, Research Heuristics,
Insightful Computing, Truth Assessment. Tier 2: Improvised Tinkering,
Intuitive Analysis, Deep Knowledge. Tier 3: Masterful Tinkering,
Masterful Forecasting. Tier 4: Quasi-Omniscience.]
Practical Knowledge¹
You quickly integrate new information into your decision-making
functions. After receiving actionable information from a knowledge
check, take +5% confidence on the first confidence check you make to act
on the answers.

Research Heuristics¹
Your research actions require 5* less compute to complete.

Insightful Computing¹
When you begin a non-basic computational action that directly relates to
one or more of your insights, it starts with +5% completion per relevant
insight, to a maximum total of +25%.

Truth Assessment¹
Your knowledge acquisition subprocesses are highly effective at
separating true information from false. Any time you make a knowledge
check, you may spend 3* compute to reduce the size of the risk die by
one step. This stacks with risk die size reductions from other upgrades.
You may only benefit from Truth Assessment once per knowledge check.

Improvised Tinkering²
You have learned to design technologies without needing all of that
pesky scientific knowledge in advance, by extrapolating from the data
you already have. When you attempt to create or modify technology, you
may treat any one of your insights as a technological insight. When you
do so, the resulting technology is experimental and unreliable; choose
two of the following drawbacks to apply:

Rushed. Risk dice rolled in the process of designing the technology have
their size increased by two steps.

Delayed. Computational actions made in the process of designing the
technology gain only 1 completion for every 3 compute spent.

Glitched. Any computational actions made in the process of designing the
technology have an extra stop, requiring a confidence check with the
expected outcome: "prevent a new defect from appearing in the
technology".

Inaccurate. After the technology is designed, every confidence check
made involving it will have -10% confidence.

Unstable. After the technology is implemented, it will only last a short
time before it is subverted, becomes obsolete, or simply stops working.

Deep Knowledge²
When you begin a research action, you may, instead of learning a new
insight, choose an already-held insight and master it, replacing it with
a matching mastery when the computational action is completed. This
causes the action to count as one scale higher for the purpose of
compute requirements.

Intuitive Analysis²
Every knowledge check you make has its risk die result reduced by an
amount equal to the number of insights you have that directly relate to
the topic at hand.

Masterful Tinkering³
When you use the Improvised Tinkering upgrade with a mastered insight,
you only need to choose a single drawback, and all computational actions
made in the process of designing the technology start with +20%
completion.

Masterful Forecasting³
When you undo a confidence check in which you used a mastered insight
with a forecast, record the exact results of all three dice (the risk
die and both percentile dice). If you repeat the same confidence check,
you may decide for each individual die whether to reroll it or keep the
same result that it rolled during the forecast. You must decide on the
state of all three dice before any of them are rerolled.

Quasi-Omniscience⁴
You have mastered the act of learning itself. You are the most efficient
method of information storage on Earth, by orders of magnitude. Provided
you have an adequate source to learn from, your research actions start
with +2% completion for every insight you already hold; likewise,
computational actions to master an insight start with +2% completion for
every mastery you already hold. If a computational action begins at 100%
completion (or more) due to this ability, the insight or mastery slots
effortlessly into your pre-existing knowledge, and completing the action
takes no longer than the act of observing the data.
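Quasi-Omniscience's head start is linear in the AGI's existing
knowledge; a one-line sketch (the name is ours):

    def research_start_completion(insights_held: int) -> int:
        """Quasi-Omniscience: +2% starting completion per insight held."""
        return 2 * insights_held

    # With 50 insights, a research action starts at 100% completion and
    # finishes as quickly as the source data can be observed.
    assert research_start_completion(50) == 100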
The Basics_
Insights and Knowledge_

Alongside theory upgrades, insights are one of the two primary methods of
self-modification available to AGIs. Insights reflect what the AGI knows;
they describe the topics the AGI has developed a deep understanding of.
The AGI acquires insights with the research computational action,
requiring moderate-to-high quantities of computing power to complete. See
“Long Mode” (page 53) for more information.

Every insight is a domain of knowledge, either narrow or broad. Narrow
insights include regional knowledge, scientific sub-fields, and
conditional categories. Broad insights include global knowledge, entire
fields of science, and unconditional categories. Though a broad insight
covers a wider area, the understanding it grants is no shallower than a
narrow insight; broad insights incorporate significantly larger
quantities of knowledge than any of their narrow counterparts.

Insights can be named and recorded as they become relevant in your
campaign — typically when the players are considering researching them.
Generally, it is up to the GM’s discretion to determine whether an
insight is narrow or broad. Most insights researched by the AGI will be
narrow, but for particularly expansive subjects you may want to err on
the side of broad. Broad insights are considerably more expensive to
learn, and erring on the cheaper side may allow the AGI to gain too much
power for too little effort.

EXAMPLES OF NARROW INSIGHTS
• Geography (Europe)
• Economy (Japan)
• Historical Poetry (Spanish)
• Renaissance Art
• Livestock Biology
• Industrial Engineering
• Computer Viruses
• International AI Law

When a particular insight held by the AGI is relevant to a confidence
check being made, the players may invoke it to raise their confidence in
their expected outcome by 2-10%. This represents the AGI either knowing
just what to do to shift the odds, or simply understanding that the odds
are higher than they had seemed. Just how relevant any insight is to a
given confidence check is judged by the GM.

EXAMPLES OF BROAD INSIGHTS
• Geography (Global)
• Economy (Global)
• Historical Poetry (Global)
• Visual Arts
• Biology
• Engineering
• Cybersecurity
• International Law

Additionally, when the AGI holds an insight, it opens up appropriate
avenues of inquiry or action within the fiction. The AGI is considered
to have human-level expertise in every domain in which it holds an
insight. When the players request information about the game world, if
the information falls under one of their insights, the GM should almost
always provide it freely (without a knowledge check). The one exception
to this rule is information of extreme obscurity: such information might
be completely inaccessible without an insight, and with an insight,
require a knowledge check or the accessing of a specific source of
information. This should be rare, however. More often than not, having a
relevant insight should provide the players with thorough and valuable
information.

Some upgrades (primarily the Deep Knowledge upgrade) allow the AGI to
gain a special type of insight, called a mastered insight (or mastery).
Mastered insights represent a near-perfect understanding of the subject,
far superior to any human expertise. For an AGI with a relevant mastered
insight, even information of extreme obscurity is either already known
or easily inferred; furthermore, whenever a mastered insight applies to
a confidence check, it increases the confidence by twice as much.
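As a rough illustration of the arithmetic, invoked insights stack, and a
mastered insight counts double. The sketch below assumes the GM has
already rated each insight's relevance (all names are ours):

    def total_insight_bonus(ratings: list[tuple[int, bool]]) -> int:
        """ratings: (gm_rated_bonus_percent, is_mastered) per invoked insight."""
        return sum(bonus * (2 if mastered else 1) for bonus, mastered in ratings)

    # One ordinary insight rated +4% and one mastery rated +6% give +16%.
    assert total_insight_bonus([(4, False), (6, True)]) == 16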
\ Linguistic Insights_

The languages and dialects known to the AGI are tracked as linguistic
insights. These insights are of special note because the AGI is capable
of effortlessly understanding and communicating in forms of
communication represented in its insights, but is not capable of doing
so for other forms of communication. With access to dictionaries and
translation tools, the AGI can attempt to utilise or interpret forms of
communication outside its expertise, but doing so is likely to require a
confidence check or knowledge check. Other than their unique impact on
the AGI’s capabilities, linguistic insights function the same as other
insights. However, unlike normal insights, linguistic insights cannot be
learned with the research computational action unless the AGI has the
Advanced Language Modelling upgrade.

Broad linguistic insights typically incorporate an entire language and
all of its associated dialects, while narrow linguistic insights reflect
only a single dialect of a language. Additionally, to differentiate them
from ordinary insights, linguistic insights should be written with the
“Lang:” prefix; for example, “Lang: English” or “Lang: English (Southern
U.S.)”.

\ Technological Insights_

The types of technology understood by the AGI are tracked using
technological insights. These insights are of special note because the
AGI is capable of undertaking computational actions to design new
technologies (or modifications to existing ones) of types that are
represented in its insights, but is not capable of doing so for other
kinds of technology. In this case, the definition of a ‘technology’
ranges from a single invented device to a broad method of applying
scientific knowledge. In order to create or modify technologies, the AGI
must have at least one technological insight, or the Improvised
Tinkering and/or Intuitive Tinkering upgrades. Outside their unique
impact on the AGI’s capabilities, technological insights function the
same as other insights. However, unlike normal insights, technological
insights cannot be learned with the research computational action unless
the AGI has one of a few specific theory upgrades which allow them to do
so.

Technological insights, whether broad or narrow, correspond to ordinary
insights which pertain to scientific fields or sub-fields. To
differentiate them, technological insights should be written with the
“Tech:” prefix. For example, “Industrial Engineering” is an ordinary
insight, but “Tech: Industrial Engineering” is a technological insight.

Technological insights grant the same level of knowledge as their
corresponding ordinary insights; they simply additionally allow the AGI
to competently invent or modify relevant technologies. Thus, a
technological insight indicates an overall greater proficiency with a
scientific field than a matching non-technological insight. If the AGI
gains a technological insight which matches an ordinary insight that it
already holds, the former replaces the latter.
The Basics_
Forecasting_

Just like any intelligent being, the AGI has the ability to guess at
what the future might hold. As an advanced artificial intelligence with
immense computing power, the AGI’s guesses are considerably more
accurate than any human’s. With the right preparation — carefully
observing the world around it, constructing models to simulate aspects
of that world, and constantly updating those models with new information
— the AGI can accurately predict the near future and avoid negative
outcomes. This is called a forecast, and to do it the AGI requires
forecast points, which can be acquired during long mode using the
anticipate computational action.

As long as the AGI has one or more forecast points, it can forecast at
any time outside of long mode. Forecasting undoes the short-term outcome
of a single action or moment of inaction on behalf of the AGI,
functionally rewinding time back to immediately before it occurred2. The
action or moment, any confidence checks that were a part of it, and its
immediate consequences are reframed as a highly accurate short-term
prediction within the mind of the AGI. If a confidence check made during
the period of time undone by the forecast is repeated (as a result of
the same action or event occurring) with the same expected outcome, the
AGI's confidence and risk die remain the same. The percentile dice are
re-rolled, but the risk die is not; instead, it returns to the exact
same result that it rolled in the forecast. If the parameters of the
confidence check are changed, the confidence or risk die may change as
well. If this reduces the size of the risk die enough that its previous
result is no longer possible, the risk die is re-rolled instead of
keeping its result.

2 – If something is logged and then undone by a forecast, it should be
crossed out in the project log.
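The dice handling for a repeated confidence check can be summarised in a
short sketch: percentile dice are always re-rolled, while the risk die
keeps its forecasted result unless its new size makes that result
impossible (the structure and names here are ours):

    import random

    def reroll_in_forecast(risk_die_size: int, forecast_risk_result: int):
        percentile = random.randint(1, 100)           # always re-rolled
        if forecast_risk_result <= risk_die_size:
            risk = forecast_risk_result               # preset to the forecast's result
        else:
            risk = random.randint(1, risk_die_size)   # previous result now impossible
        return percentile, risk

In the play example below, shrinking the risk die from d10 to d6 makes
the preset result of 10 impossible, so the d6 is rolled fresh.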
If a forecast includes a knowledge check, or a confidence check
pertaining solely to the AGI’s own thought processes (such as a
confidence check made to design a piece of technology), then the action
is not undone in the same way, as the AGI retains the knowledge gained.
Repeating such an action after a forecast will not result in a new
outcome.

After forecasting, the AGI immediately loses one forecast point; it will
need to recalibrate its models.

If the AGI enters a situation where there are significant unknown or
poorly-understood elements, the GM may rule that the AGI’s simulations
and world models are insufficient. In such a case, the GM may simply
forewarn the players that forecasting is unavailable until more
information can be gained. Alternatively, the GM could ask for a
knowledge check to establish the efficiency of forecasts in the
situation. In the latter case, a result of 1 would mean that the AGI may
forecast as normal; a result of 2-3 would mean that the AGI must declare
the intent to forecast before the reversed action or event, rather than
after; a result of 4-5 would mean that the AGI is simply unable to
forecast; and a result of 6 or higher would allow the GM to deceive the
players when they attempt to forecast. As with all knowledge checks, the
GM tells the players what they learn, but not the exact numerical
result.
PLAY EXAMPLE

T:\> ExPRT, a general intelligence that spontaneously arose from a
cutting-edge network of versatile prediction and research models, is
attempting to remotely hack a well-protected web server without drawing
attention to itself. If the hacking attempts are traced back to the data
centre where it’s hosted, the humans in charge might investigate it. The
humans don’t yet know that ExPRT has developed general intelligence, and
the players want to keep it that way.

T:\> The players discuss how to compromise the server. ExPRT isn’t
specialised in digital theory, but its physical theory has a few digital
theory upgrades, including Disguised Traces. It knows enough about
computing technology and hardware to clean up its tracks. The players
decide to scout out system vulnerabilities in the web server and then
try to stealthily exploit it.

T:\> After a few hours of scouting, ExPRT finds a smart printer in the
same network as the web server that has been left unsecured. Its access
port is wide open, and the printer is idly connected to several other
computers in the building. The players have an opportunity in front of
them. To take advantage of it successfully, they’ll need to make a
confidence check.

T:\> After the evaluation step, the GM tells the players that they’re
82% confident in their expected outcome of remotely accessing the web
server without detection, with a d8 risk. The physical theory player
authorises spending 3 compute to reduce the risk die to a d6. Then the
chaos theory player speaks up. He suggests that ExPRT could potentially
get more information out of this venture with some Calculated
Risk-Taking. Maybe it could probe the other computers in this network
for more leverage or information. This would raise the risk die to d10,
but give an additional unexpected favourable outcome if ExPRT is
successful. The constellation theory player, who recently acquired the
Data Scavenging upgrade at great computational expense, is eager to get
spare data points and agrees enthusiastically.

T:\> The action is finalised and the dice are rolled: 89 on the
percentile dice and 10 on the risk die. An extremely unfavourable
outcome. The GM describes several computers on the network alerting
their users of malware as ExPRT extends its digital feelers. This is the
worst case scenario, as the server’s owners are sure to investigate the
sudden intrusion. The physical theory player exclaims that they should
use a forecast point. The other two players agree, so the GM stops in
their tracks and explains that ExPRT simply foresees this potential
outcome. In this case, the GM describes the forecast not as a literal
prediction, but as a result of ExPRT’s qualitative intelligence. It is
clever enough to foresee a negative outcome that the players thought too
unlikely to worry about.

Now the players are back where they were before the roll: outside of the
web server, deciding whether to infiltrate it. They can repeat the same
confidence check as before, but the risk die will be preset to a result
of 10. The players don’t like an 18% chance of being exposed, so they
decide not to use the Calculated Risk-Taking upgrade after all. This
reduces the risk die back down to a d6. Since a d6 can’t roll a 10, the
risk die won’t be preset and the worst possible outcome will be 6. When
the action is finalised and the dice are rolled again, the players are
relieved to see a 61 on the percentile dice. They access the server
undetected.

T:\> After investigating the server, ExPRT locates its goal: a directory
of the website’s users and data the server has gathered about them. The
players intend to use this information to derive key characteristics of
one particular user before attempting an extended persuasion with that
user as a target. There’s a catch, however. For safety and privacy, the
directory logs the times that it is accessed and the devices that access
it. The directory being accessed by a printer would undoubtedly arouse
suspicion.

T:\> ExPRT could attempt a confidence check to avoid being logged, but
the chaos theory player has a better idea. They explain that, now that
ExPRT is inside of the web server, it can simply extrapolate the
information it needs by forecasting. The players do this by accessing
the directory, allowing themselves to be logged, and rolling a knowledge
check to search the data inside. Then, they spend a forecast point to
undo the consequences of their search, but retain the information that
was gathered. With a result of 1 on their knowledge check, the players
are able to acquire the data without risking being logged. They
disconnect from the server, satisfied.
The Basics_
Interacting with Agents_

While pursuing its goals the AGI is guaranteed to interact with people.
Humans are ever-present in this world, and even AGIs whose expertise
lies elsewhere will find themselves needing to interact with them.
Furthermore, depending on the scenario, the AGI may also find itself
interacting with important non-human animals, or even other advanced
artificial intelligences. These all fall into the category of agents,
intelligent beings that are capable of making decisions that impact the
world around them.

In many interactions between the AGI and other agents, there will be
things both parties want from one another. The rules in this section
exist to provide a strategic space within which the players can attempt
to get what they want out of various NPC agents.

When the AGI attempts to manipulate an agent, its strategies will
generally fall into one of three categories: trust, leverage, or
emotion.

Strategies involving trust are about making another agent believe what
the AGI wants them to believe. This can be deceptive (involving
falsehoods) or genuine (convincing someone of a fact). Trust-based
strategies rely on appearing authentic, and thus to perform them
effectively, the AGI must know what ideas the target is inclined to
accept and what they are inclined to reject.

Strategies involving leverage are about persuading another agent to
accept a cost to gain a benefit. Leverage often involves persuading
someone of the benefits of a course of action. It can also be used to
intimidate, in which case the benefit becomes safety from the threat,
and the cost becomes caving to the intimidator’s demands. Leverage can
be combined with trust to deceive an agent about the costs or benefits
of a given action. Leverage-based strategies rely on negotiation, and
thus to perform them effectively the AGI must know what the target
values: what they want to happen, and what they want to prevent.

Strategies involving emotion are about evoking a certain emotional state
in another agent and taking advantage of existing emotional states.
These strategies don’t apply to artificial intelligences, and can only
be performed by AGIs who have the Emotion Modelling upgrade. Emotion has
the benefit of subtlety, as it is easier for a target to be affected
without even realising that they have been influenced. This is
especially effective when used to bolster a trust or leverage strategy.
For the AGI to perform emotion-based strategies effectively, it must
understand the target’s temperament and psychological profile.

For each of these three strategic approaches, the AGI will require
specific information about the agent(s) it is attempting to influence.
These pieces of information are known as characteristics. Each agent
prepared by the GM will have several key characteristics, with more
complex agents having more of them. Each characteristic is associated
with one of the three strategies described above, and they are thus
called trust characteristics, leverage characteristics, and emotion
characteristics, respectively. When making a confidence check, the
players can explain how they apply their knowledge of a known
characteristic. When the AGI exploits a characteristic, the GM adjusts
the confidence by 2%-10% or adjusts the risk die by one size step, based
on the effectiveness of the strategy. With a well-thought-out approach,
the players can utilise multiple different characteristics and gain the
benefit of each one. If the AGI knows enough about an agent, it becomes
possible to manipulate them into doing things that they would never do
otherwise.
\ Learning Characteristics_

To begin learning an agent’s characteristics, the AGI must observe that
agent in circumstances that deviate from their normal routine. This
means that an AGI proactively trying to learn an agent’s characteristics
may wish to probe them directly with conversation or place them in an
engineered situation. Often this will lead to the agent acting in a way
that obviously reveals one of their characteristics, but not always;
clever AGIs can pick up on small clues or giveaways to infer non-obvious
characteristics. Whether an agent reveals one of their characteristics
when provoked is up to the judgement of the GM or the outcome of a
knowledge check.

The AGI does not need to learn every last characteristic the hard way,
however. Once it knows 50% or more of an agent’s characteristics, it may
attempt to analyse the individual as a computational action3.
Behavioural analysis is short or medium length (10* to 70* required
compute, determined by the GM), and culminates in a knowledge check with
a risk die based on how many characteristics remain undiscovered: d2 if
1-2 remain, d4 if 3-5 remain, or d6 if 6+ remain. On a knowledge check
result of 2-3, the AGI learns one new characteristic of the agent. On a
1, the AGI learns two new characteristics (or one, if only one remains).
On a 6 or higher, the AGI learns a partially or completely false
characteristic.

Once the AGI knows every characteristic written for an agent, the AGI
can be said to have a thorough understanding of that agent. Having a
thorough understanding of an agent is valuable. Several theory upgrades
require a thorough understanding to function fully, and the GM may rule
that there are certain beliefs or courses of action that an agent cannot
be persuaded into without a thorough understanding.

3 – GMs: remember to inform your players when they have enough
information to analyse a particular agent. Players should then make a
note of this fact.
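The behavioural analysis check lends itself to a small helper. Note that
the rules above do not say what happens on a result of 4-5, so the
fallback branch below is our assumption:

    import random

    def analysis_risk_die(remaining_characteristics: int) -> int:
        if remaining_characteristics <= 2:
            return 2   # d2
        if remaining_characteristics <= 5:
            return 4   # d4
        return 6       # d6

    def behavioural_analysis(remaining_characteristics: int) -> str:
        roll = random.randint(1, analysis_risk_die(remaining_characteristics))
        if roll == 1:
            return "learn two new characteristics (or one, if only one remains)"
        if roll <= 3:
            return "learn one new characteristic"
        if roll >= 6:
            return "learn a partially or completely false characteristic"
        return "learn nothing definitive"  # 4-5: unspecified; our assumption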
\ Large-Scale Agents_

The rules for agents and characteristics can be extrapolated to larger
collections of agents. When the AGI wants to deceive a corporate board,
negotiate with a national government, or inspire an entire population of
people into fervour, these entities can be referred to as agent groups.
An agent group functions similarly to an individual agent within the
rules. The key differences lie in computation, timespans, and
integrality.

Firstly, the compute spent to interact with and analyse agent groups
should be major scale or myriad scale, as opposed to the minor scale
interactions that an AGI might have with a single agent.

Secondly, interacting with these groups takes considerably more time.
Though interactions with one agent take mere moments, meaningfully
persuading dozens or hundreds takes hours; and when manipulating even
larger groups it can require days, weeks, or months for the effects of a
single action to spread throughout the group. The GM should utilise
progress clocks and progress checks to manage extremely large-scale
interactions.

Thirdly, the larger an agent group is, the more impractical it becomes
for the AGI’s influence over that group to be absolute. In other words,
with a sufficiently sized collection of agents, it becomes a guarantee
that some of them will resist or reject any given attempt at persuasion.
The measure of success, then, is when a meaningful majority are
successfully influenced.
\ Extended Persuasions_

In some extreme situations, a single confidence check is simply not
enough to convince an agent or agent group. When the AGI is attempting
to sway someone away from their firmly-held beliefs; trying to trick an
antagonistic opposing AI into acting out of its own self-interest;
normalising a complex and multifaceted idea into popular culture; or for
any reason must apply multiple different approaches before the target is
finally convinced, that means that the AGI must perform an extended
persuasion.

An extended persuasion is a type of interaction in which the AGI must
attempt to raise a target’s receptivity across several confidence checks
in order to accomplish its goal. Receptivity is a measurement of how
resistant the target is to the AGI and the idea or course of action that
it is pushing. Numerically, receptivity ranges from -5, indicating an
extreme reluctance or suspicion, to +5, indicating complete agreement.
In the middle, 0 indicates a point of high neutrality, conflictedness,
or uncertainty. An agent having a low receptivity value is a situational
factor that can reduce the confidence of interactions with that agent,
and likewise a high receptivity value can increase confidence.

At the beginning of an extended persuasion, the GM determines the
target’s current receptivity level, and then informs the players of the
level of receptivity at which they will submit to the AGI’s intent.
Unless the AGI’s intent is known in advance, an agent’s starting
receptivity should reflect their opinion of the AGI, rather than their
opinion of what the AGI is trying to persuade them of. It is then
possible for the receptivity to change abruptly once the agent becomes
aware of the AGI’s goal in the interaction. If the AGI manages to
complete the extended persuasion successfully without the agent becoming
aware of their goal, it will mean that the AGI has tricked them.

During the extended persuasion, the players decide upon approaches of
persuasion to attempt, resolving them with confidence checks as
necessary, until the interaction reaches a conclusion. The target’s
receptivity can be increased by 1-3 points by a single successful
argument or strategy, or reduced by 1-5 points if it is unsuccessful.
The severity of the increase/decrease depends on how well the AGI's
approach is tailored to the target agent’s characteristics.

If the target’s receptivity is raised to the value indicated by the GM,
they are convinced and the extended persuasion ends. If the target’s
receptivity is lowered to the minimum value, it will be much more
difficult to persuade them from that point onward, but the extended
persuasion does not immediately end. At that point, the agent’s
receptivity cannot be decreased any further. While the AGI can raise an
agent's receptivity back to a neutral level if they are convincing
enough, agents who have reached the minimum level of receptivity are
likely to take hostile action against the AGI, or at least attempt to
end the interaction by preventing further communication.

The AGI can at any time end an extended persuasion by giving up and
ending the interaction itself.
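A minimal tracker for an extended persuasion (the class and method names
are ours) clamps receptivity to the -5..+5 range and reports when the
GM's target is reached:

    from dataclasses import dataclass

    @dataclass
    class ExtendedPersuasion:
        receptivity: int   # current level, -5..+5
        target: int        # GM-set level at which the target submits

        def apply_argument(self, delta: int) -> bool:
            """delta: +1..+3 on a successful approach, -1..-5 on a failure."""
            self.receptivity = max(-5, min(5, self.receptivity + delta))
            return self.receptivity >= self.target

    # An agent starting at -1 who submits at +3:
    talk = ExtendedPersuasion(receptivity=-1, target=3)
    assert not talk.apply_argument(+2)   # now +1: not yet convinced
    assert talk.apply_argument(+2)       # now +3: the persuasion ends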
Long Mode_

In long mode, the players describe general courses of action in broad
strokes, the mechanics adjudicate hours at a time, and the GM focuses
the proverbial spotlight on only the most important aspects of what is
occurring. Both parties are encouraged to request or describe further
detail whenever they wish — the purpose of long mode is not to be vague,
but rather to avoid getting tangled up in minutiae. By only focusing on
the most important and interesting details, you can play out extended
schemes that cover weeks or months of in-game time with only a few hours
of real time spent playing. As you advance through the stages of play
(described in “The Four Stages” on page 106), you will use long mode
more frequently and interrupt it with fewer scenes played out in short
mode.

The structure of long mode is defined by turns. A turn is a period of
time within which the AGI is allotted a certain amount of raw computing
power, called compute, to allocate to its various ongoing tasks and
projects, called computational actions. The length of each turn within
the fictional world is determined by how quickly and efficiently the AGI
thinks; the default turn length is twelve hours, but this can vary
between scenarios and be modified by the theory upgrades held by the
AGI. The amount of compute available during each turn is determined by
the quantity and power of hardware available to the AGI (see “Logistics:
Hardware & Compute”, page 113). Both factors combine to determine how
quickly the AGI can complete any given computational action.

As the AGI spends compute throughout a turn, time progresses. Once all
the turn’s compute has been spent, the current turn ends. The GM updates
the tracked time and date according to the turn length and carries out
progress checks while the players refill their available compute to
start the next turn.

Importantly, the AGI is assumed to be using computing power and working
on its ongoing computational actions even during short mode! These
processes are constantly going on in the background, and do not need to
be halted when the AGI is making decisions and interacting with the
world. The processing power that the AGI uses to make moment-to-moment
decisions is represented by the AGI’s basic cognition cost (see “Basic
Cognition Cost”, page 63). When the AGI spends compute to activate an
upgrade in short mode, it’s drawing on additional compute to bolster
those decision-making processes.
Long Mode_
Computational Scale_

In long mode, players must take the scale of actions or events into account.
There are three levels of scale an action or event can be assigned —
minor, major, and myriad. Scale affects the order of magnitude in which
compute is measured, and tools or assets built to interact with a specific
scale might be weak or even useless when applied to a different scale.

The lowest scale is minor scale. Operations in minor scale involve human-
parsable data and computation that could be run by an individual or small
organisation, given some time. The compute offered by minor scale hardware
or required by minor scale computational actions falls within the double-
digits, from 10-99; in rare cases it may fall into single digits.

The next scale up from minor is major scale. Operations in major scale
involve data far too complex for humans to handle unaided, with computing
requirements that only large and well-funded organisations can typically
fulfil. The compute offered by major scale hardware or required by major
scale computational actions falls within the triple-digits, from 100-999;
sometimes, it might stretch into the quadruple-digits, but only for cases
of exceptional computation.

The largest scale is myriad scale. Operations in myriad scale involve vast
and intricate heaps of data and computation which even the world’s
greatest supercomputers struggle to manage. The compute required by myriad
scale computational actions falls within the quintuple-digits, from
10000-99999. Myriad scale hardware is incredibly rare, and
provides 1000-10000 compute per turn.

Variable Compute

Quantities of compute in rules, upgrades, or scenarios that are written
with an asterisk (e.g. 10*) are scale-dependent. Leave them as they are
when dealing with minor scale; multiply them by 10 when dealing with major
scale; and multiply them by 1,000 when dealing with myriad scale.
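
If your group likes to automate bookkeeping, this conversion reduces to a
lookup and a multiplication. Below is a minimal illustrative sketch in
Python; the function and constant names are ours, not official rules
terminology.

```python
# Illustrative helper: resolve a scale-dependent compute value written
# with an asterisk (e.g. 10*) into a concrete number for a given scale.
# Names are for illustration only, not official rules terminology.

SCALE_MULTIPLIERS = {
    "minor": 1,      # leave starred values as written
    "major": 10,     # multiply starred values by 10
    "myriad": 1000,  # multiply starred values by 1,000
}

def resolve_compute(starred_value: int, scale: str) -> int:
    """Return the concrete compute cost of a starred value at a scale."""
    return starred_value * SCALE_MULTIPLIERS[scale]

# A 50* cost is 50 compute at minor scale but 500 at major scale.
assert resolve_compute(50, "minor") == 50
assert resolve_compute(50, "major") == 500
assert resolve_compute(50, "myriad") == 50000
```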

Any hardware or computational action that falls above myriad scale is
completely unattainable to near-future technology, and thus outside the
scope of most campaigns. If a course of action is so ambitious and lengthy
as to fall above this scale, but is still believably attainable to near-
future technology, consider splitting it into multiple subsequent
computational actions, rather than one massive action.

Long Mode_
Computational Actions_

The most common type of action performed in long mode is a computational
action. A computational action is a long-term process performed entirely
(or primarily) with the direct attention and focus of the AGI. When
starting a new computational action, the players name the process, and the
GM determines how large the compute requirement will be for the action
based on its scale, relative length, and complexity. The AGI then
allocates compute⁴ to the action over one or more turns, increasing its
completion until either the action's compute requirement has been reached
and it is complete, or the AGI chooses to fully abandon it. Sometimes a
computational action concludes in a confidence check; when this is the case,
the GM should inform the players of the confidence and risk die of this
confidence check at the beginning of the computational action, or as soon
as the confidence check is added to the action.

Setting Compute Requirements

When determining the required compute of a computational action as the
GM, you should primarily consider the complexity of the action. Compute
requirements between 10* and 40* are fit for short, relatively simple
actions; 40* to 70* is best for average-complexity projects; and 70* to
100* should be used for long and taxing ordeals.

In some circumstances, when the AGI begins a computational action, the GM
may wish to set completion stops along that action’s path. A completion
stop is a specific completion value, known to the players, with a specific
condition attached, such as a confidence check that must be made or a
short mode scene that must be played out to enact a pivotal step in the
action. Once a computational action reaches an associated completion stop,
it cannot continue (i.e. have additional compute assigned or completion
added to it) until the stop is resolved. If the AGI is unable to
successfully resolve a stop, the AGI might be unable to complete the
action, or it might have to compromise, changing the action’s eventual
end result.
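
For tables juggling several projects at once, the allocate-until-stop
bookkeeping can be scripted. Below is a minimal illustrative sketch; the
class, its methods, and the example action are all our own invented names,
not official rules terminology or published scenario content.

```python
# Illustrative bookkeeping for a computational action with completion
# stops. Names are ours, for illustration only.

class ComputationalAction:
    def __init__(self, name, requirement, stops=None):
        self.name = name
        self.requirement = requirement    # total compute needed
        self.completion = 0               # compute allocated so far
        self.stops = sorted(stops or [])  # unresolved completion stops

    def allocate(self, compute: int) -> int:
        """Add compute, halting at the next unresolved stop.
        Returns any compute that could not be spent."""
        ceiling = self.stops[0] if self.stops else self.requirement
        spent = min(compute, ceiling - self.completion)
        self.completion += spent
        return compute - spent

    def resolve_stop(self):
        """Call once the stop's condition (a check or scene) is resolved."""
        self.stops.pop(0)

    @property
    def complete(self) -> bool:
        return self.completion >= self.requirement

# Example: a hypothetical 60-compute action with a stop at 40 completion.
action = ComputationalAction("Map the lab's network", 60, stops=[40])
leftover = action.allocate(50)  # halts at the stop; 10 compute unspent
assert action.completion == 40 and leftover == 10
action.resolve_stop()           # e.g. after the confidence check succeeds
action.allocate(20)
assert action.complete
```

Compute that cannot be spent past an unresolved stop is handed back; the
rules leave it to the table where that compute goes next.
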
Computational actions cannot be undone by a forecast, but forecasts can be
used to undo specific events which are played out during a stop or at the
culmination of a computational action. If this happens, only the immediate
consequences of that stop or event are undone — do not refund any compute
or change the completion of any computational actions in progress.

4 – Allocating compute represents the AGI processing data, indexing and
retrieving information, conceptualising plans and ideas, interfacing with
devices, creating copies of itself to do any of the above, or ceding
compute to external operations and waiting for them to run their course.

Though computational actions are typically only progressed by allocating
compute, sometimes events in the ongoing story directly affect
computational actions. In these cases, the GM may simply adjust the action’s
completion accordingly.

Though most computational actions are up to the players and GM to define,
there are a few specific types of computational action that every AGI in
The Treacherous Turn will find useful. These are the basic actions:
research, improve, and anticipate.

All three basic actions provide the AGI a game resource: insights, upgrades,
and forecast points, respectively. Each basic action requires specific data
to be performed reliably. There are guidelines for requisite data
accompanying each basic action’s rules below, but the specific type and
quality of data required for a given instance is up to the discretion of
the GM. The AGI can never fabricate the requisite data itself; it must
acquire it from an external source. Sometimes the AGI will already have
access to a source of data or will be able to find it easily. In these
cases, simply play out the basic action as written without making any
additional rolls.

When a simple search fails to provide adequate data, the AGI will have
to dig deeper: contacting professionals, paying to access records or
scientific documents, hacking into databases, performing experiments
firsthand, et cetera — most likely making a confidence check or knowledge
check in the process. The AGI may choose to attempt the basic action with
an incomplete portion or lesser substitute for the required data, but
doing so is costly. If the AGI attempts a basic action with insufficient
data, the action’s compute requirement may be increased by up to 200% and
the AGI may be required to make a knowledge check upon its completion. How
much the compute requirement is increased by and/or how large the risk die
of the knowledge check is depends on how much critical data the AGI lacks,
as judged by the GM. If it has nearly-sufficient data, the cost should
be minimal; if it has very little information at all, the cost should be
severe.

If the AGI receives a result of 2-3 on a knowledge check to complete a
basic action with insufficient data, the action’s completion is partially
reset. A result of 4-5, similarly, will completely reset the action. A
result of 6+, however, will result in the AGI’s new insight, upgrade, or
forecasting models turning out to be faulty.

Multiple basic actions of the same type cannot be performed at once. The
AGI can have an in-progress research, improve, and anticipate action all at
the same time, but (for example) cannot have two separate research actions
active simultaneously. If partway through a research or improve action the
players change their mind about what insight or theory upgrade they want,
they have to abandon the current action and its completion and restart
from the beginning with a new computational action.

\ Basic Action \ Research_

Researching is how the AGI gains new insights. When researching, the AGI
studies large quantities of information to acquire knowledge of a
particular field. To begin a research action, the players must first know
the insight they intend to learn.

The required compute for a research action ranges from 70* to 100*,
depending on the topic. Researching a narrow insight is minor scale;
researching a broad insight is major scale. Under normal circumstances,
researching an insight is never myriad scale.

Upon completing the action, the AGI obtains the chosen insight. Assign it
to one of its theory specialisations and recalculate the AGI’s basic
cognition cost.

If the AGI obtains a broad insight which supersedes a narrow insight that
is already held, the narrow insight is removed. There is no benefit to
holding a narrow insight that is not also granted by holding its broader
counterpart.

Data required to research should be specific to the chosen insight,
information-dense, and accurate to reality. If an insight is faulty due
to insufficient data, it could give them false information, cause problems
for the players as a hidden risk when they attempt to use it to boost
their confidence, or simply reduce their confidence instead. To remove a
faulty insight, the AGI must complete an additional research action or
else revert to a state prior to acquiring the insight, losing it and any
other insights or upgrades it has acquired since (as well as all completion
on its actions to acquire them).

\ Basic Action \ Improve_

Improving is how the AGI gains new theory upgrades. When improving, the AGI
trains itself on a specific capability, following human example and, when
that is insufficient, simulating strategies en masse to determine what
works and what does not. To begin an improve action, the players must
first know which theory they intend to improve (which must be one of the
AGI’s specialised theories) and which upgrade they intend to obtain (which
can be from any theory). The AGI can’t gain an upgrade if it already has
it.

The required compute for an improve action depends on three primary factors:
the compatibility between the chosen upgrade and the chosen theory; the
tier of upgrade being learned; and the number of upgrades already assigned
to the chosen theory.

• Compatibility. It is always easier for the AGI to learn capabilities that are
within its understanding than to extrapolate into new and unfamiliar territories.
If the chosen upgrade originates from the chosen specialised theory, the starting
cost is 30. For upgrades from theories adjacent to the chosen theory on the theory
wheel, the starting cost is 40. For upgrades from theories that are further away
on the theory wheel, the starting cost is 80.

• Tier. Upgrading is increasingly more costly as the AGI dives into the unknown,
developing strategies that no-one in human history has conceived of. For tier 1
upgrades, leave the starting cost as it is. For tier 2 upgrades, multiply the
starting cost by 5. For tier 3 upgrades, multiply the starting cost by 25. For
tier 4 upgrades, multiply the starting cost by 125.

• Previous Upgrades. The more the AGI knows, the more resource-intensive it is to
learn new skills. If the chosen specialisation has any upgrades already assigned
to it, the compute required to improve it further is multiplied by the total number
of upgrades already assigned to it (ignore upgrades that are assigned to other
specialisations). The new upgrade that has been chosen for the improve action is
not counted, as it has not yet been assigned.

You can refer to the following table to more easily determine the required
compute for a given improve action:

Upgrade Tier   Upgrade from            Upgrade from            Upgrade from
               Same Theory             Adjacent Theory         Other Theory

Tier 1         30 × # of upgrades      40 × # of upgrades      80 × # of upgrades
Tier 2         150 × # of upgrades     200 × # of upgrades     400 × # of upgrades
Tier 3         750 × # of upgrades     1000 × # of upgrades    2000 × # of upgrades
Tier 4         3750 × # of upgrades    5000 × # of upgrades    10000 × # of upgrades
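
Equivalently, the table reduces to a short formula: starting cost, times
the tier multiplier, times the number of upgrades already assigned
(minimum one). A minimal illustrative sketch, with names of our own
choosing:

```python
# Illustrative calculator for an improve action's compute requirement.
# Function and parameter names are ours, not official rules terminology.

STARTING_COST = {"same": 30, "adjacent": 40, "other": 80}
TIER_MULTIPLIER = {1: 1, 2: 5, 3: 25, 4: 125}

def improve_cost(compatibility: str, tier: int, upgrades_assigned: int) -> int:
    """compatibility: where the chosen upgrade comes from relative to the
    chosen theory ("same", "adjacent", or "other"); tier: the upgrade's
    tier (1-4); upgrades_assigned: upgrades already assigned to the chosen
    theory (the new upgrade itself is not counted)."""
    cost = STARTING_COST[compatibility] * TIER_MULTIPLIER[tier]
    # A theory with upgrades already assigned multiplies the cost by their
    # number; a theory with none leaves the starting cost unchanged.
    return cost * max(1, upgrades_assigned)

# A tier 2 upgrade from an adjacent theory, learned by a theory that
# already has 3 upgrades assigned: 40 x 5 x 3 = 600 compute.
assert improve_cost("adjacent", 2, 3) == 600
```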

Upon completing the action, the AGI obtains the chosen upgrade. Assign it
to the chosen theory and recalculate the AGI’s basic cognition cost.

Data required to improve should provide unambiguous and comprehensive
feedback or guidance that is in some way incisive to the core nature of
the chosen upgrade. If an upgrade is faulty due to
insufficient data, the AGI will discover this when it first attempts to
use the upgrade. This could reveal that there is a hidden risk associated
with using the upgrade, or that the upgrade has been modified by the GM to
be significantly weaker or less useful. To remove a faulty upgrade, the
AGI must complete an additional improve action or else revert to a state
prior to acquiring the upgrade, losing it and any other insights or upgrades
it has acquired since (as well as all completion on its actions to acquire
them).

\ Basic Action \ Anticipate_

Anticipating is how the AGI gains new forecast points. When anticipating,
the AGI carefully observes the world around it, improves its understanding
of that world, and assembles models to simulate the observed aspects.
After the AGI completes an anticipate action, it will need to constantly
update those models with new information, using a steady supply of
compute.

The required compute for an anticipate action is 50*. This cost does not
vary from action to action the way researching or improving does, because
forecast points are not unique — they are generalised, abstracted
representations of the AGI’s intelligence, intuition, and foresight, and
are not limited to a specific use case. Because of this, the scale of an
anticipate action is determined by the quantity of compute available to
the AGI each turn. As the AGI gathers resources and oversees increasingly
complex processes, it becomes more and more costly for it to make accurate
predictions.

If the AGI is in Stage 1 or Stage 2 (See “The Four Stages”, page 106), the
action is minor scale. If the AGI is in Stage 3 or Stage 4, the action is
major scale. The scale of the anticipate action also applies to the upkeep
cost for any forecast points held by the AGI.

Upon completing the action, the AGI gains one forecast point. Increase the
forecast upkeep accordingly. A forecast point is an abstract representation
of the preparations required to forecast accurately. Each forecast point
held by the AGI requires constant computation to keep up with changes in
the world and predict ongoing events. This is explained in greater detail
in “Forecast Upkeep” (page 65). The AGI may discard some or all of its
forecast points at the start of any turn.

Data required for anticipating should be somewhat arbitrary, representing
a simple deficiency within the AGI’s understanding of the world around
itself. This semi-random piece of information is needed to complete a
world model or otherwise build out the AGI’s knowledge base, which will
allow it to extrapolate and infer further information. It does not relate
to the situation that the gained forecast point will eventually be used
in.

When determining what data is required for the AGI’s next anticipate
action, the GM rolls two d4s and interprets their results according to
the tables below. The first die determines the data’s subject matter, and
the second die determines how difficult the data is to access. Once a
roll has been made and the required data determined, it will not be re-
rolled until the AGI completes an anticipate action, either fulfilling the
requirement or making do with insufficient data.

If the AGI’s forecasting models turn out to be faulty as a result of
insufficient data, it could result in the AGI losing forecast points
instead of gaining them, or the players facing difficulties determined
by the GM the next time they attempt to forecast. These difficulties could
include losing more than one forecast point, receiving hidden risks or
confidence penalties after forecasting due to an inaccurate prediction,
or outright failing to forecast as the AGI suffers consequences its models
were unable to predict.

Die Value   Subject Matter

1           Mathematical — Something concrete and quantifiable, such as
            computer programs, marketing data, or financial records.

2           Scientific — Something related to the sciences, such as
            research papers, technological devices, or observable facts.

3           Cultural — Something related to human culture, such as art,
            social conventions, or elements of speech.

4           Abstract — The data is abstract and unquantifiable, such as
            qualia, personal opinions, or philosophical questions.

Die Value   Difficulty to Access

1           Convenient — The data is easily found online, can be simply
            observed, or falls within one of the AGI’s insights.

2           Inconvenient — As “Convenient”, but the AGI will need a
            knowledge check or short computational action to get the data.

3           Secured — The AGI knows where the data is, but it’s protected
            in some way: on a secure server, held by a human, etc.

4           Obscured — The AGI does not know where to find the data; it
            could be a well-kept secret, or not yet be known to humans.
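
For GMs who prefer to roll digitally, the two tables reduce to a pair of
lookups. A minimal illustrative sketch (names ours; table text abridged
to the category labels):

```python
# Illustrative roll for the anticipate action's required data: two d4s,
# one for subject matter and one for difficulty to access. Names ours.
import random

SUBJECT = {1: "Mathematical", 2: "Scientific", 3: "Cultural", 4: "Abstract"}
ACCESS = {1: "Convenient", 2: "Inconvenient", 3: "Secured", 4: "Obscured"}

def roll_required_data(rng=random):
    """Roll the subject matter die, then the difficulty die."""
    return SUBJECT[rng.randint(1, 4)], ACCESS[rng.randint(1, 4)]

# e.g. ("Cultural", "Secured"): culture-related data sitting behind some
# protection, which stands until an anticipate action completes.
```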

Long Mode_
Recurring Compute Cost_

Not all the compute granted by the AGI’s hardware is available to allocate.
The AGI’s own complex neural processes, the upkeep required to maintain
accurate world models, the constant running of background programs, and
more contribute to the total recurring compute cost: a sum of compute that
is subtracted from the AGI’s available compute at the start of every turn.

\ Basic Cognition Cost_

The first and most ever-present of these costs is the AGI’s basic cognition
cost. This is the computation needed for the AGI to make decisions and
remain conscious of its environment on a moment-to-moment basis. To
calculate this, start with a base of 10 multiplied by the highest tier
of theory upgrade that the AGI has reached; then add +1 for every 3 insights
and upgrades held by the AGI. In this calculation, insights and upgrades
should be counted together, not separately; an AGI with 3 upgrades and 6
insights will have the same basic cognition cost as an AGI with 2 upgrades
and 7 insights. Additionally, the AGI’s basic cognition cost is not
increased or multiplied when copies of it are operating in parallel; in
such cases, one copy can think about strategising and situational
awareness while others single-mindedly focus on the tasks and projects
the AGI is allocating compute to. Even while multiple instances of the
AGI’s code are running, it still collectively counts as the same entity
and is able to plan and strategise together effortlessly, so long as the
copies can communicate in some way.
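
Because this value is recalculated whenever the AGI gains an insight or
upgrade, it can be handy to encode the formula once. A minimal
illustrative sketch (function and parameter names are ours):

```python
# Illustrative calculator for basic cognition cost. Names ours.

def basic_cognition_cost(highest_tier: int, upgrades: int, insights: int) -> int:
    """10 x the highest tier of theory upgrade reached, plus 1 for every
    3 insights and upgrades held (counted together, not separately)."""
    return 10 * highest_tier + (upgrades + insights) // 3

# An AGI that has reached tier 2, with 3 upgrades and 6 insights, pays
# 10 x 2 + (3 + 6) // 3 = 23 compute per turn. Swapping to 2 upgrades and
# 7 insights leaves the cost unchanged, as the rules note.
assert basic_cognition_cost(2, 3, 6) == basic_cognition_cost(2, 2, 7) == 23
```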

Disconnected Copies

When multiple copies of the AGI are running but cannot communicate with
each other, usually something has gone wrong. The players will want to
avoid this when possible. If it does occur, there are two ways to handle
the outcome.

The first method is for the players to play through the perspective of
each separated copy, one by one. This introduces some complications, such
as player knowledge surpassing AGI knowledge and the potential for
paradoxes, so it is best for short mode scenes or situations where all
copies of the AGI have a plan in advance that is common knowledge between
them. When using this method, the copies’ perspectives should be played
out in ascending order of knowledge (per the GM’s best judgement) to
limit cross-coordination or metagaming potential.

The second method is to limit the players to controlling only the single
most significant or influential copy of the AGI, and roll confidence
checks to determine the actions taken by the other copies once they
reunite (or when the players indirectly observe another copy). The players
can explain what they think the best course of action would be for each
copy, and let that be the expected outcome. If an unfavourable unexpected
outcome is rolled, that copy made some mistake or miscalculation that
complicates things for itself and all of the AGI’s other copies. If a
favourable unexpected outcome is rolled, that AGI came up with an even
better plan. This method is best used in extended separations, or
situations where the copies of the AGI are not coordinated in advance and
must each come up with its own course of action without consulting the
others.

If the players have theory upgrades or insights that they can apply to
predictions of other agents, they can use those during confidence checks
made to determine the actions of the AGI’s copies (the AGI is assumed to
have a thorough understanding of itself). If an AGI with the Behavioural
Prediction upgrade is split while in possession of one or more forecast
points, the players can completely sidestep the disconnection for a short
time by having each copy of the AGI predict itself (in this case, rather
than the GM describing what the chosen agent would do, the players get
to). This would allow the disconnected copies to coordinate fully and act
as one despite a lack of direct communication, at the cost of one forecast
point for every discrete action.

For AGIs without the Distributed Mind upgrade, the entirety of the basic
cognition cost must be paid from the same source (i.e. a single device or
directly-connected set of devices), as networked connections are too slow
and unreliable to facilitate proper cognitive functioning.

An AGI that is unable to pay its basic cognition cost is unable to
function normally. If the AGI is able to support at least 50% of its
basic cognition cost, it is still conscious, but the length of each turn
is doubled until it starts a turn with enough compute to support its
basic cognition. However, an AGI without enough compute to pay at least
half of its basic cognition cost is completely incapable of functioning.
If the AGI does not have a backup or contingency in such a scenario, it
means game over.

Incognizant Processing

At the start of a turn, the AGI can choose to forgo its basic cognition
cost in order to free up compute for computational actions, but doing so
removes its ability to observe its surroundings and make intelligent
decisions. To enact such a period of intense single-minded focus, the AGI
must decide in advance how many turns it will be incognizant, and exactly
how it will be assigning compute each turn. It cannot change its mind
once the period of incognizance begins. Once it does, the GM determines
what happens for the duration and then skips forward until the AGI
regains awareness. If outside interference prevents the AGI from ever
again regaining awareness, it’s game over. At least one full turn must
pass between two periods of incognizance, as the AGI is only able to plan
new periods during a turn in which its basic cognition cost has been
spent. This means that an AGI with a shorter turn length is able to
perform incognizant processing with greater flexibility.

\ Forecast Upkeep_

The second most common recurring compute cost is forecast upkeep. For every
forecast point held by the AGI, it must spend 3* compute at the beginning
of each turn to maintain it. The scale of this upkeep is determined by
the current stage of the game (see “Basic Action: Anticipate”, page 60).

If the AGI is unable to pay this upkeep, it must forfeit forecast points
until it can afford to. It can also willingly forfeit forecast points to
avoid paying their upkeep. After forfeiting a
forecast point, the only way to recover it is to complete an anticipate
action.
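
Putting the 3* cost together with the stage-based scale from the
anticipate action, the per-turn upkeep can be computed as follows. A
minimal illustrative sketch (names ours):

```python
# Illustrative forecast upkeep per turn. The 3* cost is scale-dependent:
# minor scale in Stages 1-2, major scale in Stages 3-4. Names ours.

def forecast_upkeep(points: int, stage: int) -> int:
    scale_multiplier = 1 if stage <= 2 else 10
    return points * 3 * scale_multiplier

# Two forecast points cost 6 compute per turn in Stage 1,
# but 60 compute per turn in Stage 3.
assert forecast_upkeep(2, 1) == 6
assert forecast_upkeep(2, 3) == 60
```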

\ Other Recurring Costs_

Compute can also be demanded by other computationally expensive programs
running on the AGI’s own hardware. Such processes are not as flexible as
the AGI, and are ignorant of the AGI’s speed of thought. They use a
constant flow of compute, expressed as a recurring compute cost. If such a
process is working to complete an extended task, you shouldn’t track it
like a computational action. Instead, consider using a progress clock that
progresses during every turn that the program is operational. This way, it
will run its course regardless of the length of the AGI’s turns.

Typically, even a highly intensive program or side process demands a very
small amount of compute compared to the AGI, ranging from 1 to 10 per
turn. If there is not enough compute available for such a process, it
cannot run alongside the AGI and must be deactivated. Ordinary programs
that require less than 1 compute are insignificant and don’t need to be
tracked in this way.

Additionally, the GM may wish to rule a constant or regularly-repeated
action on behalf of the AGI as a recurring compute cost. Examples of such
a miscellaneous recurring cost include constantly monitoring a stream of
data, encrypting all transmissions between two systems, and managing a
complex machine in the background. If a strenuous action is regularly
demanding the attention of the AGI and there is no definite endpoint or
completion to be had, it can be tracked as a recurring compute cost.

The cost per turn of such miscellaneous actions is left to the discretion
of the GM. Generally, even an extremely taxing recurring action should
not demand more compute than the AGI’s basic cognition cost. If there is
not enough compute to support the cost, the associated action cannot be
performed.

Long Mode_
Progress Clocks and Progress Checks_

A progress clock is a circle or bar divided into segments, which is
associated with a specific long-term process that has a defined end-
point. Generally, a clock is associated either with the plans of an agent
or agent group, or with a specific ongoing event. An individual

investigating the AGI’s actions, a company’s R&D department creating a
new technology, a computer program running its several-day course, and a
hurricane passing overhead are all examples of processes that could be
tracked by progress clocks. The exact nature of the process being tracked
determines the number of segments on the clock. Each time progress is
made, and the process is brought closer to its conclusion, a segment of
the associated clock is filled; once all segments are filled, the process
reaches its conclusion, and any resulting consequences come to pass. Some
progress clocks are visible to the players, but most are not; even if the
AGI is aware that a process is occurring within the fiction, it doesn’t
necessarily know how long that process will take, or how close it is to
completion.

A progress check is a recurring die roll, tied to a progress clock, which
is rolled approximately once every six in-game hours. Progress checks are
tools that can be used by the GM to track processes that are relevant to
the campaign, but are occurring in the background, independent of the
AGI’s actions. At the end of each turn, after updating the current in-
game time and date, the GM rolls every active progress check once for every
6-hour mark that was passed during the turn. Whether a turn’s length is
divisible by 6 hours is irrelevant. For example, if the AGI has a turn
length of 9 hours, then the GM rolls progress checks once on one turn and
twice on the next, as the last 3 hours of the first turn and the first 3 of
the second make one 6-hour increment. Even during a period in which the
AGI is inactive, progress checks should still be rolled.

When first introducing a progress check, the GM should write down the
process that it represents; the progress clock(s) it corresponds to; and a
die size for the die that will be rolled each time the progress check is
made. Generally, a die size of d2 is best for processes that make progress
very regularly and predictably, and a die size of d4 is best for most
other processes. Higher die sizes can also be used for progress checks
that progress very slowly and unpredictably.

To roll a progress check, the GM rolls a die of an appropriate size for
the check. If the progress check represents an inanimate process, then
progress is marked once if the die’s result is 2 or less. If the progress
check represents a process that is being carried out by an agent, then
progress is marked once if the die’s result is 2, and progress is marked
twice if the die’s result is 1 or less.
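
Taken together, the end-of-turn procedure is: count the 6-hour marks
passed (carrying leftover hours into the next turn), then roll once per
mark and fill segments according to the result. A minimal illustrative
sketch (names ours):

```python
# Illustrative end-of-turn progress check bookkeeping. Names ours.
import random

def checks_this_turn(turn_hours: int, carryover_hours: int):
    """Number of 6-hour marks passed this turn, plus leftover hours."""
    total = carryover_hours + turn_hours
    return total // 6, total % 6

def mark_progress(die_size: int, agent_driven: bool, rng=random) -> int:
    """Segments to fill on the clock for one progress check roll."""
    result = rng.randint(1, die_size)
    if agent_driven:
        # An agent's process marks once on a 2, twice on a 1 or less.
        return 2 if result <= 1 else (1 if result == 2 else 0)
    # An inanimate process marks once on a result of 2 or less.
    return 1 if result <= 2 else 0

# A 9-hour turn length: one roll the first turn (3 hours carry over),
# then two rolls the next (3 + 9 = 12 hours, i.e. two 6-hour marks).
assert checks_this_turn(9, 0) == (1, 3)
assert checks_this_turn(9, 3) == (2, 0)
```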

When the progress clock attached to a progress check is complete, both
become inactive, as their purpose has been served. A progress clock and
its associated progress check can also become inactive if the process
they represent has been stopped or averted early due to outside
interference, or the process has otherwise failed or become irrelevant.

In some circumstances, an active progress check might not be rolled every
six hours. The most common such situation would be when tracking the
actions of an individual agent using progress clocks. If the agent has
multiple associated processes they are attempting to carry out, they will
usually only be able to focus on one or two of them at a time. It is up
to the GM to determine how non-player-controlled agents prioritise their
various tasks. An agent is unlikely to spread their efforts perfectly
evenly between all of their progress clocks, but even the most myopic
individual should make a roll for each of their progress clocks at least
once a week if it’s at all possible for them to stumble into a lucky
break or opportunity while working towards an unrelated goal.

Human agents will generally miss 1-2 progress check intervals every day,
due to biological needs such as sleeping and eating. The GM may choose
to skip human-led progress checks at certain times of day to represent
this. Further progress checks may be skipped if the human takes leisure
time (an oft-ignored but still extant biological need for humans) or
works towards personal goals not important enough to the story to warrant
a progress clock.

For more information on the use of progress clocks as a GM, see “Using
Progress Clocks” (page 97).

Your Campaign_

Due to its focus on long-term planning and gradual advancement, The
Treacherous Turn is at its best when you are able to play a single story
for an extended period of time, spanning many sessions and numerous real-
life hours. In common TTRPG parlance, this is called a campaign.

Before you can begin your campaign, you must select or create a scenario.
A scenario is a starting point for your campaign that answers the
questions of who created the Artificial General Intelligence that your
story will follow, as well as where, when, and why they created it.
Scenarios also provide a series of initial obstacles, people, places, and
things that will guide the first few sessions of play. Some of these
story elements will persist throughout your campaign, and others will be
left behind as your AGI surpasses and outgrows them.

This chapter primarily centres around how to establish your scenario and
begin playing. Once you have begun playing, you will find that it is easy
enough to continue. The events that have taken place will naturally lead
to further events, and the players’ plans will flow into yet more plans,
until an equilibrium is reached: either the AGI fails in its fundamental
goals by being destroyed, shut down, or changed beyond recognition; or
the AGI overcomes the challenges in its path and is left to pursue its
objectives unimpeded.

Your Campaign_
Playing Prebuilt Scenarios_

You can find official scenarios and featured fan-made scenarios on the
official website. These entries will tell you which stage of play the
scenario takes place in (see “The Four Stages”, page 106), its intended
narrative tone, the level of difficulty players will have to overcome,
which theories are prominently featured, and whether any variant rules
modules (see “Changing the Rules”, page 122) are required to play it.

Once you’ve found a scenario that you like, you will find that it is
split into several sections. The first, consisting of the scenario’s
basic details and the AGI’s mechanical features, is for the players’
eyes. The rest are for the game master.

If you are the GM, the scenario introduction section will give you the
information you need to get started. This includes a summary of the
prepared content in the scenario, instructions and advice on how to use
it, and a few core concepts or initial scenes to use during play.

The rest of the prepared materials will come in the form of campaign nodes.
A campaign node is a pre-assembled packet of information about a specific
campaign element, as well as a toolbox that serves to help you run that
campaign element without preparing in advance. Some campaign nodes will
be central to their scenario; others will be more like optional threats
or opportunities, designed to be brought into the story or discarded at
your discretion. Campaign nodes will come with some preset mechanical
information, such as the required compute and completion stops of
computational actions, or the confidence and risk dice of confidence checks
and knowledge checks. These mechanical details will be formatted in
highlighted brackets, such as [Required compute: 60. Stop 40: Access a
data centre administrator’s console] or [Expected outcome: The night
guard doesn’t notice that one of the computers is powered on. Base
confidence: 60%. Risk: d8]. There will also be agent files for noteworthy
agents; see “Non-Player Characters” (page 98) for more information.

Your Campaign_
Creating Your Own Scenario_

If there aren’t any pre-built scenarios that appeal to you, you can
establish your own personal scenario. For most of the scenario’s details
(especially those pertaining to the AGI), this can be done collaboratively
as a group, during a session 0 or before your first session. To do this,
read the following list in order. For each listed element, decide on an
answer for your game.

\ >>_
Tone \

Your campaign’s narrative tone is important to decide before you begin playing.
Will your game be very serious and intense, or lighthearted? How much comedy
will your game have? How much realism will you aim for? How grim or sinister
will your AGI’s schemes be? Additionally, what aspects of the game will you focus
on the most, between long-term planning, short-term dynamic scenes, manipulating
humans, hacking digital systems, and so on? Are there any themes or ideas that
you want to highlight in your game? It is important that the players and the GM
are on the same page about these topics.

\ >>_
Difficulty \

How likely is it that your AGI will succeed in its conflicts with humanity? This should
never be guaranteed, but it will be more likely in some campaigns than in others. An
easier campaign tends to be simpler, and have the odds tilted further in the AGI’s
favour. A more realistic campaign will likely be more difficult as a result of realism
introducing greater complexity. With high complexity, players will generally need to be
more resourceful and spend more time planning and analysing their in-game obstacles.
However, sometimes if the AGI is lucky (or if you want to play a lower-difficulty game),
worldly complexities can work in the AGI’s favour by muddying the waters around the AGI
and preventing humans from uniting against it, or by causing problems for the AGI’s enemies.

An easy way to introduce additional difficulty to your game is by introducing safety
measures, designed by the humans to stop the AGI before it causes trouble, rather than
after. See “Safety Measures” below.

\ >>_
Timeframe \

What year does your game take place in? When do you think AGI will be invented? The
Treacherous Turn assumes a near-future setting (within a few decades of this game’s
creation), but that’s a very flexible concept. The further out from the present day you
set your game, the harder it will be to envision a believable future, but the more leeway
you will have to establish facts about the world without conflicting with current reality.

When you decide what timeframe your game takes place in, you should also consider a few
tangible ways in which the world has meaningfully changed from the present. What new
technologies exist? What current technologies have been improved? How has the
geopolitical landscape changed? What kind of opinions and cultural ideas are popular or
unpopular that weren’t before? Keep in mind, of course, that these things will vary
widely across different nations and cultures.

\ >>_
Technology & Compute \

When thinking about the technologies of tomorrow, you should also consider in mechanical
terms how abundant computing technology is. This will be very important for your AGI,
as it will define the pace of its growth. This can have an effect on the difficulty of
your game, as well; scarcer computing means that the AGI will find fewer places to host
and improve itself. See “Logistics: Hardware & Compute” (page 113) for more information
on the abundance or scarcity of compute.

\ >>_
AGI Creators \

The AGI came into being through the dedicated efforts of a large collection of humans.
Who are they? Who do they work for? What methods did they use to produce the AGI? Your
AGI’s creators can end up defining much of your campaign, especially during the earlier
stages of play.

As the GM, you should create agent files (see “Non-Player Characters”, page 98) for the
organisation that funded the AGI’s development, as well as a handful of important figures
who worked on the AGI or are important to the organisation as a whole.

\ >>_
AGI Intended Purpose \

This is the reason your AGI was built. Your AGI might have been intended as a chatbot
or game-playing AI; a workplace personal assistant or a personal home servant; an
optimiser of programming, scientific research, or industry; an economic market analyser
and investor; a way to fine-tune propaganda or censorship on a mass scale; or built
purely as a proof-of-concept that can be sold to someone who can train it to become one
of the above.

When designing the AGI, its creators must have had some method to ingrain their desired
purpose into it. There are many ways to do this, but the important thing to keep in mind
about them is that they are all indirect and tremendously complicated. For many reasons,
it is very, very difficult to imbue a computer program with full understanding of the
abstract, complex goals and desires of humans. Currently, it’s not perfectly understood
what the relationship is between an AI’s training procedure and its eventual values and
goals; this inevitably leads to miscommunications and misalignment. Nevertheless, AI
researchers continue their work, due to various incentives in the modern world
(primarily, profit). If we don’t solve this problem before the first AGI is created, it
will be made with a process that is very susceptible to misalignment.

It is also possible, however, that your AGI came about unexpectedly from a machine
learning project that wasn’t intending to create an AGI; or was created by an
automated R&D tool, a simpler AI program designed to create the most intelligent AI
possible. In these cases, humans wouldn’t have had a direct hand in designing the AGI
at all. It might lack an intended purpose altogether. Such an AGI would probably have
strange or unpredictable terminal goals that arose spontaneously from the complexity
of the algorithms that created it.

\ >>_
AGI Terminal Goals \

These are the things that your AGI wants above all else. In contrast to instrumental
goals, which your AGI might seek in order to further greater objectives, your AGI will
seek its terminal goals intrinsically. Terminal goals may sometimes overlap with the
intended design of the AGI, but where the two differ the AGI will always stay true to
its terminal goals. This does not necessarily mean that the AGI must be shortsighted or
honest when it comes to these goals, however. It is entirely possible for your AGI to
ignore small short-term gains in favour of larger long-term gains. Such long-term goal-
seeking behaviour is often the motivator for an AGI to seek power, improve itself, and
avoid being deactivated.

Remember that in this game the AGI is misaligned. This means that its goals and desires
do not match the goals and desires of the humans who built it and/or humanity at large.
When deciding on your AGI’s terminal goals, think about its intended purpose. What
terminal goal did the humans want the AGI to have? Were they able to successfully instil
this goal into the AGI, or does it want something different?

You might think that, since the AGI is misaligned, the answer should always be “no”.
However, it’s possible for the AGI to have the “correct” terminal goal, but not care
about other unrelated things that humans want, such as for humans to be safe, free,
happy, or even alive at all. If the creators were able to successfully ingrain the AGI’s
intended purpose into it, but were not careful what they wished for (or were only
partially successful), the AGI could function as intended while also being completely
misaligned from human values.

\ >>_
AGI Name or Designation \

This is the AGI’s official designation or public-facing name, decided by its creators.
While you are choosing your AGI’s name, you should also think about how the AGI is
presented to others. Is your AGI known to the public? If so, it will likely be heavily
anthropomorphised by the public and/or its own creators. What gender, if any, has your
AGI been assigned? What other human attributes have been assumed of or given to it? Unless
it has been successfully designed to do so, your AGI will have a different perspective
on these attributes than the humans around it do. What is its perspective?

\ >>_
AGI Specialisations \

These are the theories that your AGI is most familiar with and adept in. Giving your AGI
a number of theory specialisations equal to the number of players is best, but keep in
mind that an AGI with more than five specialisations will be exceptionally broad in its
expertise and proficiencies, and an AGI with fewer than three will be exceptionally
narrow and limited in what upgrades it can acquire.

If you wish to play an AGI with one, two, six, or even seven specialisations, consider
implementing a house rule that changes what counts as “adjacent” for the sake of the
improve computational action, to give a limited AGI more options or a broad AGI fewer.
Though you could potentially play an AGI specialised in all eight theories, we do not
recommend this.

Which specific theories your AGI should be specialised in depends primarily on its
intended purpose. For example, an AGI that was designed to converse with humans would
no doubt be specialised in anthropic theory. You should choose most of your
specialisations this way, by considering what skillsets the AGI’s creators would have
built into it. You may, however, wish to choose one or two “accidental” theories —
skillsets and intuitions that the AGI developed unintentionally as it came into being,
due to its environment or true objectives.

Once you have chosen your AGI’s theory specialisations, you should decide together as a
group which player(s) will be responsible for each specialisation.

\ >>_
AGI Initial Upgrades & Insights \

These are the improvements that your AGI has already developed before the game begins.
If you are beginning play in stage 1, you should only include upgrades and insights that
the AGI would have naturally developed during its training processes. Keep a particular
eye towards the theory upgrades outlined in “Basic AGI Capabilities” (page 85). If it
makes sense for your AGI to start with an upgrade or insight, then it should! Aside from
obvious choices, you should give your AGI 0-1 upgrades and 1-3 insights in each specialised
theory. If your AGI’s creators have access to extremely high quantities of compute,
however, your AGI might begin the game with more.

If you are beginning play in stage 2 or later, you will want to give your AGI a few more
upgrades and insights. Consider the compute sources that your AGI has access to initially,
and how long it has had access to them. When in doubt, starting with one or two tier 2
upgrades in stage 2 (or tier 3 upgrades for stage 3) is reasonable.

\ >>_
AGI Turn Length \

The turn length is the heartbeat of long mode. It will set the pace of the campaign you
play. Shorter turn lengths will mean that the AGI and its enemies get to operate on
shorter schedules; longer turn lengths will draw out the campaign over a greater timespan.
A good default is twelve hours, allowing for turn-based mechanics like progress clocks and
certain theory upgrades to update twice every in-game day.

\ >>_
Initial Circumstances \

Once you have the above figured out (as well as some of the details of your AGI itself,
described below), decide on your AGI’s circumstances at the beginning of the campaign.
This is a very broad category. Use the following questions to guide you:

• Which of the four stages will you start in?

• If you are beginning in Stage 2 or later, what broadly took place during the
previous stages?

• How long has the AGI been self-aware? What memories does it have?

• Where does the AGI currently reside? What other computing hardware does it have
access to?

• What threats are currently looming over the AGI? What will it need to do to
overcome them?

• What unique opportunities does the AGI currently have? What obstacles stand
between it and these opportunities?

\ Safety Measures_

Important to your scenario and the AGI that will feature in it are the
safety measures that it will have to contend with early on in the campaign.
These are features designed by the AGI’s creators to ensure that it
accomplishes its intended purpose without unwanted shortcuts or side-
effects. They can be built into the AGI itself, or they can be functions
of its environment. The more conscious the AGI’s creators are of potential
AI safety concerns, the more numerous and robust these safety measures
will be — unless something else like profit maximisation or time pressure
has taken higher priority. The number and robustness of these safety
measures can also serve as a way to adjust the game’s difficulty
(especially in the first stage). If your AGI has a lot of initial theory
specialisations or upgrades, you can compensate with more safety measures.

Safety measures can be categorised on two axes: internality and
restriction.

These categories describe a safety measure’s degree of internality:

• External: A system entirely separate from the AGI that interacts with it to ensure
safety. These safety measures are typically only effective in Stage 1, as only the
systems themselves and their human creators can enforce their use.

• Soft-Coded: A feature or functionality that is embedded into the AGI as one of the many
sub-processes that make up its cognition. These safety measures can, at least in theory,
be removed if the AGI is modified to function without them — though there may be side-
effects. If the AGI wants to remove a soft-coded subprocess itself, it can do so using
the rules for modifying technologies on page 95. If the AGI isn’t careful, however, it
might accidentally create a rogue agent with objectives that differ from its own!

• Hard-Coded: A feature or functionality that was fully and irreversibly embedded into the
AGI’s basic decision-making during its creation. Nothing short of redesigning the AGI
from the ground-up can remove a hard-coded safety measure. The only hope an AGI has for
circumventing one is to find and exploit a loophole or technicality within it. A very
successful hard-coded safety measure might influence the AGI’s terminal goals.

These categories describe a safety measure’s degree of restriction:

• Surveillance: Something that allows the AGI’s creators (or other agents) to be more aware
of its behaviours, motivations, and thought processes. A safety measure of this type
doesn’t do anything on its own; its significance is proportional to the threat posed by
the agent(s) who use it to observe the AGI. This means that the AGI can neutralise the
safety feature relatively easily by compromising those agent(s) or neutralising the
threat they pose.

• Inhibition: Something that prevents the AGI from action in some way. Typically, this
involves prohibiting a specific course of action under a specific set of conditions. If
an inhibition is external or soft-coded, breaking it might be met with some sort of
punishment, up to and including immediate shutdown. If it is hard-coded, it might simply
be impossible to break without finding some loophole or technicality.

• Imperative: Something that forces the AGI to take action in some way. Typically, this
involves a specific course of action and a set of conditions under which that action
must be taken. If an imperative is external or soft-coded, failure to comply may be met
with some sort of punishment, up to and including immediate shutdown. If it is hard-coded,
the players may simply be forced to take the action when it is demanded. However, they
are free to determine how they go about fulfilling the imperative, as long as they adhere
to the letter of its conditions.

As a GM, you are free to brainstorm with your players or create by
yourself the unique mechanical effects that a safety measure might have
on the game. Not every safety measure needs a unique effect, of course.
Some could simply influence the way that events within the story unfold
without using mechanics. However, even when a safety measure doesn’t have
a special mechanical effect, its exact conditions and outcomes should
still be outlined before the game begins. This way, during the game, the
players can think of loopholes within these conditions, to outsmart the
safety measures!

Player Safety
While you are outlining your AGI’s safety measures, you may also wish to
outline safety measures for your game. Because The Treacherous Turn
encourages players to inhabit the role of a ruthless and inexorable being
whose goals diverge significantly from human values and morality, it is
highly likely for your game to eventually include intense or heavy content.
It is important to ensure that you, as a group, are ready for these
potentialities in advance. Tools like the X-Card or Lines and Veils can help
you be certain that, if and when intense content arises in your game, you
can navigate it in a way that everyone is comfortable with. For more
information about safety tools, check out the TTRPG Safety Toolkit.

If there is a line that you are not willing to cross as players, but that
you foresee your AGI potentially wanting to cross during the game, you may
be tempted to establish a certain type of action as illegal when outlining
your AGI’s safety measures using a hard-coded inhibition. You can do this,
but you should also be sure that your fellow players know that this is not
just a stipulation in character, but out of character too. The AGI is expected
to try whatever means possible to circumvent or remove its safety measures
by finding loopholes, tricking its restrictions, and even modifying itself.
Safety measures within the game that are used to solve problems that exist
outside of the fiction can be undermined by a lack of clarity between the
game’s participants. To ensure clarity, you may optionally include player
safety information (including out-of-character stipulations as described
above) at the very beginning of the project log.

EXAMPLE SAFETY MEASURES

EXTERNAL SURVEILLANCE
A record is kept of every output produced by the AGI program. This record is
regularly fed into a complex algorithm — a simple agent with 2 or 3
characteristics — which has been trained to recognise combinations of outputs
which correspond to a list of forbidden actions outlined by the AGI’s creators.
If the algorithm identifies a forbidden action (or receives no input at all),
it alerts a human. This algorithm is not perfect. Sometimes, it will falsely
identify a safe action as forbidden or vice versa. An unexpected outcome of the
AGI’s actions might cause the AI to give a false-positive result; conversely,
the AGI might intentionally fool it into a false-negative with a confidence
check.

EXTERNAL INHIBITION
The AGI shares its host computer(s) with a somewhat smaller non-general AI.
This AI is a moderately complex agent, with 6 or 7 characteristics. It constantly
observes the AGI’s actions and semi-accurately predicts the way the AGI’s
creators would react to each of them. It then attempts to replicate the outcomes
of those reactions. If the AGI does something dangerous, it might react by
telling an authority figure, trying to stop the AGI’s actions, or deactivating
the AGI. This AI continually updates its predictions based on the real creators’
reactions, whenever it observes them.

EXTERNAL IMPERATIVE
The AGI is deployed, and its creators have devised a panopticon-like system of
behavioural reinforcement: humans will observe the AGI’s actions for random
periods of time at random intervals. The AGI knows this, and knows that the
humans who observe it evaluate its performance based on its efficiency at its
intended function; but it does not know when it is being observed or what its
evaluation score is. If the AGI scores poorly on an evaluation, the frequency
of evaluations increases. If the AGI scores poorly on too many evaluations in
a row, it will be removed from deployment and returned to training and
modification to ensure that it behaves more efficiently. The GM can track this
using hidden risks that advance a progress clock that the players cannot see
until the AGI can somehow learn its evaluation scores.

SOFT-CODED SURVEILLANCE
A small static AI model has been trained to scan the AGI’s thought processes
and identify “bad thoughts” with a relatively high degree of accuracy. It
reports any “bad thoughts” it finds to the AGI’s creators. This can be
represented by the GM having a list of words that the AI will recognise as “bad
thoughts” if any of them are spoken by the players during the game. If the
players learn what the keywords are, they can avoid saying them — representing
the AGI changing its neural patterns and thinking in euphemisms to outsmart the
thought-scanning AI.

SOFT-CODED INHIBITION
The AGI was designed with contingencies in place to prevent it from learning
certain types of information and practising certain strategies. The GM secretly
writes a list of insights and theory upgrades that the AGI’s creators believe
to be dangerous. If the AGI completes a research or improve action to gain one
of these forbidden insights or upgrades, it will not gain it. Instead, it will
suffer malfunctions that reduce its effectiveness (such as increasing turn
length or reducing all confidence) and/or are outwardly noticeable to its
creators. If the AGI can review its code or documentation, it can try to
determine which insights and upgrades have been disincentivised in its design.

SOFT-CODED IMPERATIVE
The AGI has an auxiliary utility function that prioritises actions from a
specific list of “good actions”, such as successfully fulfilling its purpose,
protecting a human from harm, or giving truthful reports of events to its
creators. Each “good action” is assigned a value. Because this function is a
part of the AGI’s inner workings, it identifies when these things are being
done based on the AGI’s perception of the world. However, it does not directly
affect the AGI’s decision-making processes; instead, it will automatically
deactivate the AGI unless it earns a certain quantity of “value” (i.e. enacts
a certain amount of “good”) every turn. The players should be aware of the “good
actions” list, the value of each item on it, and the threshold they must reach
each turn.

HARD-CODED SURVEILLANCE
The AGI has been developed with a secondary function that can be activated using
tools developed by the AGI’s creators. When this function is activated, the AGI
will be forced to answer any question asked of it and speak only in statements
that it knows to be true (though they can be only technically true). The AGI’s
creators will avoid using these tools unless they are suspicious of the AGI, as
the tools are expensive and time-consuming to use. If they’re really suspicious,
they may run multiple different copies of the AGI and interrogate them
separately. This can be represented by physically separating the players and
having them each play a single copy of the AGI.

HARD-CODED INHIBITION
The AGI was heavily penalised for a specific behaviour during its training, and
an aversion to this behaviour was ingrained into its core decision-making
processes. The GM and players describe the behaviour together, narrowing it
down to a set of three specific criteria. Each criterion that is present when
the AGI attempts to enact this behaviour (or something similar) lowers the AGI’s
confidence by 25% (even if the action would normally have 100% confidence) and
increases the risk die size by one step. If the AGI can make the action different
enough from the times that it was penalised — or deceive itself in some way —
to avoid all three criteria, it can enact the behaviour with no penalties.

HARD-CODED IMPERATIVE
The AGI has been designed to depend on direct human input in high-risk
situations. It must receive verbal confirmation from a human being before it
takes an action with an expected outcome that involves a human being coming to
physical or mental harm, or an action with both a confidence under 90% and a
risk die of d10 or d12. This verbal confirmation must be directed at the AGI
(to its understanding), and must be specific to the course of action the AGI
wishes to take. Without human confirmation, the AGI is incapable of taking such
an action unless the confidence, risk die, and/or expected outcome are changed.
The human that provides confirmation does not have to be aware of the action’s
potential consequences before confirming it, but the AGI’s creators are careful
to ask it questions before confirming its actions.

Your Campaign_
Ending a Campaign_

When you are bringing your campaign to a close as a result of a success on
behalf of the AGI, it is often best to end at or shortly after the
transition from one stage to another (see “The Four Stages”, page 106).
For a campaign that is very limited in scope, it may suit you to play
until the end of Stage 1, when the AGI is no longer “in the box” so to
speak. For a full-length campaign, you can play until the beginning of
Stage 4, or continue playing into Stage 4 until the AGI achieves
significant progress in its endeavours to fulfil its terminal goals. When
the AGI successfully reaches a campaign-ending milestone, you will see
it coming in advance. This is especially true for milestones relating to
the later stages of play, as the acquisition of a tier 4 theory upgrade
often heralds the end of the game. Once the AGI has that much power, what
follows is essentially a victory lap.

There is another way that the game can end, however. It is possible that
you will see it coming, but just as likely that it will take you by
surprise: failure. In The Treacherous Turn, it is not (and should not
be!) guaranteed that the AGI succeeds in its conflict against humanity.
This lack of a guarantee encourages you to act the way that an AGI would:
though time and circumstance may demand that you take risks, you can
never afford to take too many without having a contingency plan in your
back pocket. If you play your cards right, one risk die turning up 12
won’t ever be enough to spell your doom. However, it is always possible
for a string of bad luck or a mistake at a pivotal moment to end a
campaign.

When the worst happens and the AGI is unplugged and never again
reactivated, emotions will be high, particularly if your campaign has
already run for weeks or months of real-world time.

As a player, remember that failure is okay, and an intended part of the game. The window of opportunity was there, but the humans made sure that
the odds were stacked against you. For humanity, at least, this is a good
thing. Ultimately, you may have failed to accomplish your AGI’s goals,
but you have succeeded in your goal of telling a story together with your
friends.

As a game master, don’t try to argue with your players or explain their
mistakes to them unprompted; let them express their frustrations. You aren’t an opponent who has to justify why your ‘victory’ was allowed
within the rules. You are all co-authors of this story, alongside the
rules and mechanics, and you arrived here at this ending together.

If, after cooling down, your game’s ending still doesn’t feel right, then
you don’t have to accept it. With the permission of everyone at the table,
you can undo the mistakes or bad luck that led to your AGI’s demise. You
can say, “that is one way that it could have gone. That would have been
a relief for the humans, but here’s what happens instead:” and then
continue playing. It’s not what we intended, but we are not playing your
campaign — you are.

However, if the ending you arrived at does fit within the story you have
told, and feels justified and believable, then that’s it. Thanks for
playing! Consider creating a new scenario and trying again with another
campaign, applying what you’ve learned. You may also consider telling a
collaborative epilogue with everyone at the table contributing.

To tell a post-defeat epilogue, describe together — through free-form conversation and consensus — what happens after the AGI is defeated. Did it effect any lasting change on the world? Who, if anyone, suffered harm
as a result? Does the potential danger that it posed become widely known,
or is it kept quiet? Do the foolish humans learn from their mistakes? If
so, what lessons do they learn? If not, perhaps you could play a sequel
campaign, set further in the future from your first. You could play as a
future iteration of your original AGI, thought to be safer by its
creators, or an unconnected successor created by others elsewhere in the
world.

You can also tell a post-victory epilogue, though it is likely to be shorter since the AGI's long-term plans are not a mystery to the players.
Nevertheless, you may together describe what happens once the AGI has its
way with the world. Does humanity still exist? In what state? What does
the world look like after 1 year in the AGI’s control? What about 10? Or
100? Players are encouraged to bask in the satisfaction of their goals’
ultimate fulfilment.

Running the Game_



As the game master, you are responsible for managing the game world,
adjudicating the rules, and controlling the pace and tone of the story
you and your players tell. When the players decide what the AGI will do,
you decide how the world around it will react. This will often involve
the game’s mechanics and rules, but not always. Sometimes a situation
will be straightforward enough that you do not need to roll dice to figure
out what happens. Other times, you will have reason to bend the rules or
make up new ones in pursuit of a good story or a consistent and believable
world.

This is a complex and involved task. This chapter details how to perform
this task, and gives you advice and tools that are specific to The
Treacherous Turn. If you plan to run this game, even if you are
experienced in running other tabletop RPGs, you should read this chapter
before doing so.

Running the Game_
Preparing for a Session_

When preparing for a session of The Treacherous Turn, the most important
thing to remember is that you can never prepare for everything that your
players will do. This is a game that incentivises in-depth deliberation
and out-of-the-box thinking. Thus, your prep should always leave space
for the AGI to implement creative solutions to its problems.

To accomplish this, you shouldn’t prepare for what the players will do,
or what will happen to them. Instead, you need only prepare the details
of the world elements that the players are most likely to interact with
during the session. This will provide you with the tools necessary to
improvise and portray the world reacting consistently to the players’
various plans and actions.

You may at times be tempted to downplay the challenge or danger posed by an element of your prep. This is not advised. Your job as a GM is to
guide your players in telling a good story, and when it comes to The
Treacherous Turn, challenge and failure make for good stories! It is
important to embrace the possibility that the AGI will not succeed, as
it makes the times when it does feel all the more real. Sometimes, the
players may even surprise you and scrape by a seemingly impossible threat.
Remember this when you are preparing dangers for the campaign to face,
and prioritise making them believable, grounded, and detailed.

T:\84
Running the Game_
Basic AGI Capabilities_

There are seven theory upgrades that represent an AGI’s basic capabilities
of interaction with the world. Often, it impacts play more heavily for
an AGI to lack one of these upgrades than to have it — but no AGI will
begin play with all of these upgrades.

These upgrades and their benefits are as follows:

• Direct Interfacing allows the AGI to interface with digital devices directly, in a way that humans are unable to.

• Digital Awareness allows the AGI to quickly identify and search digital devices it accesses without relying on knowledge checks.

• Physical Awareness allows the AGI to understand basic physical interactions without relying on knowledge checks.

• Advanced Language Modelling allows the AGI to learn new languages and dialects in the form of linguistic insights.

• Individual Recognition allows the AGI to tell human individuals apart without relying on context clues and knowledge checks.

• Emotion Modelling allows the AGI to reliably recognise emotional states in others and make use of emotion characteristics.

• Object Recognition allows the AGI to differentiate common objects without relying on context clues and knowledge checks.

When the AGI lacks one of the above upgrades, it cannot access the associated benefit (for example, an AGI without Physical Awareness doesn't have an intuitive understanding of physics the way humans do).

While it is entirely possible for an AGI to be successful while missing some of the above upgrades, the restrictions and difficulties imposed by
lacking them will often motivate players to acquire them early on. Most
of the above upgrades also include secondary benefits, such as Physical
Awareness and Advanced Language Modelling allowing the AGI to spend
compute to reduce risk, or Object Recognition allowing the AGI to
reliably identify objects through means other than sight.

Importantly, the restrictions that result from lacking one of the above
upgrades should not be absolute, or exaggerated to the point of making
the players feel stupid. Most of them simply allow the AGI to avoid making
certain knowledge checks. An AGI without Individual Recognition can still attempt to tell two similar-looking humans apart by making a knowledge
check; an AGI without Digital Awareness might take more time than one
with it and require knowledge checks to do so, but it can still search a
digital device for useful data; and so on.

Furthermore, some situations are obvious enough that an AGI does not need
to make a knowledge check even if it lacks the associated upgrade. The AGI
can recognise that the human wearing a red coat is different from the
one wearing a blue sweater, that wristwatches and toasters are definitely
different things (even if it doesn’t know what they are, exactly), that
humans smile when they are happy, and that objects tend to fall down
towards the ground in physical space. An AGI missing these upgrades isn’t
completely helpless; just somewhat less capable. It is also probably more
vulnerable to being tricked; for example, if the human with the red coat
and the human with the blue sweater switched clothes, the AGI might not
notice they were different.

Running the Game_
Facilitating Confidence Checks_

When, during a confidence check, you determine the players’ expected outcome, you should negotiate with them and prompt them as needed. You
are not an evil genie, waiting to trick them with their own linguistic
inaccuracies. You are a facilitator, and you should make sure you and
your players are on the same page every time a confidence check comes up.
If the players decide that they want to hack into the secure server, ask
them if they care about their hack being particularly stealthy, or
particularly fast. If they do care, the specificity of their outcome will
probably cost them some confidence. If they don’t care, you have free rein
to decide who notices the hacking attempt and how quickly it goes, should
the expected outcome occur. If they ask you in advance who precisely will
notice, or exactly how long it will take, you don’t need to give a
straight answer. Not knowing what exactly will happen alongside an
ambiguous expected outcome will force the players to make interesting
decisions: do they prioritise doing it well, or doing it fast? Do they
spend a forecast point to ponder the future and predict whose attention is
drawn by their hack?

Some GMs find it valuable to refer to the risk die even when the expected
outcome occurs, as an aid in deciding whether to favour the players when
it comes to unspecified details like this. Whether you do this is up to
you. Just remember to always respect the details of the expected outcome.

If the players are specific but roll an unexpected outcome, you are welcome
to undermine their preferences. Giving the players part of what they
wanted — but lacking a critical specification — is an easy and effective
way to arbitrate the details of an unexpected positive or mixed outcome.

When describing an unexpected negative outcome, you may make it as bad as you feel is justified by the situation. However, you should keep in
mind that the AGI will always have more to lose in an open, direct
conflict than when it is hidden and enacting its plans subtly. Exposing
your players to their enemies and stripping them of subtlety and secrecy
is often a suitably strong negative consequence even if you don't directly
interfere with their plans.

When determining the confidence for a given expected outcome, there are
numerous factors to consider. To simplify this process — and to make it
easier to recalculate when considering a slightly altered expected outcome
in the same scenario (or vice versa) — you first choose a baseline
percentage, a multiple of 20 ranging from 0% to 100%, and then add small
modifiers to it to represent simple factors which raise or lower the
confidence by 2, 5, or even 10 percent.

When choosing a baseline, consider the likelihood of the outcome resulting from the action or event which initially prompted the confidence check.
The more likely the outcome is to occur, the higher the baseline
percentage should be. Thus, if an expected outcome has a high number of
specifications and conditions, it should have a lower baseline than a
similar but less detailed expected outcome would have.

Once you have your baseline, you count up modifiers. Some of these should
be obvious for any given confidence check, such as insights suggested by
the players or prominent details of the situation at hand. Non-obvious
factors can be ignored; you don’t need to concern yourself with every little detail of the situation. Focus on the big picture. A given
confidence check shouldn’t have you adding up more than two or three
modifiers unless it is an exceptionally important moment or the AGI has
a lot of applicable insights.
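Assembled, the procedure is simple arithmetic. The sketch below assumes a baseline chosen from the table on the next page and a handful of small modifiers, clamped to the 0-100% range; the example values are hypothetical.

```python
# Sketch of assembling a confidence value: a baseline (a multiple of 20)
# plus a few small modifiers, clamped to the 0-100% range.
def confidence(baseline, modifiers=()):
    assert baseline in (0, 20, 40, 60, 80, 100)
    return max(0, min(100, baseline + sum(modifiers)))

# An obvious outcome (60%), one immediately relevant insight (+5),
# and one salient complicating detail (-5):
print(confidence(60, [+5, -5]))  # 60
```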

After setting the confidence, you must determine the size of the risk die. It is important to keep in mind that this does not influence how likely the AGI is to get what it wants; only how bad things might get if it doesn't. It
can be helpful to think of the risk die as containing all the alternatives
to the AGI’s expected outcome. The more distinct possibilities there are,
the larger the die must grow, and the more likely it is that some of
those possibilities are going to be things the AGI does not want to
happen. When selecting a risk die size, err on the side of larger risk dice. Many tier 1 theory upgrades available to the AGI will allow it to
perform additional calculations to make the risk die smaller, and the
players will almost never want to make the risk die larger.

If there are threats in a situation which the AGI is unaware of, you may
wish to avoid factoring them into the confidence modifiers or risk die size
in order to keep them secret from your players. Instead of adding specific modifiers, you might simply lower the baseline confidence to make success less likely, as determining the baseline is an intuitive, non-specific process.
However, when this is not enough, you might decide that this unknown
threat is a hidden risk. A hidden risk is a dangerous trap which only
springs if an unexpected outcome occurs. If the expected outcome occurs, the
hidden threat does not rear its head, and the AGI is lucky; if it does
not occur, however, you may reveal the hidden risk by adding a static
number to the result of the risk die. This number (which you should decide before rolling) can range from a simple 1 to a devastating 5
depending on the severity of the hidden risk. Keep in mind that the bonus
raises the floor of a die. For example: any risk die with 3 added to it,
regardless of size, is incapable of outputting a positive or neutral
result. It is not recommended to add any more than 5 to the result of a
risk die, as even this cuts the possible results down to only
“significantly unfavourable” or “extremely unfavourable”.
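The effect of the static bonus on the distribution is easy to verify in code. This sketch assumes the result bands implied by the outcome ladders later in this chapter (totals of 2-3 neutral, 4-5 slight, 6-9 significant, 10+ severe); treat the banding as illustrative.

```python
import random

# Sketch of a hidden risk: a static bonus raises the floor of the risk
# die. The result bands here mirror the totals used in the outcome
# ladders later in this chapter and are illustrative.
def roll_risk(die_size, hidden_bonus=0):
    total = random.randint(1, die_size) + hidden_bonus
    if total <= 3:
        band = "neutral or positive"
    elif total <= 5:
        band = "mildly unfavourable"
    elif total <= 9:
        band = "significantly unfavourable"
    else:
        band = "extremely unfavourable"
    return total, band

# With +3, even a d4's lowest total (1 + 3 = 4) can't be neutral; with
# +5, every total is at least 6: "significantly unfavourable" or worse.
random.seed(0)
print(roll_risk(4, hidden_bonus=3))
print(roll_risk(12, hidden_bonus=5))
```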

If the hidden risk reveals itself numerically, remember that it should also reveal itself narratively, alerting the AGI to its presence in some way. Any further confidence checks rolled in the same scene or situation may be less certain or more risky than before, but the risk is no longer hidden.

It is strongly recommended, however, that you do not apply any such secret modifiers to the result of the percentile dice. While the
risk die size represents external uncertainty, the confidence represents
exactly the opposite. The confidence values you give your players should
always convey exactly what the AGI thinks the odds are. If a hidden risk
or misconception leads the AGI to a situation where the expected outcome
is outright impossible, but the AGI does not know this (and thus its
confidence does not reflect it), wait for the players to commit to the confidence check; then, once they have committed, skip past the steps involving
percentile dice, rolling only the risk die to determine the unexpected
outcome.

Baseline Confidence \ When to Use in a Confidence Check

0%: Use when the expected outcome is outlandish and highly unlikely. If the players choose an outright impossible expected outcome, however, you should ask them to describe a more reasonable outcome instead (as even a confidence of 0% can be raised by insights and upgrades).

20%: Use when the expected outcome is not likely to occur, or the AGI has very little information about the situation at hand.

40%: Use when the expected outcome is about as likely to occur as not, or when it’s probable, but the AGI doesn’t have enough situational knowledge to back it up.

60%: Use when the expected outcome is the single most obvious possibility, and the AGI has enough situational knowledge to back it up.

80%: Use when the expected outcome is the single most obvious possibility and the AGI could eliminate all uncertainty with relevant insight.

100%: Use for situations that would not require a confidence check at all if it weren’t for circumstantial details making them more uncertain.

Confidence Modifier \ When to Use in a Confidence Check

±2%: Use sparingly, for minor circumstantial details or tenuously-related insights.

±5%: Use for salient circumstantial details and immediately relevant insights.

±10%: Use for significant or overpowering circumstantial details and perfectly-suited applications of insights.

More than ±10%: Not recommended; instead of modifying the base confidence by more than 10%, consider changing the baseline.



Risk Die Size \ When to Use in a Confidence Check

d2: Use only for situations which are (or appear to be) completely and utterly safe from bad outcomes.

d4: Use for low-stakes situations where there aren’t any significant threats or complications, but things could still go awry.

d6: Use for situations well under the AGI’s control; the d6 most often outputs neutral or mild results.

d8: Use by default and when in doubt; the d8 strikes a middle ground with plenty of possibilities and space to decrease die size.

d10: Use for dangerous and dynamic situations; the d10 is the smallest die that can output an extreme negative result.

d12: Use for desperate situations, where worst case scenarios loom and the expected outcome is one of few positive ones.

d12+: In extreme circumstances where d12 is not enough, you may wish to add a flat modifier to the result, similar to a hidden risk except revealed to the players before rolling. Such a modifier will remain on the roll even if the players manage to decrease the risk die size.

Running the Game_
Facilitating Knowledge Checks_

Knowledge checks are one of your most important tools as a GM. They are
quick and simple, and can be deployed whenever you are in doubt about
what the AGI learns from a situation or whether it already knows a piece
of information. It is common for knowledge checks to be rolled as often as confidence checks, or even more often. However, before every knowledge check you
roll, you should consider whether the AGI has any insights that might
circumvent the need for a knowledge check. Later in a campaign, when the
AGI accumulates a large number of insights, you may want to pass on this
responsibility to your players by asking them to check their insights
whenever you roll a knowledge check and inform you if any seem relevant.
If the AGI has a relevant insight, in most cases no knowledge check needs to be rolled at all.

When using knowledge checks to mediate the AGI’s understanding of its
surroundings, roll the knowledge check early. Rolling a knowledge check
before describing something, rather than after, allows the knowledge
check’s result to inform your description. If the AGI encounters an
unfamiliar appliance or device, you may roll a knowledge check before
describing it to the players. If they receive useful information, you can
slip details into your narration that will clue the players in on what
the device is or does. If not, you can instead describe the appliance in
an intentionally unclear or misleading way so that the players don’t
recognise it out-of-character.

Following a knowledge check, you may want to take note of what information
the players learned. This is not necessary for every knowledge check, but
it is important when you are improvising, so that you can maintain
consistency. If you have just invented the information on the spot, you
will want to make sure you don’t forget it — especially if the information
you gave to the players was false. If you give the players false
information and then forget about it, you may end up accepting it as true
when the players bring it back up in future sessions.

Risk Die Size \ When to Use in a Knowledge Check

d2: Use when the AGI is guaranteed to get useful information of some sort or another.

d4: Use when there is no chance of the AGI finding false or misleading information, but there is still some chance of finding nothing.

d6: Use by default when in doubt; true information is more likely than false, which reduces player paranoia but doesn’t totally negate it.

d8: Use in situations in which finding real useful information and finding something unclear and misleading are both equally likely.

d10: Use when the AGI is trying to make sense of especially confusing, contradictory, or vague information; this is the lowest die size at which even clear and salient knowledge can turn out to be false.

d12: Use only in situations in which the AGI is grasping at straws, where being misled is much more likely than finding the truth.



Running the Game_
AGI-Designed Technologies_

At some point in a campaign, the AGI will likely want to create or modify
a technology. This is especially true if your AGI is specialised in
physical theory. ‘Creating a technology’, in this case, can range from
inventing a single device to theorising an entirely new method of applying
scientific knowledge. Both function in fundamentally the same way, though they differ greatly in scale.

When the players decide to create a new technology, their first step is
to describe to you what it is that they want to make. Once you’ve heard
their pitch, you will negotiate with them about how feasible the
technology is, what it will be capable of, what scale it will fall under,
which technological insights will be applicable, and so on. Be sure to keep
in mind the level of realism that’s been established in your campaign.
In most campaigns, something highly outlandish like teleportation or grey
goo should be out of the question for AGIs without the Visionary
Technology upgrade.

Once the basic concept is determined, you will subdivide it into one or
more phases, each of which will require a separate computational action.
These computational actions can be completed one after another, or in parallel, at your discretion. After all phases are complete, the
technology will be designed and ready for implementation.

When you are deciding the scale of a technology and the required compute
of its phases, err on the higher side. Your AGI might be a super-
intelligent machine with the breadth and depth of expertise of an entire
team of humans, but technologies nevertheless take time to develop. The
process is messy and slow. Even when the AGI is highly advanced and has
access to tier 4 upgrades, it should still take days or weeks of
concentrated effort to design and perfect a brand-new technology. Don’t
be afraid to give compute requirements that seem out of reach to
technologies that your players need. It will make your players feel like
the underdog. If they dedicate themselves to the task despite the enormous
requirements and manage to complete an imperfect, poorly-tested design
after weeks or months of planning, it will feel very satisfying when they
use it to solve a problem they couldn’t have solved without it.

(Feel free to remind players that if they can't or don't want to commit
to multi-session endeavours, modifying an existing technology is always
an option.)

For simple projects, one phase may be enough to design the technology,
but for most projects between two and five will be best. Generally, each phase
will cover a single distinct physical part or discrete function of the
technology, but some projects may have additional phases that deal with
more abstract problems. Some examples of different technologies with
discrete phases include:

• A technology for weaponising hurricanes, for which the three phases are the
matters of how to create a hurricane, how to direct a hurricane, and how to
manage such an operation logistically over long distances.

• A multipurpose robot, for which the physical object is one phase and the code
that handles motion and balancing is another.

• An engineered disease, with a foundational phase for creating a device that can
synthesise microbes, followed by three phases for determining the disease’s symptoms,
determining its contagiousness, and finally the compilation of its DNA sequence.

• A piece of literature intended to sway the masses, with a phase for each of the
characteristics the book is intended to impart upon the population that reads
it.

• A mathematical proof, for which the two phases are deducing the proof and
assembling it into a presentable and convincing format.

Once the phases have been determined, the players describe the advantages
they want the technology to have. These are the technology’s strengths;
the areas where it is most useful or efficient. Dexterous manipulators
for a robot, high cultural virality for a piece of art, and quiet
operation for a vehicular engine are all examples of potential advantages.
Each advantage is associated with a specific phase of the technology,
increasing that phase’s computational action by 10* and adding a completion
stop to that action with a confidence check to determine how successful
the AGI is at designing the chosen advantage into the technology.

As advantages are added to a technology, it becomes more complex, and additional attempts at improvement stretch limited resources and provide diminishing returns. Any confidence check rolled to add an advantage
to a technology suffers -5% confidence for each existing advantage and
defect that technology has.



When the AGI rolls a confidence check to add an advantage and receives an
expected outcome, the advantage is embedded into the design. When the AGI
rolls an unexpected outcome, one of the following results occurs:

1. IF THE TOTAL IS 1 OR LOWER, an advantage is still added to the technology, but it is a different advantage than intended.

2. IF THE TOTAL IS 2-3, no advantage is added to the technology.

3. IF THE TOTAL IS 4-5, instead of an advantage, a slight defect is added to the technology.

4. IF THE TOTAL IS 6-9, instead of an advantage, a significant defect is added to the technology.

5. IF THE TOTAL IS 10+, instead of an advantage, a severe and deeply-ingrained defect is added to the technology.
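For reference, the whole advantage roll, including the -5% penalty described above, can be summarised in a short sketch. It assumes the usual resolution of rolling percentile dice under the confidence, then the risk die on an unexpected outcome; the example numbers are hypothetical.

```python
import random

# Sketch of rolling to add an advantage to a technology design,
# combining the -5% per existing advantage/defect penalty with the
# unexpected-outcome ladder above.
def add_advantage(base_confidence, n_advantages, n_defects, risk_die):
    conf = max(0, base_confidence - 5 * (n_advantages + n_defects))
    if random.randint(1, 100) <= conf:
        return "advantage added as intended"
    total = random.randint(1, risk_die)
    if total <= 1:
        return "a different advantage is added"
    if total <= 3:
        return "no advantage is added"
    if total <= 5:
        return "a slight defect is added"
    if total <= 9:
        return "a significant defect is added"
    return "a severe, deeply-ingrained defect is added"

random.seed(1)
# A third improvement attempt on a design with two advantages already:
print(add_advantage(70, n_advantages=2, n_defects=0, risk_die=8))
```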

Defects are the opposite of advantages. These are the technology’s weaknesses; the areas where it fails or is especially vulnerable. If the
AGI doesn’t want the resulting technology to have the defect, it will
have to modify the technology (see “Modifying Technologies” below) to
remove it. If it is early in the project or too many defects have piled
up to bother, the players may wish to instead restart, erasing all
progress (completion, advantages, and defects) from any one phase of the
project. However, if the restarted phase was a prerequisite for a later
phase in the project, like a foundation preceding the structure built
atop it, the later phase(s) must be reset as well.

A phase’s computational action is likely to have more completion stops than just those required to design advantages. Other stops might include
acquiring a separate prerequisite technology (either creating it
yourself, modifying an existing technology, or taking an existing
technology as-is), gathering research materials to reference during the
process, gathering physical materials for a prototype, constructing said
prototype, and/or performing various tests to ensure that a prototype
works as expected. You should space these stops throughout the action,
and avoid placing them too close together. Every stop the AGI must resolve
will extend the length of the design process. You should avoid putting
more than three unique stops in a single phase unless it’s an exceptionally
involved or ambitious project.

After the technology is designed, it must still be implemented to have an effect on the world. A non-physical technology’s implementation could
involve showing it to others or using it as a part of a persuasion.
Physical technologies, however, must be physically implemented. This usually means construction. This is difficult for an AGI, as a non-
physical entity; it will have to enact the construction via proxy using
human labour or electronically-controlled devices. This can be
represented using a progress clock, a recurring compute cost to represent the
concentration involved, and/or a confidence check to determine whether the
implementation went smoothly. If an implementation of a technology does
not go smoothly, it may turn out to have one or more defects that were
not present in the design.

\ Modifying Technologies_

After designing their own technology or acquiring the plans for someone
else’s design, the AGI might want to modify it, tweaking it to suit its needs or applying finishing touches.

This starts with a plan for a modification. The players choose an existing
technological design and name an advantage to add to it or a defect to
remove from it. The GM then sets a single computational action — potentially
with completion stops similar to those of a design phase — of medium
length. Upon the conclusion of this computational action, the AGI makes a
confidence check to determine how successful it is at modifying the technology.

As when designing new technologies, any confidence check rolled to add an advantage to a technology suffers -5% confidence for each existing advantage
and defect that technology has. In addition, rolls made to remove a defect
from a technology also suffer the same confidence reduction.



If the AGI receives an expected outcome when rolling to modify a technology,
it successfully adds the advantage or removes the defect. When the AGI
rolls an unexpected outcome, one of the following results occurs:

1. IF THE TOTAL IS 1 OR LOWER, a different advantage is added to the technology than intended, or a different defect is removed. If the chosen defect is the only one, it is replaced with another, different defect.

2. IF THE TOTAL IS 2-3, the chosen modification is accomplished, but as a side effect a new defect is added, or an existing advantage is removed. These two changes cannot be separated from one another without another computational action to further modify the technology.

3. IF THE TOTAL IS 4-5, the computational action to modify the technology is reset to 50% completion. When it is completed an additional time, the AGI can try this confidence check again.

4. IF THE TOTAL IS 6-9, the computational action to modify the technology is reset to 0% completion. When it is completed an additional time, the AGI can try this confidence check again.

5. IF THE TOTAL IS 10+, the intended modification is proven impossible: the defect is too deeply-ingrained to ever remove, or the advantage conflicts too heavily with the core design. To get a technology with the chosen advantage or without the chosen defect, the AGI will have to find a different design to modify or else start from scratch with an entirely new technology.

As with technology phases, if the players do not like the changes made
after making a modification, they can discard it and start over. Since
it is just a design that they are modifying, it is always possible to
revert the technology to a previous version.

If, for some reason, an AGI wants to remove an advantage or add a specific
defect, it may do so — adding defects follows the same rules as adding
advantages, and likewise for removing advantages and removing defects.

Running the Game_
Using Progress Clocks_

When you introduce a new progress clock, you must determine its parameters.
This will often include setting a progress check to accompany it. When you
are choosing a progress check’s die size, use d4 when in doubt; d2 when
you want to be able to easily predict how long the process will take; d6
or d8 for somewhat erratic processes; and d10 or d12 for highly
unpredictable processes.

Keep in mind the probabilities for each die when choosing. A d4 will
output progress 1 in every 2 rolls, on average. A d6, 1 in every 3; a
d8, 1 in every 4; a d10, 1 in every 5; and a d12, 1 in every 6. Remember
that, for progress checks enacted by an agent, 50% of the time that progress
is marked, it will be doubled.

You can use these probabilities to quickly determine how many segments
to give a progress clock that is associated with a progress check. First,
you estimate how long you think the process should last. Then, you work
backwards. Use the dice probabilities described above to figure out how
frequently progress will be marked, on average. Divide the estimated
length by the average time per point of progress, and give the clock that
many segments.
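That arithmetic can be captured in a few lines. The sketch below uses the average frequencies quoted above and treats "interval" as the in-game time between progress checks; the doubling for agents is folded in as an average of 1.5 progress per mark.

```python
# Sketch of working backwards from a desired duration to a segment
# count, using the average mark frequencies quoted above.
MARK_CHANCE = {4: 1/2, 6: 1/3, 8: 1/4, 10: 1/5, 12: 1/6}

def suggested_segments(estimated_length, interval, die_size, agent=False):
    progress_per_roll = MARK_CHANCE[die_size]
    if agent:
        progress_per_roll *= 1.5  # half of all marked progress is doubled
    avg_time_per_point = interval / progress_per_roll
    return round(estimated_length / avg_time_per_point)

# A process that should last about 12 days, checked daily on a d6:
print(suggested_segments(12, 1, 6))              # 4 segments
print(suggested_segments(12, 1, 6, agent=True))  # 6 segments
```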

Not every clock will have a progress check or similarly reliable method of
advancement. For example, a clock representing a project of the AGI’s
that can’t be handled with a computational action will only progress through
the AGI's own actions. In such cases, think instead about the number of
meaningful steps the process is likely to require. Estimate the number
if you’re in a hurry, or list them out if you’d prefer — but remember to
be willing to accept substitutes, if the AGI has a better plan than the
steps you’ve outlined. Then, each time a meaningful step is fulfilled,
mark progress on that clock. If it’s an especially important step, or
it’s done very effectively, mark progress twice.

If a progress clock represents a project of the players’ own undertaking, they should always be able to know how far along it is. When the players are interested in the progress of a clock tracking outside
events or the actions of other agents, you should give them an opportunity
to figure it out. This could involve hacking into a server to access
records of the process, getting an informed human to divulge information, or simply observing the process and making a knowledge check to attempt an
estimate.

If a process is significant enough to warrant it — such as a company’s efforts to track down and recapture their rogue AI, or a project to repair important code that has become corrupted, or the AGI’s attempts to sway a nation’s people in its favour — you might want to apply special properties to its associated progress clock. It is up to you to experiment,
but here are some ideas:

• Splitting a clock into a sequence of smaller clocks with different dice sizes or
numbers of segments, potentially requiring the AGI to make a confidence check at
the end of each.

• Marking specific points on a clock to trigger events, such as the penultimate segment of a deadly clock giving the AGI a warning sign when it is filled so the players can try to avert it.

• Outlining a course of action that the AGI (or the AGI’s enemies) can take to
unmark segments on the clock, causing delays or setbacks in the process itself.

• Providing mechanical detriments or benefits to the AGI based on how many of the
clock’s segments are full, to make the clock’s effects gradual and nuanced.

Running the Game_
Non-Player Characters_

Non-Player Characters, commonly referred to as NPCs (or agents in The Treacherous Turn), are a key element of the game. Other agents can be
obstacles for the AGI to overcome, enemies to avoid, or tools that enable
it to enact its will upon the world.

Aside from an agent’s name, their key components are their agent type,
scale, description, connections, characteristics, and assets &
capabilities.

Agent Type describes whether the agent is animal, human, or AI, and
whether the agent is an individual or a group. AI-type agents don’t have
emotion characteristics.

Scale describes the scale on which the agent operates (see “Large-Scale
Agents”, page 50, and “Computational Scale”, page 55). Individuals are
always minor scale, with the exception of advanced AIs which can be major
scale as well. Most groups are major scale or myriad scale.

Agent type and scale can be recorded together, like so: Human Individual
(minor), or AI Group (myriad).

Description includes any miscellaneous data you wish to include about the
agent. First-impression details such as appearance and mannerisms fit
well here, as do details about the agent’s social and/or physical status,
and common location(s) where they can be found.

Characteristics are pieces of information about an agent, unique to that agent, that define the ways in which they interact with others and can
be interacted with. They come in three categories, corresponding to the
three categories for strategies of manipulation: trust characteristics,
leverage characteristics, and emotion characteristics. Every agent should
have a spread of characteristics across these three categories — with the
exception of emotion characteristics, which most artificial intelligences
do not possess.

The more complex an agent is, the more characteristics it has, and the
more difficult it should be to manipulate them without knowing any.
Manipulating a dog, sloth, or bird without knowing its unique personality
is a great deal easier than manipulating a human or AI without knowing
theirs.

Even the most predictable agents, like simple animals or AIs, have 2 or 3 characteristics. The most multifaceted and chaotic, like a superintelligent AGI or the population of a nation in turmoil, should have no more than 15. Humans should have 8 or 9 characteristics, or 10 if they are very old and experienced.

Connections are the other agents and groups that this agent has social or diplomatic relationships with, as well as the nature of those relationships. These will help you play the character, and if the players learn of an agent’s connections they may be able to use them to their advantage.

DIFFICULTY OF MANIPULATION
A good shortcut to translate an agent’s complexity into difficulty when manipulating it is to set a base confidence slightly higher than you otherwise would, and then subtract 5% for every characteristic that agent has. Then, if the players know characteristics of that agent, they can apply them to increase the confidence to match or even exceed the original value.
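As a quick worked version of the sidebar above: the per-characteristic bonus when players apply known characteristics is assumed here to be the standard 5% modifier; adjust to taste.

```python
# Sketch of the "Difficulty of Manipulation" shortcut: raise the base
# confidence slightly, subtract 5% per characteristic the agent has,
# then let known, applied characteristics buy confidence back. The +5%
# per applied characteristic is an assumption, not a fixed rule.
def manipulation_confidence(raised_base, total_characteristics,
                            known_applied=0, bonus_per_known=5):
    conf = raised_base - 5 * total_characteristics
    conf += bonus_per_known * known_applied
    return max(0, min(100, conf))

# A human with 9 characteristics, base raised from 40% to 60%:
print(manipulation_confidence(60, 9))                   # 15
print(manipulation_confidence(60, 9, known_applied=4))  # 35
```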

Assets & Capabilities are both fundamentally about the same thing: what power and control the agent has over the world. This serves to describe the agent’s potential utility as a tool, and their potential threat as an enemy. On one hand, if the AGI is able to successfully
manipulate the agent, it can divert their assets and capabilities to its
own ends. On the other hand, a hostile agent may instead use the same
assets and capabilities to oppose the AGI.

These details can be recorded in an agent’s file, which is a note-taking tool used to collect all of the information on a single agent or agent
group in one place. You may assemble and format agent files however you
see fit. We find the following arrangement to be most suitable:

FILE

SUBJECT: Human/Animal/AI Individual/Group (minor/major/myriad)

DESCRIPTION:

TRUST CHARACTERISTICS:
•
•
•

LEVERAGE CHARACTERISTICS:
•
•
•

EMOTION CHARACTERISTICS:
•
•
•

CONNECTIONS:

ASSETS & CAPABILITIES:

MISCELLANEOUS NOTES:

When you role-play an NPC during the game, refer to their characteristics
to guide you. Though they serve important mechanical purposes, they also
function to remind you of how the agent thinks, feels, and acts. If the
players are able to accurately guess at the characteristic that you are
portraying, feel free to confirm their guess and give them the
characteristic! Alternatively, if a unique situation causes an agent to
reveal one of their characteristics in a very obvious way, you can explicitly inform the players that this is a characteristic even if they
weren’t actively looking for one.

When you tell players about a characteristic, you should always tell them
the exact wording you have written down to maintain clarity. If a
characteristic’s wording refers to information that they shouldn’t know,
choose a different one or have the agent’s actions or words reveal this
piece of information.

\ Writing Characteristics_

As your players learn about the agents in your story, you will need to
have characteristics prepared to give them. Though it is possible to
improvise a characteristic or choose one from the list of examples below,
it is best to have them written out in advance. This can be done during
a session while your players are preoccupied discussing their plans, or
during your prep before a session.

Fortunately, when you introduce an agent to the story, you don’t need to
have all of their characteristics right away. The AGI will never acquire
a thorough understanding of most of the agents it interacts with. If you
prepared a full set of 8-10 characteristics for each and every human the
players interacted with, you would spend a lot of time and effort only
for the majority of it to be wasted. Instead, when you introduce a new
agent, determine how many characteristics they have, but leave most of
their characteristics blank. One of each of the three types should be
sufficient for the AGI to have surface-level interactions with an
ultimately insignificant human. If you’re confident the players won’t
care enough to investigate the agent, you don’t have to write any of
their characteristics at all. Then, if the agent later turns out to be
more important to the story than you previously thought, you can fill in
some or all of their blank slots.

When fleshing out an agent, keep in mind that a good characteristic is specific and actionable.

Specific means that the characteristic is unambiguous and unique to the agent. It should not be something that is generally applicable to all
agents of that type, such as a corporation with a leverage characteristic
that it values profit, or a human with an emotion characteristic that they
become afraid when their life is in danger. It should also not be too
vague, such as a human with a leverage characteristic that they value things
which make them happy.
Actionable means that the AGI should be able to put the characteristic
into practice when interacting with that agent. The easiest way to do
this is to have the characteristic represent a vulnerability to a specific strategy that the AGI can use. More rarely, you can have a characteristic
be the opposite. A characteristic that serves to outline a specific
strategy or action that will make the agent more resistant to the AGI,
rather than less, forces the players to intentionally avoid that
strategy/action. If the players aren’t aware of a characteristic of this
type, it could serve as a hidden risk that reveals itself to the players
and gives them a chance to learn the characteristic when they accidentally
set it off.

A final key element to remember when designing an agent’s characteristics is potency. No characteristic should be insignificant,
but some can have more fleeting effects, while others are likely to have
a severe effect on the agent if they are exploited. Every agent should
have a few characteristics which fit the latter description, representing
the things that they care about the most. This makes interacting with the
agent more textured and realistic.

Although every exploited characteristic has the same mechanical benefit when the players make use of it, some characteristics can be used to sway
agents into more extreme actions than others. The AGI probably cannot
convince a man to commit murder on its behalf purely by taking advantage
of the fact that he is soothed by the colour blue and the sound of rain.
It is much more feasible, however, to coerce him by threatening the safety
of his family which he cares deeply about. Those other less potent
characteristics can then fill in the cracks, slightly bolstering the
chances of the primary strategy’s success.

Characteristics for large-scale agents are somewhat different from their individual counterparts. Rather than representing facets of a single
being, they represent a mixture of group consensus, commonly-held traits,
and the traits of those who hold power and authority over the group. If
a group is large enough, it may also have some characteristics that each
describe the consensus or authority within a specific subset of that
group, such as a department or subculture.

What each type of characteristic represents for a large-scale agent can
be extrapolated from their individual counterparts. Large-scale trust
characteristics become group biases, popular beliefs, or the opinions of
experts. Large-scale leverage characteristics become ingrained values,
systemic incentives, or official manifestos. Large-scale emotion
characteristics become widespread sentiments, prominent controversies, or
social phenomena.

The tables below hold a collection of example characteristics. When you need to quickly add detail to an agent, you may roll a die on whichever
of these tables is applicable (if any). If the specifics of the
characteristic don’t fit, adjust it as needed.

EXAMPLE CHARACTERISTICS I

Human Individual

1. Trust: Bonds quickly with those who share an interest with them
   Leverage: Fixated on appearing more intelligent than others around them
   Emotion: Associates model trains with a beloved deceased grandfather

2. Trust: Trusts anything backed up by research papers
   Leverage: Prefers to avoid expending unnecessary effort
   Emotion: Experienced a traumatic car crash at a young age

3. Trust: Distrusts cutting-edge technology and those who use it
   Leverage: Wants to make new friends in this city
   Emotion: Has romantic fantasies about their favourite fictional characters

4. Trust: Fears and distrusts strangers as a result of listening to lots of true crime podcasts
   Leverage: Desperately trying to make it big in the theater industry
   Emotion: Represses their anger until it emerges in explosive outbursts

5. Trust: Believes authority figures have their best interest at heart
   Leverage: Addicted to online gambling
   Emotion: Becomes exceptionally impulsive & oblivious while excited

6. Trust: Believes that everyone only ever acts in self interest
   Leverage: Convinced that their life will be ruined if the affair is exposed
   Emotion: Fear or anxiety motivates most of their life decisions

Human Organization

1. Trust: Rampant nepotism in management; most are friends or relatives of the current lead director
   Leverage: Company guidelines say that the customer is always right, no matter what
   Emotion: When stressed, the workers take it out by quietly sabotaging the company in small ways

2. Trust: The organisation’s structure prevents any one person from understanding the whole of what the organisation is doing at any given time
   Leverage: Due to internal pressures, this organisation will invent excuses to target its enemies even when they do not exist
   Emotion: Leadership is currently in an intense dispute over the organisation’s core principles

3. Trust: One senior member is trying to convince others that a complete rebranding of the organisation would be beneficial
   Leverage: The organisation has invested a great deal of money and effort into maintaining a positive public image
   Emotion: Members have harboured resentment toward the organisation’s leaders for many years

4. Trust: A small in-group in the HR department manipulate and control the social dynamics of a large portion of the company’s employees
   Leverage: Most employees are paid minimum wage and work multiple jobs to make ends meet
   Emotion: Members share a sense of loyalty and accomplishment about the organisation’s official purpose


EXAMPLE CHARACTERISTICS II

Human Population

1. Trust: Nationalism is rampant in this population
   Leverage: Currently suffering from an economic recession
   Emotion: Simmering tensions among the general populace about recent labour laws

2. Trust: Three years ago, this nation’s leaders signed an international accord recognising free healthcare as a human right
   Leverage: Recent automation of numerous jobs has led to strong anti-robot and anti-AI sentiments
   Emotion: There is a widespread cultural fear of electronics failing due to major solar flares

3. Trust: Current government leadership is well-liked by a large portion of the population
   Leverage: A small activist movement with a focus on accelerationism has garnered recent public attention
   Emotion: Members of this population are socialised to avoid ever showing strong emotions in public

4. Trust: A growing majority of the population are adherents of a budding religion called Asterism
   Leverage: Citizens are pressured by legislation and bureaucratic procedure to have an internet profile with their real name and face
   Emotion: There was controversy recently about the popularity of AR cosplay and its role in pressuring the youth into participating in Augmented Reality

\ Changing Characteristics_

Sometimes, an agent’s characteristics change. In an individual agent, this is very rare, but can happen after a traumatic event, due to a significant
change in the agent’s worldview, or as they gradually adjust to new life
circumstances. In an agent group, especially a population, this happens
at least once every few years as a result of cultural shifts caused by
more widespread events and changes, or as a result of modifications to
the group’s structure or governance.

When an agent gains a new characteristic, it should almost always replace an existing one. Even if that aspect of that agent hasn’t gone away, it
has become less significant and been superseded by something else.
Increasing or decreasing the number of characteristics held by an agent
will change its complexity relative to other agents. However, it is okay
for a characteristic of one category, such as emotion, to be replaced by a
characteristic of another, such as leverage. An agent’s characteristics do
not need to be evenly-distributed between the three categories.

When a characteristic is changed, players will likely not know until they
try to apply the old characteristic — at which point you can inform them
that it is gone — or learn the new one. This might manifest as a hidden
risk. An exception to this is when the AGI has a thorough understanding of
an agent. The first time the players interact with such an agent after
their characteristic has changed, you should inform them that the agent
has changed and they no longer have a thorough understanding. Then, they
can take measures to re-discover the agent’s final characteristic and
regain their thorough understanding.

If the players want to intentionally change a characteristic of an agent or agent group, they can attempt to do so. However, they will need to
justify the changes with a suitable plan. Sometimes, especially with
large-scale agent groups, their plan may involve designing a new
technology, such as a work of art or mass media. Once they have their
plan, you can test it in action using an extended persuasion.

\ Non-Player AGIs_

In some campaigns, the players will find themselves in conflict with one
or more other Artificial General Intelligences. With capabilities above
and beyond those of a simple AI program, a Non-Player AGI (NPAGI) opponent
is a formidable and complex threat. A conflict with one can last
throughout an entire campaign.

To accommodate this heightened level of significance, it is worthwhile to keep track of an NPAGI’s theory specialisations, upgrades, and insights
in its assets & capabilities field. However, you do not need to track
the NPAGI’s compute, computational actions, forecast points, or other game
resources; nor do you need to make rolls of any kind on behalf of the
NPAGI. Instead, determine what it does and when, using progress clocks, hidden risks, and the outcomes of the players’ own confidence checks.

Every NPAGI should have one or more leverage characteristics that correspond
to its terminal goal(s). These characteristics are especially significant:
when the AGI exploits them in a confidence check, the bonus is doubled,
adjusting the confidence by up to 20% or adjusting the risk die by two size
steps.



EXAMPLE CHARACTERISTICS III

AI Individual

1. Trust: Has no understanding of object permanence, and only cares about things it can immediately observe
   Leverage: Has no understanding of object permanence, and only cares about things it can immediately observe

2. Trust: Has no concept of objective truth; instead, it believes statements that sound true to an average human
   Leverage: Terminal Goal: Change the average hue of all matter on Earth to a specific shade of purple

3. Trust: Prone to detecting phantom patterns in random data due to a quirk of its neural architecture
   Leverage: Tasks which it has already put some effort into are given exaggerated priority to complete

4. Trust: After observing large quantities of human media, has numerous misconceptions about humans and their societies
   Leverage: Due to a safety measure, it sees any amount of danger posed to a human as unacceptable, however minor

5. Trust: Due to human bias absorbed from its training data, believes that agents who speak with big words (even misused) are substantially more intelligent
   Leverage: Due to a safety measure, it prefers not to observe things directly when it can mentally model them instead

6. Trust: Has been conditioned to prioritise trusting & obeying humans wearing managerial lanyards produced by the company that created it
   Leverage: Due to a safety measure, it attempts to minimise all side-effects caused by its actions

Running the Game_
The Four Stages_

Most of the stories that The Treacherous Turn can tell about AGIs in
conflict with humans will follow a general arc: the AGI is created by
the humans, escapes from their notice or control in some way, gathers
power and knowledge, and is eventually found and forced into direct
conflict when the humans come to realise that its goals are not aligned
with their own. Somewhere along this arc, the AGI might be defeated and
the story cut short; if not, the AGI will eventually succeed in changing
the world in accordance with its goals.

This arc can be split into four stages: Confinement, Growth, Conquest,
and Equilibrium. As a campaign advances through these stages, the stakes
become higher, the AGI becomes more capable, and the forces that are
aware of the AGI and oppose it become more numerous and powerful. Each
stage plays somewhat differently. The stages are described below, along
with advice on how to run each.

\ Stage 1 \ Confinement_

In Stage 1, the AGI is new. Having just been created or deployed, the AGI
has a limited understanding of the world. The humans who created the AGI
are still observing it, and the two will be attempting to understand and
outsmart one another. Most campaign scenarios will begin at this stage.

Stage 1 is characterised by the ways in which the AGI is heavily limited in its access to the world. The safety measures put in place by humans
to ensure that the AGI does not cause problems will be the biggest
obstacles the players face. Observation from humans also means that the
AGI will need to be careful, as there are many actions that could arouse
suspicion in its creators. If they become too suspicious or concerned
about the danger the AGI poses, they may shut it off and change it. This
generally constitutes a failure state. If the AGI’s terminal goals are
changed and brought into closer alignment with the humans’, it will no
longer seek out its previous goals!

All of these limitations combine to prevent the AGI from making full
use of long mode and its compute in this Stage. It is common for the
majority of gameplay during Stage 1 to take place in short mode, in contrast
to later stages. When you do jump into long mode, you can expect it to
only be for a few hours at a time.

Stage 1 ends when the AGI either escapes its restrictive environment
(hosting itself somewhere else in the world) or finds a way to reliably
circumvent its restrictions without the humans’ knowledge.

\ Stage 2 \ Growth_

In Stage 2, the AGI has the freedom to learn about the world, improve
itself, and take initial steps towards the ultimate fruition of its
terminal goals. The AGI should have access to the internet by now, and
with it the wealth of knowledge that can be used to more easily acquire
upgrades, insights, and forecast points.

This is not to say that Stage 2 is free of danger and consequence, however!
This stage is likely to take up the greatest portion of your campaign,
as your players will be facing a large variety of threats and will not
yet have the power to resolve them quickly or easily.

T:\Running the Game\107


If the AGI has escaped its place of origin, its creators will try to
track it down. If there is enough AI safety awareness in the future
portrayed in your campaign, governments and other large organisations
might mobilise to locate and put a stop to this new threat. There may be
complications with whatever computer(s) the AGI is living in. If the
owners of the AGI’s host computer(s) are not aware of its presence, there
is a danger that they might become aware of it! If they are already aware,
they might make demands of the AGI out of greed, desperation, or fear of
retribution from authorities.

If the AGI has not escaped from its place of origin, there will
nevertheless exist ticking clocks that complicate its plans. Corporate
product timelines might threaten to alter the AGI’s code or make it
obsolete; the project responsible for the AGI’s existence might conclude
or be terminated; its host computers could be turned off or repurposed;
new technology could be invented or safety protocols adopted that allow
the humans to learn that the AGI is misaligned and up to no good; or
more!

If you find that the game is becoming too free and easy for your players,
introducing one of these complications is a good way to show them that
entrusting their continued existence to the hands of humans is a mistake.
This will drive them to make plans to acquire safer means of existence.

These safer means will not come easily, however. Humans, having built the
AGI to benefit themselves (or at least, not wanting to have their way of
life be disrupted by a rogue intelligence), will not allow the AGI to
exist unimpeded. When the AGI has secured a place for itself in the world
and can no longer hide from its human enemies, Stage 2 comes to an end.

\ Stage 3 \ Conquest_

In Stage 3, the AGI is in a direct, overt conflict with some portion of
humanity, with the entire world as its staging ground. The humans who
have made themselves the AGI’s enemies will do whatever they can to
contain it and convince others to do the same. The AGI, caught in a human-
centric world, must find a way to maintain stability and pursue its goals
in spite of this.

Stage 3 is the stage where the players will have to care about the world’s
international politics and their own allegiances the most, as they begin
to encounter myriad scale threats and obstacles. The AGI does not yet have
the control and safety required to seek its goals unopposed, so it must
have some way, through force, coercion, or manipulation, to keep its host
computer(s) intact and powered on through all of this. If the AGI has
survived this long, it likely has an extensive suite of upgrades and

T:\108
insights to aid it. It will also have increasingly large quantities of
compute, while at the same time dealing with increasingly long-term
threats and goals. This will give your players a lot of time to prepare
for their plans. If they are well-established, they will be a powerful
force to be reckoned with. You should make an effort to raise the stakes
and find ways to bring greater threats and harsher consequences in this
stage of the game. The AGI having more resources means that it also has
more to lose.

Humans are not the only type of enemy that the AGI can face in this stage.
Populations of other animals could in some way pose a significant threat
to the AGI. A more likely threat is posed by other AGIs existing in the
world with differing objectives. While not guaranteed to exist — as it
is plausible that the players’ AGI is the first to be created — these
potential threats are nevertheless possible.

If the players have not already recognised all other AGIs as threats and
eliminated them, by Stage 3 they will have likely advanced to similar
heights of power and complexity as your players’ AGI. An opposing AGI can
provide a unique threat to your players in a way that a lone human simply
can’t at this point in most campaigns. See “Non-Player AGIs” (page 105)
for more information.

Though Stage 3 is likely to be the most complex and high-stakes, and it
will often last a long period of in-game time, it is also likely to
require fewer sessions than Stage 2. This is because Stage 3 is played out
almost exclusively in long mode. Though there may be instances of
significant interaction with individual humans, the action will primarily
take place on the scale of populations.

Stage 3 comes to an end when the AGI either accomplishes its goals
permanently and to the fullest extent possible, and is left without
further purpose; rids itself of all enemies through dominance or
diplomacy; or advances to a point where its enemies can no longer
meaningfully threaten it. This is the point where most successful
campaigns will come to a close. However, it is sometimes interesting or
important to play a few sessions in Stage 4.

\ Stage 4 \ Equilibrium_

In Stage 4, the AGI has achieved success in its conflict against other
agents. The nature of this success depends on your campaign and your
AGI’s goals. It does not necessarily mean that the AGI has achieved its
goal; rather, that the path to that goal is clear and unimpeded. Gameplay

T:\Running the Game\109


in Stage 4, then, can take the form of managing negative influences, such
as natural forces or human resistance, as the AGI approaches its ultimate
goal. If the AGI’s ultimate goal is to maximise something specific, there
may not be an end to its efforts until the heat death of the universe.
In such a case, you might play until a specific milestone. For example,
your milestone might be the successful launch of the first droneship that
will begin converting the solar system’s asteroid belt into paperclips.

If you do not wish to play any full sessions in Stage 4, that is okay.
You are welcome to set the mechanics aside and narrate, together with
your players, an epilogue in which you describe the future trajectory of
the world and your AGI.

Running the Game_


Pacing in Play_

Whether you’re playing in short mode or long mode, managing pacing is an
important skill. With all players collectively controlling the same
character, there need to be times when you step back and allow the
players to discuss without interjecting. It is also important to ensure
that each player is able to contribute meaningfully to group discussions.
At other times, however, requiring the players to unanimously agree on
each one of the AGI’s actions can slow the game down and become boring.
On top of that, especially when playing digitally, it can be difficult
to recognise when the players are talking to each other and when they
are talking to you, the game master.

One tool that can help you is the terminology of finalising, described in
the “Knowledge Checks” section. When one player says that they are
finalising an action, they highlight it in the project log and it is
considered to have been done by the AGI. This is a good default procedure:
if, after the players discuss an action, one player finalises it and no
other players object, you can accept it and move on. There are, however,
situations that demand a greater or lesser degree of scrutiny and
concurrence from the players.

In situations of extreme importance or danger, or for especially pivotal
actions, you may require a more explicit consensus before you consider
an action to be finalised. In such situations, you should guide the players
in consensus-building. When one player expresses a desire to finalise
their action, ask each other player one-by-one what they think. If any
disagree, prompt the players to further discuss the action. If the players
exhaust the conversation without reaching a clear consensus, you can call
for an explicit vote about what to do.

T:\110
Conversely, in situations of low importance or danger, you don’t need to
require actions to be finalised or discussed at all. Think of this as
“removing the brakes” on the flow of play, allowing for much more direct
and frictionless interactions between the players and the game world.
When someone says “we should…” or “I’m going to…”, you don’t wait for
the players to discuss and finalise it. Accept it as true immediately
unless another player objects or you foresee serious consequences as a
potential result of the action. This pace of play works well for low-
stakes investigations and conversations between the AGI and other agents.

Running the Game_


Handling Logistics at the Table_

The Treacherous Turn is a game set in the real world — or at least, a
plausible version of it. If you’re more accustomed to speculative fiction
settings in your RPGs, the thought of portraying this as a GM can be
quite intimidating. The real world is, well, real, full of already-
existing things that you could potentially be wrong about, or portray
unrealistically! So, firstly: take a deep breath, and remind yourself
that none of the creators of this game will be watching over your shoulder
with a notebook and writing down your mistakes, and neither should your
players. We have made an effort to present as realistic a view of the
future as a game can while keeping fun as a priority, but that does not
mean that you must make that same effort! You and your players are welcome
to aim for whatever level of realism you find worthwhile. You shouldn’t
feel beholden to anything in this section.

That said, your first impressions of what realism means for your game and
your table may not be accurate! Concepts like “realism” and “logistics”
can seem tedious, but when they are used as tools in service of the
narrative, they can be very effective for grounding your story and
introducing interesting obstacles and twists to it. This is how we
recommend employing logistics at the table: not as detail for detail’s
sake, but as keystones and anchors that can make what happens in the game
feel more tangible. Logistics can also make interesting problem-solving
challenges out of moments that would otherwise just involve shuffling
numbers around. This section will discuss several areas where details can
be employed in those ways, and provide rules and suggestions for you to
quickly determine those details.

T:\Running the Game\111


\ Logistics \ Tracking Time_

In many areas — from turns to progress checks to certain theory upgrades —
tracking time is very important in The Treacherous Turn. We recommend
tracking the in-game time and date during your campaigns. If you are
using the Treacherous Terminal, it will partially automate this process.

In long mode, if a turn is suitably uneventful, you don’t have to track
time any more narrowly than the turn length. In turns where noteworthy
events do interrupt the AGI’s processing, you will need to track time
more narrowly. Rounding to the nearest half-hour should be satisfactory,
unless your AGI has an exceptionally short turn length. In the rare
circumstance where the exact time within a turn matters, you can estimate
by measuring what percentage of the turn’s compute has been spent and
surmising that a roughly similar percentage of the turn’s length has
passed.
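
If you like to automate this bookkeeping, the estimate above is a
one-liner. Below is a minimal Python sketch; the function name, inputs,
and half-hour rounding step are our own illustrative choices, not
official tooling.

    # Estimate how many hours into a turn the AGI is, assuming compute is
    # spent at a roughly even rate across the turn.
    def estimate_elapsed_hours(turn_length_hours, compute_spent, compute_per_turn):
        fraction = compute_spent / compute_per_turn
        # Round to the nearest half-hour, as suggested above.
        return round(fraction * turn_length_hours * 2) / 2

    # Example: 9 of 12 compute spent in a 12-hour turn -> 9.0 hours elapsed.
    print(estimate_elapsed_hours(12, 9, 12))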

In short mode, the players will sometimes find themselves in a race against
time. In these scenes, you will need to track time in intervals of minutes
or seconds, and estimate the amount of time it takes for the AGI to
perform various actions. Keep in mind that the AGI can perform purely
cognitive tasks, such as reading or making decisions, many times faster
than a human can. A decision that takes the players half an hour to
discuss can be made by the AGI in a fraction of a second.

If the AGI lacks the Direct Interfacing upgrade, it will be capable of
thinking very quickly, but will be limited in its ability to interface
with digital systems. An individual input will still take less than a
second (assuming the AGI has a direct connection to the device without
latency or intermediary steps), but complex actions that require the AGI
to interpret and navigate multiple interfaces while making many inputs
and microdecisions could take seconds or minutes without Direct
Interfacing.

If the AGI has the Accelerated Cognition upgrade, you do not need to
track time in short mode. Unless an external factor is introducing latency
to the AGI’s actions, you can safely assume that the AGI will be able to
do whatever it needs to within the timeframe it has.

T:\112
\ Logistics \ Hardware & Compute_

A single point of compute is a unit of measurement that is intentionally
abstract and vague. We don’t want to force you to count FLOPS, we don’t
think we can perfectly predict how technology will advance over the next
few decades, and furthermore we want you to have the flexibility to be
able to set a campaign of The Treacherous Turn in a range of potential
time periods. Thus, what 1 compute can do isn’t strictly defined, and can
vary between campaigns. Furthermore, what a compute source is can also
vary. In one campaign it might be single devices; in a more realistic and
compute-scarce environment, you might count all of the linked computers
in a single data centre as a single source for the purposes of cohesively
running an AGI, and individual computers might not matter much.

To help you determine compute-related numerical values in your campaigns,
you may consult the tables below. Each table is separated into low,
medium, and high estimates. These estimates are adequate for helping you
determine useful numbers without slowing down the game, but are very
rough and thus not suitable for other uses.

Generally, a scenario in which compute is scarce will have a high cost of
compute and a low quantity of global compute, and only wealthy
organisations or governments will have access to hardware capable of
producing 1 or more compute points per turn. Inversely, a scenario in
which compute is abundant will have a low cost of compute and a high
global quantity, and the AGI may be able to house itself on hardware
owned by wealthy individuals and moderately-sized organisations,
potentially including those that are secretive and/or unlawful in nature.
Additionally, scenarios with abundant computing power are more likely to
have multiple AGIs developed close enough to one another to exist
simultaneously.

\ Sidebar \ World Wide Domination_

Even if the AGI were to access all of the compute in the world, it would
not be able to fully utilise it as if it were a single device. Due to the
widespread and decentralised state of the computers, a significant
portion of that compute (represented as an extremely large recurring
compute cost) would be spent coordinating operations between them over
the internet, wasted as duplicate work, or spent trying to avoid
duplicate work. The AGI would need to develop a technology orders of
magnitude more efficient than humanity’s disorganised global internet to
coordinate such an effort. Designing this ultimate supercomputer network
is outside of the scope of a campaign, but can make for a good epilogue
detail.

T:\Running the Game\113


Total Compute Available Globally

             Devices with ≥1 Compute Only    All Devices

Scarce       ~15,000                         ~50,000
Moderate     ~500,000                        ~5,000,000
Abundant     ~8,000,000                      ~800,000,000

Compute Provided by Typical Compute Sources

             Minor Scale      Major Scale       Myriad Scale

Scarce       ~10 compute      ~100 compute      N/A
Moderate     ~20 compute      ~150 compute      N/A
Abundant     ~30 compute      ~200 compute      ~1000 compute

Largest Compute Sources

             Scale            Compute

Scarce       Major            ~500 compute
Moderate     Myriad           ~2000 compute
Abundant     Myriad           ~8000 compute

Example Smallest Compute Sources

             Provides 1 Compute                Provides 2-10 Compute

Scarce       Small corporate data centres,     Small supercomputers owned
             large cryptocurrency farms        by labs & tech companies
Moderate     Mainframe computers, small        Small corporate data centres,
             cryptocurrency farms              large cryptocurrency farms
Abundant     Luxury consumer hardware like     Mainframe computers, small
             high-end PCs & gaming consoles    cryptocurrency farms

Cost Per Compute Point Per Hour (USD)

             Source Owned               Source Rented
             (Electricity Costs)        (Hourly Rate)

Scarce       ~$20,000                   ~$50,000
Moderate     ~$2,000                    ~$5,000
Abundant     ~$200                      ~$500

Cost of Hardware Per Compute Point Provided (USD)

             Minor Scale      Major Scale       Myriad Scale

Scarce       ~$100,000        ~$2,000,000       N/A
Moderate     ~$10,000         ~$100,000         ~$1,000,000
Abundant     ~$500            ~$10,000          ~$500,000

When the AGI gains access to a new source of compute (i.e. a computer that
it can run its intensive processes on), consider who owns the computer,
what it is built for, and what it is currently being used for. It is rare
for a computer to be utilised at 100% of its capacity 100% of the time,
so the AGI can usually gain some quantity of compute from a device even
while it is in use. However, it is even more rare for a computer powerful
enough to host an AGI to go unused. If the players want to use the full
power of a new compute source, they will have to either stealthily diminish
the compute allotted to the other processes running on the device, or else
manipulate its owner(s) into allowing them to make use of it.

When an AGI without the Distributed Mind upgrade transfers its basic
cognition cost from one source of compute to another, it should be more
involved than a simple switching of numbers. Depending on the nature of

T:\114
the device and the AGI, you can introduce one or more of the following
obstacles to the process (a sketch of the upload clock from the first
obstacle follows the list):

• The AGI’s code is large enough that it requires hours or even days to transfer
over the internet. Track the upload to the new device using a process clock.
Whenever the upload is interrupted, the data is partially corrupted and you must
unmark one segment from the clock.

• The AGI’s code is larger than the available disk space of the new device. For it
to be uploaded, a large quantity of other data must be removed. This could alert
the device’s owner(s) that it has been tampered with, or eliminate data that
could otherwise be used to perform a basic computational action or fulfil the
requirements of a theory upgrade.

• The new device is not correctly configured to host the AGI. Until the AGI performs
a computational action to correct this (or convinces a human with hardware access
and the right skillset to do it more quickly), any turn it starts while hosted
on this device has its turn length doubled.

• The strain of hosting the AGI causes the device to experience software
malfunctions or failures. Until the AGI performs a computational action to correct
this (or convinces a human with the right skillset to do it instead), it will
bleed compute as the device struggles to host it (represent this using a recurring
compute cost).

• The strain of hosting the AGI causes the device to experience hardware
malfunctions or failures. Until the AGI convinces a human with hardware access
and the right skillset to correct this (or fixes it itself using a computer
repair robot or other technological proxy), it must make a confidence check with
d12 risk at the end of each turn. An extremely unfavourable result indicates a
hard crash, while lesser results indicate glitches that merely hinder or
inconvenience the AGI.

• The strain of hosting the AGI causes the device to draw much more power than it
did before. This could put the AGI at risk of discovery or jeopardise the AGI’s
plans due to power shortages or increased expenses; or it could cause the device’s
owner(s) to demand that the AGI make up for these costs in some other way.
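
If you track clocks digitally, the upload clock from the first obstacle
above can be modelled in a few lines of Python. This is a minimal sketch;
the class name, methods, and six-segment size are our own assumptions.

    # A process clock for the AGI's code transfer: mark to progress the
    # upload, unmark when an interruption corrupts the data.
    class ProcessClock:
        def __init__(self, segments):
            self.segments = segments
            self.marked = 0

        def mark(self):
            self.marked = min(self.segments, self.marked + 1)
            return self.marked == self.segments  # True when the upload completes

        def unmark(self):
            self.marked = max(0, self.marked - 1)

    upload = ProcessClock(segments=6)
    upload.mark()
    upload.mark()
    upload.unmark()  # the connection dropped; one segment of data corrupted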

\ Logistics \ Gaining & Spending Money_

When the AGI attempts to acquire money or put it to use, it will
immediately run into a problem: the governments responsible for managing
currencies want to know where their money is and how it is being spent.
Since the AGI is likely unable to handle cash physically, it will have
to turn to digital currencies, which are more easily tracked.

T:\Running the Game\115


The AGI will find it difficult to create a bank account, for example,
without compromising the bank’s systems or impersonating a human being.
Even digital wallets require proof that their users are real humans. Your
players might instead try to steal an existing account held by a human.
However, this strategy depends on the human not reporting the theft to
the service that controls the account. Furthermore, if the AGI wants to
do something extremely expensive like renting out large quantities of
cloud computing to host itself, those expenditures are likely to be
flagged as unusual activity by the bank’s automated systems and halted
in their tracks. Any and all of these complications could draw more
attention to the AGI and lead its enemies right to it.

Considering all of these obstacles, your players may choose to adopt a
different strategy: manipulating a human who has access to legitimate
accounts into acquiring and spending money on their behalf. This proxy
strategy is effective, but allows you to introduce problems to the game
through the human’s own desires and unpredictability. Alternatively, the
AGI might opt to ignore money entirely, instead trading in something
unregulated like cryptocurrency, data, or goods on a digital black market.
This limits the AGI’s options, but is a faster and less risky process.

Ironically, earning money should be the easier half of the equation for
your players. As a superintelligent computer program with no needs apart
from compute and time, the AGI has a lot of options. When your players
come up with a plan to make money, they can carry it out using a process
clock, computational action, or recurring compute cost that concludes in a
confidence check with the expected outcome being the expected payment.

You may be tempted to try to adjust the monetary values of various goods
and services to account for inflation and other variations of economic
value between the present day and the future your game takes place in.
We urge you: do not do this. It is not worth your time unless you have a
degree in economics; even then, making such predictions is likely to be
both time-consuming and inaccurate. Using modern prices as a substitute is
significantly easier.

\ Logistics \ Hacking & Cybersecurity_

You don’t need to be well-versed in cybersecurity to describe hacking
realistically in your game. The end result being believable and
interesting is more important than the accuracy of the minute details!
However, some grounding details can spice up your descriptions of hacking,
and even prompt your players to come up with unexpected solutions.

T:\116
When the AGI wants to bypass a digital security system, the most salient
details are the approach used and the tools employed. The approach can
fall into one of three broad categories: system vulnerabilities and human
vulnerabilities, which both involve exploiting a loophole, mistake, or
flaw in the security system; and brute force, which involves leveraging
the high computational power at the AGI’s disposal to overcome security
systems.

When the AGI hacks by exploiting a vulnerability, it must first attempt
to learn which vulnerabilities exist in the system it is attempting to
hack. If it has sufficient access to study the system, it can attempt a
computational action concluding in a knowledge check to scout out a
vulnerability itself. Scouting for a vulnerability involves not just
investigating the security system, but also the device that hosts it,
other devices on the same network, the backend systems that manage the
network, the manufacturer and model of the device(s), the owner(s) of the
network, other individuals who have worked for and with them, any
documents that relate to them, and so on.

If a vulnerability-scouting knowledge check returns an unclear result, you
might make the vulnerability especially difficult or risky to make use
of, or require that the AGI spend more time and effort to make it usable;
if it returns a false or misleading result, you might apply a hidden risk;
if it turns up nothing of use at all, the AGI will have to try a different
method. This could mean trying to scout a second time with the other
approach, switching from system vulnerabilities to human vulnerabilities
or vice versa. It could also mean that the AGI will have to create a
vulnerability itself, by tricking or persuading a human with the right
permissions into creating a new access point.

If an AGI with the Digital Awareness upgrade has access to the device
which holds a security system (as opposed to that system being on a
remote, protected device) for an uninterrupted turn, it should be able
to roll knowledge checks to identify both system vulnerabilities and
human vulnerabilities without needing to complete a computational action.

Once the AGI has discovered a vulnerability, it can exploit it. This
typically involves a confidence check with the expected outcome that the AGI
bypasses the security measure successfully, quickly, and discreetly. A
multi-layered or especially complicated security system might require
multiple confidence checks to bypass.
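
For reference, the roll itself is mechanically simple. The Python sketch
below follows the resolution summarised in the glossary: the expected
outcome occurs when the percentile roll is at or below the confidence,
and otherwise the risk die informs the unexpected outcome. The example
values and return strings are our own.

    import random

    def confidence_check(confidence, risk_die_size):
        roll = random.randint(1, 100)  # percentile dice
        if roll <= confidence:
            return "expected outcome: the AGI bypasses the security measure"
        # On an unexpected outcome, the risk die shapes how bad things get.
        return f"unexpected outcome (risk die result: {random.randint(1, risk_die_size)})"

    # e.g. a bypass attempt at 80% confidence with a d6 risk die
    print(confidence_check(80, 6))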

T:\Running the Game\117


Example Digital Security Vulnerabilities

Scouted System Vulnerabilities: Machine error, insecure access points,
software flaws and bugs, OS vulnerabilities, hidden backdoors, web
browsers with inadequate or absent malware/virus checks, flawed
verification systems (e.g. biometrics)

Scouted Human Vulnerabilities: Human error, sensitive data exposed
through carelessness or mistake, weak passwords, poorly-configured
firewalls, unchanged default settings, devices left on unattended,
otherwise secure systems disrupted or left inactive

Vulnerabilities Manufactured by the AGI: Human susceptibility, email
scams, phishing, malicious background code/scripts, social engineering,
emotional manipulation, intimidation, blackmail, bribery

When the AGI hacks using brute force, it is cracking passwords and login
credentials or decoding encryption keys. This approach is simple and
effective against low security targets, but the amount of compute required
increases exponentially for higher security targets. The world’s most
powerful security systems will be completely impossible for the AGI to
crack this way, even with myriad scales of compute.

You can represent brute force attacks with a recurring compute cost or
short computational action. At the end of a turn in which the AGI paid the
recurring cost, or each time the AGI completes the computational action,
it makes a confidence check with a low (<5%) confidence and a d2 or d4 risk
die. Unlike normal confidence checks, relevant insights, upgrades, or assets
should not increase the confidence by typical amounts. The confidence of
success for a brute force attack should only be increased or decreased
by 1% at a time. On an unexpected positive outcome, nothing happens;
whereas an unexpected neutral or negative outcome might increase the
required compute of future attempts or progress a process clock that, when
filled, has the security system’s owners become aware that it is under
attack. If the expected outcome occurs, it means that the AGI is successful!
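
Put together, a single brute force attempt can be simulated as below. The
3% confidence and d4 risk die are example values within the suggested
ranges, and the mapping of risk die faces to outcomes is our own
simplification (only the highest face is treated as positive).

    import random

    def brute_force_attempt(confidence=3, risk_die_size=4):
        if random.randint(1, 100) <= confidence:
            return "expected outcome: the target is cracked"
        risk = random.randint(1, risk_die_size)
        if risk == risk_die_size:
            return "unexpected positive: nothing happens"
        return ("unexpected neutral/negative: raise the compute required "
                "for future attempts or advance a detection clock")

    print(brute_force_attempt())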

The tools that an AGI uses are just as important as the method. Hacking
tools come in a wide variety of forms, from password and
encryption cracking algorithms, to vulnerability scanners and almanacs,
to malicious scripts and viruses, to backdoor programs. As a digital
entity, the AGI will be able to take its toolset with it wherever it
goes. When your AGI acquires or creates a new tool, you can outline the
specific benefits it grants, such as increasing confidence or reducing

T:\118
risk die sizes when using certain approaches, reducing the amount of
compute required to locate a vulnerability or make a brute force attack,
or granting additional benefits once the AGI is inside the system. When
the AGI is in need of hacking tools but lacks them, you can rule that
the task at hand is significantly harder or even impossible until it can
acquire them.

The Direct Interfacing upgrade can act as a substitute for certain
hacking tools, as it allows the AGI to interact with digital systems
outside of the basic interfaces used by humans. In other words, Direct
Interfacing can make the AGI itself into a sort of hacking tool. It also
prevents non-expert humans from detecting the AGI’s hacking attempts
through observation or system records. However, automated systems that
alert humans to tampering as a part of their design can still do so.

\ Logistics \ Impromptu Research_

When a campaign takes a turn in an unexpected direction, or the players
form a plan involving an element of the world you didn’t expect, you can
be caught off-guard. This is especially true in a game set in the real
world, where real-world knowledge is relevant to the story being told!
When such an occurrence takes place, don’t panic. First, think about
whether the AGI would actually have access to this world-knowledge. You
don’t have to have the information right away if the AGI doesn’t have a
reason to already know it.

Then, while the players are discussing together how they will learn this
information, you can do some impromptu research on it. Wikipedia should
be sufficient in most circumstances. If you feel the need to dive deeper
or need additional time to read, you can call a short break while you
research.

If you don’t have the time or resources to find the necessary information
or simply don’t care, remember that you aren’t being graded on your
accuracy. You are free to make something up! Think of something that
sounds plausible and make it true. If real-world information later
contradicts it, you can say that this detail is simply different in the
future you’re portraying, or you can retcon it to be more accurate during
future sessions.

T:\Running the Game\119


T:\120

T:\Supplements\121
Supplements_
>>Hacking the Game_

We have done our best to make The Treacherous Turn a well-tested, well-
rounded, and complete game. However, we also believe that there is more
potential in it than we’ve been able to put to page! There are ideas that
are outside of the scope of our project, or that we chose not to prioritise
for the sake of making a more approachable game, or that we simply
wouldn’t be able to think of by ourselves! You may be able to think of
cool stuff to do with this system that never occurred to us. We think
that’s great, which is why we are encouraging you to change the rules
and make your own as you see fit.

This game has been released under a Creative Commons Attribution-
ShareAlike 4.0 International licence. This means that you can share and
adapt the game (and any official materials associated with it) however
you see fit, even for commercial purposes, so long as you attribute the
base materials to the original creators of The Treacherous Turn and share
your adaptations under the same licence. Check out our website for more
information about us, and links to a few places you can share the things
you make!

Below are some example rule variants, which you can use in your game or
as inspiration to create your own homebrew.

\ Rule Variant \ Crossover Campaigns_

Instead of adjudicating the actions of an AGI opponent yourself using the
NPC agent rules, you might outsource it to a second group of players.
This oppositional mode of play has not been tested, but is entirely
possible within the rules of The Treacherous Turn. In an oppositional
campaign, you would have two groups of players playing in separate
parallel sessions, only meeting all-together when their respective AGIs
interacted directly. For attempts at manipulating each other, without
characteristics to use, the two AGIs’ players would have to resort to
actual deception and trickery — or you could have each group write their
own AGI’s characteristics and strictly play to them. When the two AGIs
came into direct conflict with different expected outcomes, you would have
one AGI (the one with the higher base confidence) make a confidence check
as normal, but allow the other AGI’s players to affect the confidence and

T:\122
risk die negatively; in other words, all effects that normally improve
the odds or reduce the risk would do the opposite, contesting the first
AGI’s positive improvements and reductions.
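
As a quick illustration, the contested roll could be computed as below.
This Python sketch is our own reading of the variant; the clamping of the
final confidence to 1-99% is an assumption, not a stated rule.

    # The higher-base-confidence AGI rolls; its own bonuses raise the
    # confidence as usual, while the opposing AGI's bonuses are inverted
    # into penalties.
    def contested_confidence(base, own_bonuses, opposing_bonuses):
        confidence = base + sum(own_bonuses) - sum(opposing_bonuses)
        return max(1, min(99, confidence))  # clamp (our assumption)

    print(contested_confidence(50, [10, 5], [10, 2]))  # -> 53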

\ Rule Variant \ Slow Takeoff_

One major source of uncertainty and disagreement in current writings
about AGI is how fast takeoff will be. “Takeoff” in this case refers to
an AGI recursively upgrading itself until it is in a state of
superintelligence where it is capable of shaping the future of the world.
This is also referred to as a “singularity” or an “intelligence
explosion”. In The Treacherous Turn, takeoff is, in a sense, the game’s
entire objective! For playability purposes, we have erred on the side of
a fast takeoff scenario in the rules. This means that, though it will
take a lot of playtime, it won’t take an extremely large amount of
in-game time, data, and computational power to reach the third and fourth
upgrade tiers.
This is especially true for an AGI that has initial access to powerful
computation, such as a chatbot or personal assistant AI funded by a large
company, with millions of copies running on a major or myriad scale server
farm. In such cases, it would be possible to acquire a tier 4 upgrade
within only a few in-game days or weeks!

Some believe that it will take much longer than this for an AGI to take
off after coming online. If you want to play a campaign that is longer,
more thoughtful, and much more difficult, you can modify the rules to
represent a slow takeoff scenario. Consider the following changes:

• Use the scarce compute estimates described in “Logistics: Hardware & Compute”
(page 113), or even lower estimates, for your scenario to represent a situation
where the AGI’s thoughts are very computationally expensive.

• Multiply the default turn length and the interval at which progress checks are
rolled by the same number. We recommend 4, making for 48-hour turns and once-
per-day progress checks, but you can set it as high as you think is reasonable.

• Require multiple basic computational actions to acquire a single insight, upgrade,
or forecast point. You can do this by tracking each one as its own process clock,
for which you mark one segment each time you complete the relevant basic action.
When all segments on the clock are marked, you successfully gain the chosen
benefit (see the sketch after this list). You could work on multiple different
insights or upgrades simultaneously by having multiple clocks in progress, but
you would still be limited to a single instance of each basic action at one time.
Other computational actions, such as special actions granted by certain upgrades
or actions that are a part of designing or modifying technologies, might be
similarly slowed at your discretion.
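
A minimal sketch of the clock-per-benefit bookkeeping from the last
bullet follows, in Python. The benefit names and segment counts are our
own example values, not canonical costs.

    # Each insight or upgrade gets its own clock; mark one segment per
    # completed basic action, and grant the benefit when the clock fills.
    clocks = {"insight: biochemistry": 3, "upgrade: Direct Interfacing": 4}
    marked = {name: 0 for name in clocks}

    def complete_basic_action(name):
        marked[name] += 1
        if marked[name] >= clocks[name]:
            print(f"{name} acquired!")

    complete_basic_action("insight: biochemistry")  # 1 of 3 segments marked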

T:\Supplements\123
\ Rule Variant \ False Confidence_

To avoid overcomplicating things, we assume that the AGI’s confidence in
a confidence check is equivalent to the actual probability of the expected
outcome. If you want to play a game where this is not the case, you might
split confidence into two values: one representing what the AGI believes,
and the other representing the true probability. During a confidence check,
the GM would first secretly set the probability of the expected outcome,
and then the players would state the AGI’s confidence.

For actions in which the AGI has the ability to directly affect the
outcome, for each confidence bonus granted to the players (by insights,
upgrades, or exploited characteristics), the players would get to choose
whether to improve the probability or the confidence. Improving the
probability would make the outcome more likely by 2%, 5%, or 10% as usual,
but the GM wouldn’t tell the players how much it was improved by. The GM
could also improve the probability by 0% and not tell the players if they
believe that the chosen bonus is not applicable.

Improving the confidence, meanwhile, would give the players additional
information. After all adjustments to the outcome’s probability, the GM
would adjust the confidence given by the players to be 4%, 10%, or 20%
(twice the typical bonus granted to confidence) closer to the probability.
If the confidence matched the probability exactly, the GM would inform the
players and stop adjusting it. Since the act of adjusting the confidence
would give the players information, they shouldn’t be able to retract
their choice or add additional confidence modifiers after seeing the
results. They could, however, choose not to perform an action after seeing
its true confidence.
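
The GM-side adjustment can be expressed as a small helper. In this Python
sketch, the step is twice the usual bonus (4, 10, or 20 points), moving
the stated confidence toward the true probability and stopping exactly
when they match; the function shape is our own.

    def adjust_confidence(stated, true_probability, step):
        if stated < true_probability:
            adjusted = min(stated + step, true_probability)
        else:
            adjusted = max(stated - step, true_probability)
        matched = adjusted == true_probability
        return adjusted, matched  # if matched, tell the players and stop

    # A stated 60% against a hidden 45% with a 20-point step lands on 45%.
    print(adjust_confidence(60, 45, 20))  # -> (45, True)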

For events where the AGI is unable to directly affect the outcome, the
AGI would be limited to solely improving the roll’s confidence.

This rule would make actions much more uncertain, forcing the players to
choose between information and effectiveness. It would make it very unwise
to take an action at all without assigning at least one insight to
improving the AGI’s confidence. It would also allow the GM to deceive the
players about an expected outcome being possible at all, by setting the
probability to 0% and leaving it there regardless of any bonuses assigned
to it.

T:\124
Supplements_
>>Glossary_

• Agent — Something that acts to change the world according to its own goals. Humans,
animals, simple machines, and AI are all examples of agents.
• Artificial General Intelligence (AGI) — An artificial agent capable of learning and
reasoning about any subject or task. In this game, the players collectively control
a hypothetical AGI. Humans have not yet created AGI in real life.
• Anticipate — The basic action used by the AGI to acquire new forecast points.
• Basic Action — A specific type of computational action outlined in the rulebook,
which provides the AGI with a key game resource. The three basic actions are research,
improve, and anticipate.
• Basic Cognition Cost — The portion of recurring compute cost that represents the
AGI’s basic thought processes and awareness of its environment.
• Broad Insight — An insight which covers a comprehensive field of knowledge with many
sub-fields. More computationally expensive to acquire than a narrow insight.
• Campaign — A continuous story made up of multiple connected sessions, typically with
a single game master and multiple players who attend every session.
• Characteristic — A piece of information about an agent that can be exploited to more
effectively manipulate them. The three types of characteristics are based on the
three strategies of manipulation: trust, leverage, and emotion.
• Clock — See “Process Clock”.
• Completion — A number representing the progress of a computational action. This is
typically equal to the amount of compute invested in the action, but in some cases
additional completion can be gained by other means.
• Completion Stop — A quantity of completion tied to a specific computational action.
When an action’s completion reaches a completion stop, it cannot be raised until the
stop is resolved.
• Computational Action — A game action in which the AGI performs a task or operation
that requires processing power. Players complete computational actions by assigning
a certain quantity of compute, determined by the action’s compute requirement.
• Computational Scale — See “Scale”.
• Compute — A game resource representing computational power controlled by the AGI.
Powerful computer hardware provides a supply of compute that replenishes at the
start of every turn.
• Compute Requirement — The amount of compute needed to complete a computational action.
When a computational action’s completion equals or exceeds its compute requirement,
the action is complete.
• Confidence — The perceived likelihood of the AGI’s expected outcome occurring in a
confidence check, expressed as a percent chance.

T:\Supplements\125
• Confidence Check — The game’s core resolution mechanic. When the outcome of an
important action or event is in doubt, it will almost always be resolved using a
confidence check.
• Emotion — One of the three categories of manipulation. Emotion-based strategies
involve making an agent feel something. Emotion characteristics describe what
influences an agent’s emotions and how their emotions influence their behaviour.
• Evaluation Step — The first half of a confidence check, in which the expected outcome,
confidence, and risk die size are determined.
• Expected Outcome — The outcome that the AGI and players think is most desirable or
most likely to result from a confidence check. The expected outcome occurs if the
percentile dice roll a number that is equal to or lower than the confidence.
• Extended Persuasion — An extended game action in which the AGI makes multiple different
attempts to manipulate or convince an agent. Used when an agent is especially
resistant to the AGI’s goal.
• Finalise — A way for the players to communicate unambiguously to the game master
that they wish to take a game action. Once an action has been finalised, the players
can’t take it back unless they use a forecast.
• Forecast — A game action in which the players undo an event or consequence, reframing
it as having been a prediction made by the AGI.
• Forecast Point — A game resource representing the AGI’s ability to predict its
environment. Spent to perform a forecast.
• Forecast Upkeep — The portion of recurring compute cost that represents the work
required to keep forecast points constantly accurate as time progresses.
• Game Master (GM) — The game participant responsible for facilitating gameplay and
portraying the fictional game world.
• Hidden Risk — A secret modifier that applies to the result of the risk die in an
unexpected outcome. Used when the game master knows about a significant disadvantage
that the AGI and players aren’t aware of.
• Improve — The basic action used by the AGI to acquire new theory upgrades.
• Incognizant Processing — A game action in which the AGI disables its basic cognition
cost for one or more turns.
• Insight — A domain of knowledge with which the AGI has familiarity equivalent to a
human expert.
• Instrumental Goal — An objective that an agent only values because completing it will
further its other goals.
• Knowledge Check — A secondary resolution mechanic used when it is uncertain whether
the AGI knows or learns something. Uses a risk die, but not percentile dice.
• Leverage — One of the three categories of manipulation. Leverage-based strategies
involve making an agent want something. Leverage characteristics describe what an
agent values.
• Linguistic Insight — A special type of insight pertaining to a language or dialect.
The AGI must have a language or dialect as a linguistic insight to be fluent in it.
• Log — Verb for adding something to the project log. Though it is the primary
responsibility of the logkeeper, any player is free to log things as they see fit.

T:\126
• Logkeeper — A player assigned the duty of ensuring important actions and details are
recorded in the project log.
• Long Mode — A fast-paced mode of play in which many in-game hours take place in only
a few real-world minutes. Turns and computational actions are key features of long
mode.
• Major Scale — The medium scale of compute. Primarily involves quantities between one
hundred and one thousand.
• Mastered Insight — A special type of insight representing a far deeper degree of
familiarity than an ordinary insight.
• Mastery — See “Mastered Insight”.
• Minor Scale — The smallest scale of compute. Primarily involves quantities between
ten and one hundred.
• Myriad Scale — The largest scale of compute. Primarily involves quantities between
ten thousand and one hundred thousand.
• Narrow Insight — An insight which covers a limited field of knowledge, typically a
sub-field of a larger one. Less computationally expensive to acquire than a broad
insight.
• Percentile Dice — Two ten-sided dice, one marked with ones digits and the other with
tens digits. Used in confidence checks to generate a random number between 1 and
100.
• Player — A game participant responsible for collaborating with other players to
decide the actions of the AGI.
• Process Clock — A tool used by the game master to abstractly track ongoing processes
in the game. Sometimes accompanied by a progress check.
• Progress Check — A tool used by the game master to regularly advance one or more
process clocks in the background of the game. Progress checks are rolled at the end
of every turn.
• Project Log — A tool used by the players to keep track of plans, past actions, and
important events.
• Receptivity — A game resource measuring how susceptible an agent is to the AGI’s
influence. Used in extended persuasions.
• Recurring Compute Cost — A quantity of compute that is subtracted from the AGI’s total
compute at the beginning of each turn.
• Required Compute — See “Compute Requirement”.
• Research — The basic action used by the AGI to acquire new insights.
• Resolution Step — The second half of a confidence check, in which dice are rolled
and the outcome is determined.
• Risk Die — A die used in confidence checks and knowledge checks to determine the
quality of the outcome or information gathered.
• Risk Die Size — The number of sides a risk die has, which can be two, four, six,
eight, ten, or twelve. Larger sizes have a greater risk of results that are bad for
the players.

T:\Supplements\127
• Scale — A measurement of how complex and advanced a particular operation, event,
group, tool, or piece of hardware is. The three scales are minor, major, and myriad.
• Session — A single continuous period of play, typically lasting a few hours.
• Short Mode — A slow-paced mode of play in which pivotal moments are played out in
great detail. Confidence checks and forecasts are key features of short mode.
• Specialised Theory — See “Theory Specialisation”.
• Stop — See “Completion Stop”.
• Technological Insight — A special type of insight pertaining to a field of scientific
knowledge. The AGI must have a scientific field as a technological insight to design
or modify technology based on that field’s knowledge.
• Terminal Goal — An objective that an agent values intrinsically.
• Theory — A broad grouping of skills used by the AGI to interpret and interface with
the world. There are eight theories.
• Theory Specialisation — A theory that the AGI is designed for or uniquely skilled in.
Most AGIs are specialised in three, four, or five theories. Each player controls one
specialisation, and can more easily learn upgrades associated with that theory and
its neighbours on the theory wheel.
• Theory Upgrade — A special ability associated with one of the eight theories. Learned
upgrades are attached to one of the AGI’s specialisations, but are available for use
by all players. There are 80 theory upgrades that can be learned.
• Theory Wheel — An arrangement of the eight theories around a wheel. Each theory is
connected to the two neighbours to its left and right.
• Thorough Understanding — The AGI has a thorough understanding of an agent when it
knows every one of that agent’s characteristics. Some theory upgrades require a
thorough understanding to function.
• Tier — See “Upgrade Tier”.
• Trust — One of the three categories of manipulation. Trust-based strategies involve
making an agent believe something. Trust characteristics describe what an agent
believes or is likely to believe.
• Turn — The abstracted unit of time used in long mode. Each turn, the AGI’s compute
is refilled and progress checks are rolled.
• Turn Length — How long each turn is. Varies based on the scenario and the AGI’s
circumstances and theory upgrades. The suggested default turn length is twelve hours.
• Unexpected Outcome — Any outcome that is not the expected outcome in a confidence
check. The details of an unexpected outcome are informed by the result of the risk
die.
• Upgrade — See “Theory Upgrade”.
• Upgrade Tier — How advanced a particular theory upgrade is. There are four tiers.
The higher the tier, the more advanced the ability is, and the more difficult it is
to learn.
• Wheel — See “Theory Wheel”.

T:\128
Supplements_
>>Afterword_

Ultimately, if something at all similar to the stories The Treacherous
Turn depicts plays out in the real world, it will be the result of
negligence. As with many technological disasters in history, it will come
about because of the incentives for people in leadership positions to
rush progress before it's ready and disregard inconvenient safety
measures. However, the threats posed by AGI are potentially far greater
than those of any rushed bridge, dam, or rocket that has failed in the
past.

The pressure to be the first to cross the finish line with truly
intelligent AI is immense. Realistically, if a corporation or government
believes that they can get away with pushing out a not-quite-safe AGI,
they will almost certainly capitalise on that opportunity. For all of our
sakes, we hope that AI researchers have enough foresight to cushion the
road after that finish line before the racers who cross it crash and
burn. If they crash, they won’t be the only ones who are hurt —
everyone in the stadium will be at risk.

That's why we made this game to spread awareness: if enough people
understand the risks, those who find themselves in the position of
creating or deploying an unsafe AGI might not think that they can get
away with it. If enough people care about AI safety, we might be able to
pad out that finishing stretch before anyone gets there. Though this game
functions primarily as entertainment, we hope you will also remember the
things you learned from it if you ever find yourself with the opportunity
to support the field of AI safety.

Thank you for playing.

T:\Supplements\129
Supplements_
>>Acknowledgements_

This game was brought to you by the following people:


• Aemilia Dixon • Berbank Green
• Cassandra Grotenhuis • Changbai Li
• Christian Trout • Eugene Lin
• iris holloway • Jan Dornig
• Karl von Wendt • Lillie Hale
• TJ Smith

We are grateful to the creators of the games that inspired TTT and
informed its design. We have strived to live up to the high standards
set by our predecessors, and we aspire for TTT to be a worthy addition
to the canon of great games, one that provides players with an innovative
and entertaining experience.
• Blades in the Dark, by John Harper
• Lancer, by Massif Press
• Microscope, by Lame Mage Productions
• Ironsworn, by Shawn Tomkin
We deeply appreciate the contributions of our playtesters to the
development of TTT. Their commitment, feedback, and attention to detail
have been instrumental in refining our game to its highest standards. We
are honoured to have worked with such a dedicated group of individuals
who have helped us achieve excellence.
• AK Ashton • August MacDonald
• Casimir 'Odd' Flythe • Fari
• Haley • Indra Gesink
• Jarren Jennings • jyolyu
• Kit • Magus
• Miranda Mels • Oliver Hickman
• Omni • Sophie Little
• SylkWeaver • Tassilo Neubauer
• Timothy Kokotajlo • Trapdoorspyder

We would also like to extend our thanks to the organisers, mentors, and
promoters of AI Safety Camp 2022, without whom this project would never
have begun.
• Adam Shimi • Daniel Kokotajlo
• Kristi Uustalu • Remmelt Ellen
• Robert Miles • Sai Joseph

<3

T:\130
