Project 2-1
Analysis of AI approaches to playing Dots and Boxes
Lindalee Conradie
Husam Abdelqader
Jonathon Bird
Haoran Luan
Kiril Tikhonov
Abstract—At first glance, applying AI techniques to games may seem frivolous, but only to the uninformed. Games are an important subject of AI research, and the field itself benefits greatly from the development of games research. It is expected that the application of artificial intelligence to games will provide new approaches that can be transferred to real-world applications. This paper presents an investigation into AI techniques in the context of the classic board game Dots and Boxes. A simple heuristic approach, Monte Carlo Tree Search, Q-learning, and MiniMax with alpha-beta pruning were developed, and their performance was compared.

I. INTRODUCTION

The use of technology is becoming more predominant with every discovery and piece of research being made. Financial institutions, legal institutions, media companies, and insurance companies are all figuring out ways to use artificial intelligence to their advantage. From fraud detection to writing news stories with natural language processing and reviewing law briefs, artificial intelligence's reach is extensive. The application of AI techniques has led to major improvements and developments in computer science, mechanical engineering, medical diagnostics, portable technology, and marketing.

But what does all this have to do with applying AI techniques to games? Just as in games, real-world problems consist of a goal state and many options/solutions to get there. This also includes having a number of variables that may influence our decisions. Using AI techniques in games is a way for us to evaluate how a technique makes decisions and acts in different circumstances. It provides measurable results for the further development of these methods.

While some research has been done on solving the Dots and Boxes game [1][2], we decided to take a different perspective and test how well different approaches can handle a game like Dots and Boxes. We can furthermore assess which technique performs better with which games and under which conditions.

In this report, we will look at how the AI techniques we have studied and implemented perform when playing the game of Dots and Boxes. Dots and Boxes is a combinatorial game popular among children and adults around the world. Despite the apparent simplicity of the game's rules, the game admits many possible strategies. Even though it is a finite game, the scalable size of the board and of the set of possible moves makes it complex to evaluate all the possible moves. Due to this, we will evaluate each implemented technique and its average winning rate against the other techniques, and answer the following questions: Which approach is most effective overall? Which algorithm performs best against a basic strategy in a reasonable processing time (less than 100 milliseconds)? How much better is the advanced strategy compared to the basic strategy when considering only the next step? How does the node limit on the MiniMax algorithm affect its processing time and win rates on different board sizes? How does the number of simulations in MCTS affect processing time and win rate on different boards?

First, we briefly discuss some terminology, rules, and common strategies for playing the game of Dots and Boxes. This is followed by a brief overview of how the game was implemented. Further on, the different AI techniques are discussed: a brief overview of each technique and a description of its implementation, followed by a complexity analysis. The experiments performed are then presented, followed by a discussion of the results of these experiments. Finally, a conclusion is drawn from these discussions.
II. DOTS AND BOXES GAME

Dots and Boxes [3] is a simple pen and pencil game for two players (sometimes more), first published in the 19th century by the French mathematician Édouard Lucas. The game starts with an empty grid of dots. Players then take turns adding either a single horizontal or vertical line between two unjoined adjacent dots. Once a player completes the fourth side of a 1 × 1 box, he earns a point and is then obliged to take another turn. The game is finished when it is not possible to add any more lines to the grid. The player with the most points is then the winner of the game. Furthermore, the board may be a grid of any size.

Dots and Boxes is similar to other board games in the sense that it is impartial. This means that the current score and which player made which move do not affect the further possible moves. Furthermore, Dots and Boxes is a zero-sum game: since there is a finite number of points available, each point that one player gets is a point that the other player will not be able to gain. In addition to being impartial, the game is also fully observable.

Fig. 1: Player one turn. Fig. 2: Player two turn.
Fig. 3: Box filled by player two. Fig. 4: Game results.

Black edges are ones that have already been placed by players. Furthermore, when hovering over a possible edge placement, it is highlighted with the color of the player. Once a player completes a box, his initials or personal score are shown inside the box in the color of the player who completed it. At the end of the game, the winning player is shown on the display according to how many games he won out of the total number of games played. The two players are then given the opportunity to play again, and finally the score of the game is displayed.
A. Terminology

The following is a list wherein the most important terms and definitions used are explained:
• Dot: a single point on the board to which a line can connect;
• Line: an edge between two adjacent dots;
• Box: the area on the board that is enclosed by four lines, which adds a point for the player that placed the final line;
• N x M board: a board that is N dots tall and M dots wide;
• Valence: the number of empty lines a box has; this is between 0 and 4 inclusive;
• Chain: a sequence of boxes with valence 2, where every empty line is part of two adjacent boxes, except for the two outer-most empty lines. A chain containing three or more boxes is called a long chain;
• Cycle: a chain whose ends meet up, forming a closed circle.

B. Strategy

The common strategies for playing Dots and Boxes include taking boxes where possible whilst also avoiding drawing the third edge on a box, which would let the opponent take the box. Because most players avoid drawing the third edge, most games come down to drawing two lines per box until it is absolutely necessary to draw the third. Once this happens, a chain is created in which the opponent can complete nearly all of the boxes.

The following is a description of some fundamental strategies involved in Dots and Boxes [4].

Double-dealing move: leave the opponent with two boxes of valence 3, which he takes by making a single move, called a double-crossed move. The point of declining the last two boxes in the chain is that the opponent is forced to open up the next chain regardless of whether he takes the two offered boxes. The double-dealing move for cycles is to split the last four boxes in half, forming two chains.

Whoever can force their opponent to be the first one to play in a long chain is said to have control. If you have control, you can maintain it by declining the last two boxes of every long chain except the last (you should take all of the last chain). If there are long enough chains around, then you win by getting and maintaining control up to the end.

Getting Control: The long chain rule. The chain rule tells you how many chains you should make to force your opponent to open the first long chain or cycle (a minimal code reading of this rule is sketched at the end of this subsection):
• If there is an odd total number of dots, then the first player should make an odd number of chains and the second player an even number of chains.
• If there is an even total number of dots, then the first player should make an even number of chains and the second player an odd number of chains.
For the purposes of the long chain rule, cycles do not count as long chains.

Control and Chain/Cycle Length. When you have control, you need to have chains that are long enough to overcome the cost of maintaining control. Hence, the player who is going to get control should try to make the chains as long as possible and try to avoid cycles (especially quads). If the cost of maintaining control is more than the number of boxes you are going to get, then at some point it pays to relinquish control by taking all of a chain or cycle. Conversely, if you are going to lose control, then you should try to keep the chains as short as possible and try to create a cycle, particularly a quad [?].
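Read as pseudocode, the long chain rule above compresses to a single parity check. The following is a minimal sketch; the class and method names are illustrative assumptions, not code from the paper:

    /** Long chain rule sketch: does this player want an ODD number of long
     *  chains? player == 1 means the first player. Cycles are not counted,
     *  as the rule prescribes. Illustrative only. */
    final class ChainRule {
        public static boolean wantsOddLongChainCount(int totalDots, int player) {
            boolean oddDots = (totalDots % 2) == 1;
            // Odd dot count: first player wants odd, second wants even; flipped otherwise.
            return (player == 1) == oddDots;
        }
    }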
III. GAME IMPLEMENTATION

The game is represented by a graph data structure: the dots are vertices and the lines are edges. The state of the game is stored as an adjacency matrix.
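To make this representation concrete, the sketch below shows one way the adjacency-matrix state could look in Java. The class and method names (Board, addLine, boxComplete) are illustrative assumptions, not the paper's actual code:

    /** Minimal sketch of the adjacency-matrix game state described above. */
    public class Board {
        private final int rows, cols;        // dots tall, dots wide (N x M board)
        private final boolean[][] adjacency; // adjacency[i][j] == true if line i-j is drawn

        public Board(int rows, int cols) {
            this.rows = rows;
            this.cols = cols;
            this.adjacency = new boolean[rows * cols][rows * cols];
        }

        /** Dots are vertices, indexed row-major; lines are edges in the matrix. */
        private int dot(int r, int c) { return r * cols + c; }

        /** Draws the line between two adjacent dots (the matrix is symmetric). */
        public void addLine(int dotA, int dotB) {
            adjacency[dotA][dotB] = true;
            adjacency[dotB][dotA] = true;
        }

        /** True if all four sides of the 1x1 box with top-left dot (r, c) are drawn. */
        public boolean boxComplete(int r, int c) {
            int tl = dot(r, c), tr = dot(r, c + 1);
            int bl = dot(r + 1, c), br = dot(r + 1, c + 1);
            return adjacency[tl][tr] && adjacency[bl][br]
                && adjacency[tl][bl] && adjacency[tr][br];
        }
    }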
Space complexity: O(bd) (based on depth-first traversal), where b is the number of legal moves at each node and d is the depth of the search.
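The MiniMax section itself is not reproduced on the preceding pages, but as a reference point for the O(bd) figure, a generic alpha-beta sketch looks as follows. The GameState interface is a hypothetical stand-in, the Dots and Boxes extra-turn rule (a completed box keeps the turn) is deliberately omitted for brevity, and the paper's version additionally enforces a node limit via its adaptive depth system:

    import java.util.List;

    /** Generic MiniMax with alpha-beta pruning; an illustrative sketch,
     *  not the authors' implementation. */
    interface GameState {
        List<Integer> legalMoves();   // edges that can still be drawn
        GameState play(int move);     // successor state after drawing the edge
        boolean isTerminal();
        int evaluate();               // box lead from the maximizing player's view
    }

    final class AlphaBeta {
        static int search(GameState s, int depth, int alpha, int beta, boolean maximizing) {
            if (depth == 0 || s.isTerminal()) return s.evaluate();
            int best = maximizing ? Integer.MIN_VALUE : Integer.MAX_VALUE;
            for (int move : s.legalMoves()) {
                int value = search(s.play(move), depth - 1, alpha, beta, !maximizing);
                if (maximizing) { best = Math.max(best, value); alpha = Math.max(alpha, best); }
                else            { best = Math.min(best, value); beta  = Math.min(beta,  best); }
                if (beta <= alpha) break;  // prune: this branch cannot affect the result
            }
            return best;
        }
    }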
C. Monte Carlo tree search

1) Overview: The basic idea of MCTS [5] [6] is to approximate solutions that are difficult to calculate by collecting data from simulated samples and then averaging the result over the number of simulations. The advantage of MCTS is that the only information it requires is how the state changes when a move is made. Because the simulations are played until the end of the game, the result of the game is used to update the values in the previous states; therefore, the search does not need to evaluate the intermediate states in the game.

MCTS builds a portion of the game tree using the simulated games, where each node contains the number of times that node was reached, the number of times an action was chosen, and the total reward for each action over all the simulations.

In the game, at each move that it needs to make, the MCTS player performs simulations which start at the current state of the game. Each simulation consists of selection, expansion, playout, and then backpropagation. In the selection stage, the tree is traversed using a decision rule at each node. The expansion stage happens when an action is selected that does not have a coinciding node in the tree; this is where new nodes are added to the tree. Then, during the playout stage, the simulation continues until a terminal state is reached. After the game is finished, the values for the nodes in the selection stage are updated based on the result of the game.

2) Implementation: We implemented the traditional version of MCTS (selection, expansion, simulation, and back propagation), with each node of the tree being a representation of the state of the game at that step, and with the following variables stored at each node:
• a variable that stores the number of times this node has been visited;
• a variable that stores the number of times this node has led to a winning leaf;
• a State object that stores the current status of the board at that node;
• a reference to the parent of the node;
• a list containing all the children of this node.
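Taken together, these bullets translate almost one-to-one into a node type. A minimal sketch, generic over the state type since the paper's State class is not shown:

    import java.util.ArrayList;
    import java.util.List;

    /** MCTS tree node carrying exactly the per-node variables listed above.
     *  A sketch: S stands in for the paper's board-state object. */
    final class Node<S> {
        int visits;                      // times this node has been visited
        int wins;                        // times this node led to a winning leaf
        final S state;                   // status of the board at this node
        final Node<S> parent;            // reference to the parent node
        final List<Node<S>> children = new ArrayList<>(); // all children of this node

        Node(S state, Node<S> parent) {
            this.state = state;
            this.parent = parent;
        }
    }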
When the tree is created, it is given the initial state of the board as the root node, which is expanded to create its children. Afterwards, on each MCTS turn the tree is fed the current state of the board; it locates it in the tree, or creates the node if it has not yet been created, and sets it as the root in order to start the MCTS steps:
1) Selection: This is the first step in the process of evaluating our next best possible move. The tree starts by selecting a random child of the root node and makes it the current node. It keeps choosing a random child of the current node until it hits a leaf node or a node whose children have not yet been generated; in the latter case it calls on the expansion, continuing until it reaches a leaf node, after which it calls on the simulation.
2) Expansion: If the tree chooses a node whose children have not yet been generated, it generates the children for this node and adds them to the tree.
3) Simulation: Once the tree has reached a leaf node which is a final game state, it evaluates who the winner is in this game and feeds this result to the back propagation.
4) Back Propagation: Depending on whether the leaf node the tree encountered is a winning state or not, it backtracks through the tree, going through every node the tree visited to reach this leaf node, until it reaches the root of the tree. For each of the nodes it visits along the way, it updates their inner variables, which represent the number of times the node has been visited and the number of times it led to a win.

The process explained above simulates one possible way a game could proceed and end. To give a better evaluation, the tree simulates a number of games given at initialization; the higher this number, the better the evaluation, but more simulations lead to more processing time.

Lastly, when the tree has simulated the number of games it has been asked to, it evaluates which of the current root's children is the best option for us to win the game. It does this by using the following evaluation function [7]:

$\frac{w_i}{n_i} + 2\sqrt{\frac{\ln N_i}{n_i}}$

• $w_i$: the number of times this node led to a win;
• $n_i$: the total number of simulations run by the tree so far (this is given by MCTS);
• $N_i$: the number of times this node has been visited.
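As a concrete reading of this evaluation function, the selection below scores each root child and keeps the best. Note that in the standard UCB1 formulation, $n_i$ is the child's visit count and $N_i$ the parent's; the sketch follows that standard reading, uses the constant 2 from the formula above, and builds on the Node sketch from earlier in this subsection. It is an illustration, not the authors' exact code:

    final class Uct {
        /** Picks the child maximizing w_i/n_i + 2 * sqrt(ln N / n_i), where n_i is
         *  the child's visit count and N the parent's (standard UCB1 reading). */
        static <S> Node<S> bestChild(Node<S> parent) {
            Node<S> best = null;
            double bestValue = Double.NEGATIVE_INFINITY;
            for (Node<S> child : parent.children) {
                double value = (child.visits == 0)
                        ? Double.POSITIVE_INFINITY  // explore unvisited children first
                        : (double) child.wins / child.visits
                          + 2.0 * Math.sqrt(Math.log(parent.visits) / child.visits);
                if (value > bestValue) {
                    bestValue = value;
                    best = child;
                }
            }
            return best;
        }
    }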
Furthermore, Dots and Boxes has a game tree of size n! (where n is the number of edges); therefore, while MCTS is exploring and simulating different playouts of the game, it can consume a huge chunk of the available memory. To minimize this, and to make the algorithm more efficient as the game moves forward, once the MCTS decides on a best move, all of the other branches are deleted from the game tree. This reduces the tree size, making the algorithm more time and memory efficient.

3) Complexity Analysis: Time complexity: the runtime of our algorithm can be computed as O(mkI/C). Here, m is the number of random children that are considered per search, k is the number of parallel searches, I is the number of iterations that have been performed, and C is the number of cores.

Space complexity: O(mk).
D. Deep Q learning

Overview: Q-learning is a reinforcement learning algorithm that learns the 'quality' of each action at each particular state. Given infinite time, Q-learning will find an optimal policy for any finite Markov Decision Process by maximizing the expected value of the total reward.

Q-learning is applicable to Dots and Boxes, as the game can be represented as a Markov chain. It is discrete-time, as each turn is a separate point in time; it is stochastic, as player inputs mean it is not fully deterministic; and only the current state matters for choosing actions. With the addition of actions and rewards, this becomes a Markov Decision Process. The actions are the edges.
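For reference, the update rule under which tabular Q-learning converges to this optimal policy is the standard one below, with learning rate $\alpha$ and discount factor $\gamma$; this is textbook background [5], not a formula stated in the paper:

$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right]$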
1) QLearning Neural Network: The Q function can be approximated by a neural network. The deeplearning4j (dl4j) library was used for the neural network. The implementation involved setting up the Markov Decision Process environment and adjusting the hyper-parameters of the network.

The neural network has a state and an observation space. The observation space is the part of the state that the neural network can see. In this implementation the observation space was the adjacency matrix, while the state included all the information needed for the methods to run, like playerScore or whose turn it is. This is because only the matrix matters for determining the best move to play, not the score or other information.

The network could not be set up to train against itself, so it trains against BaseBot. BaseBot makes sense as an opponent, as it will always punish the bot for setting up boxes, and the bot cannot rely on it making mistakes and setting up boxes for it. This is the reward system implemented:

Event   Reward
Win     50
Draw    0
Loss    -10

The neural network has been trained on a 3x3 board as player 1, since you can simulate the largest number of games on the smallest board size, and a large number of games is needed to train a reinforcement learning network.
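As a sketch of how the observation and the reward table above could fit together, the snippet below flattens the adjacency matrix into the network's input vector and assigns the terminal reward from the table. The class shape and names are assumptions for illustration; the actual implementation goes through deeplearning4j's MDP interfaces, which are not reproduced here:

    /** Illustrative environment fragment; not the dl4j interfaces actually used. */
    final class DotsAndBoxesEnv {
        static final double REWARD_WIN = 50, REWARD_DRAW = 0, REWARD_LOSS = -10;

        /** Flattens the boolean adjacency matrix into the network's input:
         *  the observation is the matrix and nothing else, as described above. */
        static double[] observe(boolean[][] adjacency) {
            int n = adjacency.length;
            double[] obs = new double[n * n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    obs[i * n + j] = adjacency[i][j] ? 1.0 : 0.0;
            return obs;
        }

        /** Terminal reward per the table: +50 win, 0 draw, -10 loss. */
        static double terminalReward(int myScore, int opponentScore) {
            if (myScore > opponentScore) return REWARD_WIN;
            if (myScore < opponentScore) return REWARD_LOSS;
            return REWARD_DRAW;
        }
    }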
E. BaseBot+: Rule-based

We also decided to improve our BaseBot with the domain-specific strategies used for the MiniMax algorithm, and thus create a rule-based bot based on it. Instead of searching through all possible game states and finding the best set of moves, it uses (strategic) rules to choose its next move. In this way it may represent a more expert player who is familiar with these rules.
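To illustrate the idea, here is a sketch of such a rule-based move choice, applying the Section II-B rules in priority order. The RuleBoard interface and its predicates are hypothetical, and the chain-parity and double-dealing rules that BaseBot+ also applies (see Section V-E) are omitted for brevity:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    /** Rule-based move choice in the spirit of BaseBot+: complete a box when
     *  possible, otherwise avoid creating a valence-3 box, otherwise play
     *  randomly. Illustrative only. */
    interface RuleBoard {
        List<Integer> legalMoves();
        boolean completesBox(int move);       // this edge finishes at least one box
        boolean createsValence3Box(int move); // this edge leaves a box one line short
    }

    final class RuleBasedBot {
        private final Random random = new Random();

        int chooseMove(RuleBoard board) {
            List<Integer> moves = board.legalMoves();
            // Rule 1: take a box whenever possible (a point plus another turn).
            for (int move : moves)
                if (board.completesBox(move)) return move;
            // Rule 2: prefer edges that do not hand the opponent a box.
            List<Integer> safe = new ArrayList<>();
            for (int move : moves)
                if (!board.createsValence3Box(move)) safe.add(move);
            List<Integer> pool = safe.isEmpty() ? moves : safe;
            // Rule 3: break ties randomly among the remaining candidates.
            return pool.get(random.nextInt(pool.size()));
        }
    }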
V. EXPERIMENTS

We conducted a number of experiments on Dots and Boxes to quantify the strength of the implemented AI techniques. With our experiments we want to examine how well the algorithms can play against purely random moves (RandomBot), casual players (BaseBot), and short-sighted advanced players (BaseBot+). We also compare the performance of the algorithms in matches against each other.

A. MCTS Experiments

Settings: Bots play 500 games, then switch places and play another 500 games. The percentages of games won or tied are shown in the tables. We also recorded the processing time for each move. Experiments were performed on the following board sizes: 3x3, 3x4, 4x4.
• Experiments with different numbers of simulations are run in order to determine how the number of simulations affects the win rate and whether the running-time trade-off is feasible. This is done using the following board sizes: 3 × 3, 3 × 4, 4 × 4, 5 × 5, and the following numbers of simulations: 100, 1000, 10000, 100000. Each experiment was run for 100 games.

B. MiniMax vs BaseBot

MiniMax is tested on how it fares against the basic strategy on board sizes ranging from 3x3 to 6x6, and on how limiting the maximum number of nodes (see the adaptive depth system) affects MiniMax's performance.
Settings: For 3x3 to 4x4 board sizes, 1000 games are played testing the different maximum node values from 10,000 to 100,000,000. After that, 100 games are played testing node expansions from 10,000 to 10,000,000 due to processing time constraints. The results and processing time are recorded, and the average processing time for MiniMax is shown. For more detailed statistics on processing time, see the appendix. BaseBot's processing time is always 0 ms.

C. BaseBot+ vs BaseBot

These experiments test how a short-sighted player (having only the next move in mind) using an advanced strategy (BaseBot+) compares against one using a basic strategy (BaseBot).
Settings: 100 games are played on the 3x3, 4x4, and 5x5 board sizes; the results are shown, and the average processing time for BaseBot+ is shown. For more detailed statistics on processing time, see the appendix.

D. MiniMax vs MCTS

These experiments test how MCTS fares against MiniMax with varying maximum node limits for MiniMax and numbers of simulations for MCTS.
Settings: The experiments are played on the 3x3, 4x4, and 5x5 board sizes, and the results and average processing time for both bots are recorded. For more detailed statistics on processing time, see the appendix.

E. MiniMax vs BaseBot+

MiniMax is tested against a short-sighted opponent that also takes into account chain parity and double-dealing, in addition to utilizing randomness, with varying maximum node limits for MiniMax.
Settings: 100 games are played for each experiment. The 100,000, 1,000,000 and 10,000,000 node limits are tested.

F. Neural Network Experiments

These experiments test how the neural network performs against RandomBot, BaseBot, and MiniMax on a 3x3 board as player 1, as that was the only setting it was trained on. The results are shown as well as the average processing time. For more detailed statistics on processing time, see the appendix.
VI. RESULTS

A. MCTS results

1) Number of simulations: Below are the tables with the results of running the MCTS algorithm against BaseBot (Tables I to IV) and against RandomBot (Tables V to VIII).
TABLE I: Results of running different numbers of simulations on a 3x3 board for MCTS vs BaseBot

Number of Simulations   MCTS   DRAW   BASEBOT
100                        1      0        99
1000                       0      0       100
10 000                     0      0       100

TABLE II: Results of running different numbers of simulations on a 3x4 board for MCTS vs BaseBot

Number of Simulations   MCTS   DRAW   BASEBOT
100                        4     12        84
1000                       3     16        81
10 000                     0     14        86

TABLE III: Results of running different numbers of simulations on a 4x4 board for MCTS vs BaseBot

Number of Simulations   MCTS   DRAW   BASEBOT
100                        2      0        98
1000                       2      0        98
10 000                     0      0       100

TABLE IV: Results of running different numbers of simulations on a 5x5 board for MCTS vs BaseBot

Number of Simulations   MCTS   DRAW   BASEBOT
100                        0      0       100
1000                       0      0       100
10 000                     -      -         -

TABLE V: Results of running different numbers of simulations on a 3x3 board for MCTS vs RandomBot

Number of Simulations   MCTS   DRAW   RANDOMBOT
100                       40     40        20
1000                      51     27        22
10 000                    45     18        37

TABLE VI: Results of running different numbers of simulations on a 3x4 board for MCTS vs RandomBot

Number of Simulations   MCTS   DRAW   RANDOMBOT
100                       89      5         6
1000                      77     11        12
10 000                    93      3         4

TABLE VII: Results of running different numbers of simulations on a 4x4 board for MCTS vs RandomBot

Number of Simulations   MCTS   DRAW   RANDOMBOT
100                       93      7         0
1000                      93      7         0
10 000                    99      0         1

TABLE VIII: Results of running different numbers of simulations on a 5x5 board for MCTS vs RandomBot

Number of Simulations   MCTS   DRAW   RANDOMBOT
100                      100      0         0
1000                     100      0         0
10 000                     -      -         -
B. MiniMax vs BaseBot

VIII. CONCLUSION

The number of simulations for Monte Carlo has little effect on the results, but increasing the number of simulations increases the processing time. Increasing the maximum number of nodes for MiniMax increases win rates and processing time; however, the increase in win rates becomes less pronounced the smaller the board. BaseBot+'s advantage over BaseBot proves the advanced strategy is better even when only considering the next move. MiniMax with a node limit of 10,000,000 consistently performs in a reasonable time, with no average time exceeding 100 milliseconds even on the larger board sizes. That version proves to be the best against the basic strategy, consistently beating BaseBot by the largest margin. It also proves to be the best algorithm overall, consistently getting a higher win rate than its opponent, by a large margin.
LIST OF FIGURES

1 Player one turn.
2 Player two turn.
3 Box filled by player two.
4 Game results.

LIST OF TABLES

I Results of running different numbers of simulations on a 3x3 board for MCTS vs BaseBot
II Results of running different numbers of simulations on a 3x4 board for MCTS vs BaseBot
III Results of running different numbers of simulations on a 4x4 board for MCTS vs BaseBot
IV Results of running different numbers of simulations on a 5x5 board for MCTS vs BaseBot
V Results of running different numbers of simulations on a 3x3 board for MCTS vs RandomBot
VI Results of running different numbers of simulations on a 3x4 board for MCTS vs RandomBot
VII Results of running different numbers of simulations on a 4x4 board for MCTS vs RandomBot
VIII Results of running different numbers of simulations on a 5x5 board for MCTS vs RandomBot
IX Results of running different node limits on a 3x3 board for MiniMax vs BaseBot
X Results of running different node limits on a 3x4 board for MiniMax vs BaseBot
XI Results of running different node limits on a 3x3 board for MiniMax vs BaseBot
XII Results of running different node limits on a 3x3 board for MiniMax vs BaseBot
XIII Results of running different node limits on a 3x3 board for MiniMax vs BaseBot
XIV Results of BaseBot+ vs BaseBot
XV Results of running different node limits and different numbers of simulations on a 3x3 board for MiniMax vs MCTS
XVI Results of running different node limits and different numbers of simulations on a 4x4 board for MiniMax vs MCTS
XVII Results of running different node limits and different numbers of simulations on a 5x5 board for MiniMax vs MCTS
XVIII Results of running different node limits for MiniMax vs BaseBot+
XIX Results of the Neural Network

REFERENCES

[1] Joseph K. Barker and Richard E. Korf. Solving dots-and-boxes. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, AAAI'12, pages 414–419. AAAI Press, 2012.
[2] Daniel Allcock. Best play in dots and boxes endgames, 2019.
[3] S. Li, Y. Zhang, M. Ding, and P. Dai. Research on integrated computer game algorithm for dots and boxes. The Journal of Engineering, 2020(13):601–606, 2020.
[4] E. R. Berlekamp. The Dots and Boxes Game: Sophisticated Child's Play. CRC Press, 2000.
[5] Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall Press, USA, 3rd edition, 2009.
[6] C. B. Browne, E. Powley, D. Whitehouse, S. M. Lucas, P. I. Cowling, P. Rohlfshagen, S. Tavener, D. Perez, S. Samothrakis, and S. Colton. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1–43, 2012.
[7] Karol Walędzik and Jacek Mańdziuk. Multigame playing by means of UCT enhanced with automatically generated evaluation functions. In Jürgen Schmidhuber, Kristinn R. Thórisson, and Moshe Looks, editors, Artificial General Intelligence, pages 327–332, Berlin, Heidelberg, 2011. Springer Berlin Heidelberg.
IX. APPENDIX

A. MiniMax vs MCTS results
3x3 board (results given as MiniMax WIN : DRAW : MCTS WIN; all times are per-move processing times in milliseconds)

MiniMax maximum nodes              10,000,000   1,000,000   100,000

P1: MiniMax, P2: MCTS (100 simulations)
Results                            0:100        0:100       0:100
MCTS     Median time (ms)          17           17          17
         Q1 (ms)                   16           16          16
         Q3 (ms)                   19           18          18
         Average time (ms)         18.05        17.54       17.83
         SD                        3.74         4.14        5.47
MiniMax  Median time (ms)          0            0           0
         Q1 (ms)                   0            0           0
         Q3 (ms)                   16           6           2
         Average time (ms)         29.28        3.96        2.79
         SD                        70.24        11.40       9.97

P1: MCTS, P2: MiniMax (100 simulations)
Results                            0:100        0:100       0:100
MCTS     Median time (ms)          17           17          17
         Q1 (ms)                   16           16          16
         Q3 (ms)                   18           18          19
         Average time (ms)         17.45        17.47       18.11
         SD                        3.68         4.12        5.89
MiniMax  Median time (ms)          0            0           0
         Q1 (ms)                   0            0           0
         Q3 (ms)                   67           2           2
         Average time (ms)         30.29        4.97        1.96
         SD                        61.71        14.95       11.44

P1: MiniMax, P2: MCTS (1000 simulations)
Results                            0:100        0:100       0:100
MCTS     Median time (ms)          29           29          31.5
         Q1 (ms)                   19.75        18.75       20
         Q3 (ms)                   46           42          45.25
         Average time (ms)         34.98        34.05       35.98
         SD                        27.44        39.94       31.30
MiniMax  Median time (ms)          0            0           0
         Q1 (ms)                   0            0           0
         Q3 (ms)                   13           6           2
         Average time (ms)         26.79        4.55        4.05
         SD                        70.53        15.05       16.02

P1: MCTS, P2: MiniMax (1000 simulations)
Results                            0:100        0:100       0:100
MCTS     Median time (ms)          26           30          28
         Q1 (ms)                   17           17          17
         Q3 (ms)                   46           53          50
         Average time (ms)         35.51        42.13       38.33
         SD                        31.53        35.59       31.96
MiniMax  Median time (ms)          0            0           0
         Q1 (ms)                   0            0           0
         Q3 (ms)                   68           4           2
         Average time (ms)         33.04        7.41        1.39
         SD                        76.39        19.82       4.28

P1: MiniMax, P2: MCTS (10000 simulations)
Results                            0:100        0:100       0:100
MCTS     Median time (ms)          109.5        129         150
         Q1 (ms)                   30.75        30.25       33
         Q3 (ms)                   394.5        456.25      443
         Average time (ms)         235.22       268.97      248.95
         SD                        253.80       295.10      254.19
MiniMax  Median time (ms)          0            0           0
         Q1 (ms)                   0            0           0
         Q3 (ms)                   15           8           2
         Average time (ms)         37.40        8.50        4.62
         SD                        92.26        24.40       16.53

P1: MCTS, P2: MiniMax (10000 simulations)
Results                            0:100        0:100       0:100
MCTS     Median time (ms)          119.5        139.5       164
         Q1 (ms)                   21           21          22
         Q3 (ms)                   430          488.25      565.25
         Average time (ms)         269.01       282.85      324.24
         SD                        301.62       315.33      349.70
MiniMax  Median time (ms)          0            0           0
         Q1 (ms)                   0            0           0
         Q3 (ms)                   92           3           3
         Average time (ms)         51.64        9.25        2.71
         SD                        115.35       27.15       12.39