
Subsumption vs. Perceptual-Control Architectures in Behaviour-Based Robotics

By Yasir AlGuwaifli

Course: MSc Advanced Software Engineering
Supervisor: Professor Roger Moore
University of Sheffield
Department of Computer Science

October 2013

This report is submitted in partial fulfilment of the requirement for the degree of MSc in
Advanced Software Engineering by Yasir AlGuwaifli.


All sentences or passages quoted in this report from other people's work have been
specifically acknowledged by clear cross-referencing to author, work and page(s). Any
illustrations which are not the work of the author of this report have been used with the
explicit permission of the originator and are specifically acknowledged. I understand that
failure to do this amounts to plagiarism and will be considered grounds for failure in this
project and the degree examination as a whole.

Name: Yasir AlGuwaifli
Signature: ______________
Date: 31 October 2013


Abstract

This report compares and contrasts (Hierarchical) Perceptual Control Theory with Subsumption, seeking the weaknesses and strengths of each paradigm. The results are obtained through a series of unified experiments on a specific scenario that allows features of both theories to be demonstrated. The overall results favoured Subsumption in many areas, and showed Hierarchical Perceptual Control Theory to have a number of weaknesses. In the end, it was difficult to conclusively determine the superiority of either theory, owing to the lack of further testing of both theories in other scenarios.

Table of Contents

Chapter 1: Introduction
  1.1 Preface
  1.2 Understanding the Problem
Chapter 2: Current Scene
  2.1 Brief History
  2.2 (Hierarchical) Perceptual Control Theory (H)PCT
  2.3 Subsumption
Chapter 3: Requirements and Analysis
  3.1 System Measurement and Evaluation
  3.2 Approach to the Problem
Chapter 4: Scenario Design
  4.1 Initial Steps
  4.2 Preferred Design
    4.2.1 Hierarchical PCT
    4.2.2 Subsumption
Chapter 5: Implementation
  5.1 Tools and Platforms
  5.2 The Interface Layer
  5.3 HPCT
  5.4 Subsumption
Chapter 6: Experiment and Results
  6.1 Testing
    6.1.1 HPCT
    6.1.2 Subsumption
  6.2 Observations and Summary
Chapter 7: Conclusions
References
Appendices
  Appendix A


Chapter 1: Introduction
1.1 Preface
The creation of automation caused a revolution in which nearly all routine work was automated, contributing to the rapid development of industry (the Industrial Revolution). Robotics is one form of that automation; it is an old field linked to philosophy and psychology, both of which have deep historical roots. The first form of a robot was made around 1200 by Al-Jazari, whose hand-washing automaton (Rosheim, 1994) was a remarkably sophisticated device for its time. Artificial intelligence continued to develop throughout history, but it did not produce well-developed paradigms that could carry out the kind of sophisticated tasks suggested in some science fiction films. It was not until the 1980s that a breakthrough was made in this field. From the 1950s until the mid-1980s, several psychological and behavioural models emerged, the majority of which were characterised by a combination of complexity and inefficiency. An example of one of these older models is 'Good Old-Fashioned Artificial Intelligence' (GOFAI), which led the field until the debut of other paradigms such as the Subsumption architecture. The break from the formerly dominant paradigms was a result of fundamental issues stemming from their theories.

1.2 Understanding the Problem
Recent history marks GOFAI as the most dominant model in the AI field. However, even though it was widely used in research, many problems surfaced in different parts of it. Some of these problems came from the theory of the paradigm itself. GOFAI depended on the idea that objects in the environment have a symbolic representation and that, depending on the context, a symbol may or may not be relevant, which required an immense system of connected symbols for the model to be able to react. The fundamental problem in GOFAI, however, is that the model should understand the concept of relevancy, be able to determine what is relevant and what is not, and avoid allocating or consuming resources for environmental symbols that are irrelevant in a given context.

In the 1980s, Rodney Brooks (1985) proposed a model to deal with these AI problems. His approach looked at behaviours and their priorities. He suggested a layered model in which each layer is a finite-state machine implementing a certain behaviour. Each layer is associated with a priority level, and layers control the inputs and outputs of other layers via what are called 'suppressors' and 'inhibitors'. This approach focuses on suppressing certain signals on the input of a layer and inhibiting signals coming out of a layer, which allows certain behaviours to take control when required. For example, when a vehicle is required to move to a certain place while an object is in its way, the layer responsible for avoiding crashes takes over and suppresses the other layers so the vehicle can manoeuvre around the object. Once this is accomplished, the vehicle returns to its normal behaviour.
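The vehicle example above can be sketched as a minimal priority arbitration loop. This is an illustrative sketch only, not the report's implementation: the layer functions, sensor dictionary, and speed values are assumed names chosen for the example.

```python
# Minimal sketch of layered behaviour arbitration: each layer may propose
# motor commands, and a higher-priority layer suppresses the layers below
# it simply by producing an output.

def drive_layer(sensors):
    # Default behaviour: keep driving toward the destination.
    return {"left": 63, "right": 63}

def avoid_layer(sensors):
    # Takes over only when an obstacle is close; otherwise stays silent.
    if sensors["distance_cm"] < 40:
        return {"left": 23, "right": 63}  # steer away from the obstacle
    return None  # no output -> lower-priority layers are not suppressed

def arbitrate(layers, sensors):
    # Layers are ordered from highest to lowest priority; the first layer
    # that produces an output suppresses the rest for this cycle.
    for layer in layers:
        command = layer(sensors)
        if command is not None:
            return command
    return {"left": 0, "right": 0}  # no layer active: stop

print(arbitrate([avoid_layer, drive_layer], {"distance_cm": 100}))  # normal driving
print(arbitrate([avoid_layer, drive_layer], {"distance_cm": 30}))   # avoidance wins
```

The design choice to return `None` for an inactive layer is what lets arbitration fall through to lower-priority behaviours, mirroring the suppression mechanism described above.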
In a different field, psychological researchers were also studying behaviour. They,
however, were studying it from an organism's point of view. Psychological researchers at
this time developed the idea of the cause-effect approach, which suggests that every
behaviour happens because of an external stimulus. G. Cziko (2000) referenced the
following quote from Powers:
"The analysis of behavior in all fields of the life sciences has rested on the
concept of a simple linear cause-effect chain with the organism in the
middle. Control theory shows both why behavior presents that appearance
and why that appearance is an illusion. The conceptual change demanded
by control theory is thus fundamental; control theory applies not at the
frontiers of behavioral research but at the foundations" (p. 67).

Powers, who comes from a psychology background, suggested that the theory of cause-effect is wrong because it only looks at one side of the story. In the 1970s, William Powers (1973) proposed a new approach to studying behaviour, based on two different perspectives: the first is the old stimulus-action model, which looks at environmental stimuli and reacts whenever there is a change; the second is based on internal stimuli, or internal goals, the idea being that living creatures will work, regardless of stimuli, to achieve an internal goal. Combining these two approaches yields what Powers called 'Perceptual Control Theory'.
In essence, both theories study behaviour. In spite of the differences in their fields,
both attempt to provide a better understanding of behaviour control. The goal of this
project is to tackle these two paradigms and find the strengths and weaknesses of each one.
This will be accomplished by conducting several experiments designed to provide a clear
evaluation of each paradigm.

Chapter 2: Current Scene
2.1 Brief History
The early days of AI attracted many scientists from different fields in the U.S. The thought of understanding intelligence, as humans know it, appealed to many researchers. According to Russell and Norvig (1995), academic investigation of AI started when McCarthy organised a workshop for scientists interested in the relevant fields; the birth of AI occurred during that workshop. Several scientists then took different paths to studying AI. The routes taken to reach a better understanding of AI were quite different and sometimes drastic, and throughout history many different approaches to AI were developed. Early AI research began by tackling fundamental problems like problem solving, which resulted in the invention of the 'General Problem Solver' (GPS). GPS handled tasks like solving puzzles and similar problems of comparable scale. This model took the human approach to solving such problems and imitated it. The imitation of the human approach resulted in great success for this model and its successors, which led to what is known today as the 'Symbolic system'.
The previously mentioned paradigm, GOFAI, was one of the models that came from the Symbolic paradigm. It, however, proved impractical (as previously explained). Researchers continued to come up with different theories and paradigms for developing AI, some of which were simpler approaches like task tackling through modelling, planning, and execution (Brooks, 1985), and Artificial Neural Networks. Some paradigms would get attention for a while, lose it, and sometimes become popular again later. This, however, did not apply to behaviour-based paradigms and the models that came from them. PCT was developed from a psychological background focused on the behavioural side of living creatures. It was a stimulus to explore this area and inspired other models in development, which resulted in the Subsumption model (which adopted the behavioural approach). This lineage is the motivation behind this project: a direct comparison of the two behaviour-based paradigms.

2.2 (Hierarchical) Perceptual Control Theory (H)PCT
PCT encapsulates different concepts from a variety of scientific fields. In one
article, Dag Forssell (2008) discussed the different aspects of the theory; he described the
ideas that form the basis of PCT. According to Forssell, PCT is based on Control theory,
which defines the feedback and feed-forward control mechanisms. Perceptual Control Theory makes use of the feedback approach, specifically negative feedback. Negative feedback suggests a system consisting of three components: an input, which processes the sensory data feed; a comparator, which compares the fed data with specific values and aims to keep the sensory data at a certain level by adjusting other settings; and a final part, which passes the value produced by the comparator to the actuators. In PCT the same cycle happens with a few changes: inputs from the environment go through the sensors of the system, then to the comparator, which compares the perceptual signals with the reference signal. Finally, the output, or actuators, return what is referred to as an error signal to the environment, which results in a behaviour that tries to control the perception (Figure 2.1). Richard Kennaway (1999) refers to this cycle as a virtual machine.
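The negative-feedback cycle described above can be sketched as a single control unit. This is a minimal illustration, not code from the report: the gain value and the crude one-line "environment" model are assumptions made for the example.

```python
# One PCT-style control unit: the perception is compared with a reference,
# and the (gained) error signal drives the output toward the actuators.

def control_unit(perception, reference, gain=0.5):
    error = reference - perception
    return gain * error  # output (error signal) sent toward the actuators

# Closing the loop: each cycle, the output nudges the perceived value
# toward the reference, so the error shrinks over time.
perception = 0.0
for _ in range(20):
    output = control_unit(perception, reference=10.0)
    perception += output  # crude environment model: output feeds back

print(round(perception, 3))
```

After a handful of iterations the perception settles near the reference, which is the "controlling perception" behaviour the text describes.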
The other side of PCT that Forssell described relates to psychology. Living creatures behave according to their perception of the environment. This suggests that the internal systems of creatures attempt to reach their ultimate goals by altering their behaviour while reacting to the external environment, and that reaction is based on control theory. In order to apply this theory to organisms, Powers suggested a hierarchical model consisting of multiple virtual machines aligned vertically and horizontally (a matrix). Each virtual machine's output, or error signal, is the reference signal of the machine below it, down to the lowest virtual machine, whose output reaches the environment directly through the actuators. Powers also defined different categories, or levels, for this hierarchical model. Dave Center (1999) explains the levels in sequence: each level has a specific delegation that deals with a certain type of perception, and the perceptions handled by each level vary in sophistication. However, they all deal with reference signals coming from higher levels that set ultimate goals, with the exception of the highest virtual machine, which does not take a reference signal from any other machine.
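The cascading arrangement just described, where one unit's output becomes the reference of the unit below, can be sketched as follows. The function names, gains, and numeric values are illustrative assumptions, not the report's design.

```python
# Two-level HPCT cascade sketch: the upper unit's output is not sent to
# the environment but serves as the reference signal of the unit below;
# only the lowest unit's output reaches the actuators.

def control_unit(perception, reference, gain):
    return gain * (reference - perception)

def hierarchy_step(top_goal, upper_perception, lower_perception):
    # Upper VM: compares its perception with the top-level goal...
    lower_reference = control_unit(upper_perception, top_goal, gain=1.0)
    # ...and its output becomes the reference of the VM below it,
    # whose own output would go to the actuators.
    return control_unit(lower_perception, lower_reference, gain=1.0)

print(hierarchy_step(top_goal=5.0, upper_perception=2.0, lower_perception=1.0))
```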
There is a cost to this theoretical hierarchical model, and the cost comes from what is referred to as conflict and how it is dealt with. To illustrate, consider the following scenario: a virtual machine somewhere in a student's brain decides that it is a goal for the student to pass an exam. In order to accomplish this, the lower machines decide that the student must stay up all night and forgo sleep. This is a possible conflict, as the student must get some sleep to regain energy. In reality, this sort of conflict can result in psychological disorders like depression and anxiety. Higgins et al. (2010) say that such conflicts will persist because arbitrary decisions will be made that may be biased toward sub-goals, thus leaving ultimate goals unachieved.
This hierarchical structure allows for a more comprehensive system that can be applied to real-life situations. The paradigm tries to capture how behavioural decisions are made in real life by taking into account priorities and the level of sophistication involved in deciding on certain behaviours.

2.3 Subsumption
Coming from a behavioural model, the Subsumption theory considers how behaviour is executed. The University of Michigan (2013) describes Subsumption as a layered architecture where each layer consists of a finite-state machine (FSM) (e.g. Figure 2.2). Unlike GOFAI and sub-symbolic models, the philosophy behind this architecture holds that environment modelling is irrelevant. The central concern of Subsumption theory is behaviour (as previously mentioned). Although it shares fundamental properties with PCT, it has characteristics that make it different. The layers in Subsumption deal with certain types of behaviour, and regardless of how sophisticated a behaviour is, it will only be executed based on certain sensory values. Layers vary in responsibility and skill, but their triggers are always prioritised. For example, if a vehicle is driving to a destination and some object presents an obstacle, the vehicle's priority becomes avoiding that object so it can still reach its destination. However, it cannot do that without suppressing the inputs to other layers or inhibiting their outputs. The idea is to keep sensory data flowing to the FSMs on every layer unless a certain layer suppresses that flow for a certain amount of time; in that case, the suppression applies to all layers below it, or more specifically, layers with less priority. The exact same concept applies to inhibitors. The major difference here is that Subsumption does not incorporate the ideas of 'conflict' or 'reorganisation', which require considering the goals of other layers. Subsumption handles execution through rapid reactions to the data flow. Busquets (2013) describes the data flow as concurrent and asynchronous across all layers, and this is where the mechanism of suppression and inhibition comes in: the idea is to control what comes into a layer, what comes out of it, and when that takes place.


Chapter 3: Requirements and Analysis
In order to understand the differences between the two paradigms and identify their characteristics, we must decide precisely what can be tested. Because the research approach demands using specific methods to build results, there will not be any discussion around that area. Therefore, the next step is an attempt to find general properties of these paradigms and determine how they can be evaluated.

3.1 System Measurement and Evaluation
Most past publications on AI involved either intensive software development that created the test environment and then a virtual robot, or even more challenging approaches in which a real robot was built. In both cases, the work involved producing very complicated artefacts specific to a certain scenario. This limits the system observations to a specific configuration that would not be applicable to other scenarios, similar to the observations Brooks made when he discussed the processing efficiency and erroneousness of one of the layers of the robot built for his experiment. Since the aim of this project is to compare two different paradigms, we propose to use an existing configuration with a scenario that suits both theories, so that the evaluation process can be fair. But first, we must define a set of properties that will create the appropriate setting for the evaluation criteria.
Considering the small amount of AI research in which PCT/HPCT has been applied, and the efficiency tests made in the field, it was decided to use a generic set of standards to find the discrepancies between the two models. Therefore, the following points are recorded in the evaluation of each model:

- Erroneousness/accuracy: the error margin that comes from executing any part of the behaviour of the implemented layers / virtual machines; for example, the number of successful and unsuccessful attempts to avoid obstacles in a path.
- Ability to adapt: the ability to cope with slight changes in the environment; for example, when a robot takes a path between two points and the shape of that path is changed, does that affect the robot's ability to reach its destination?
- Design complexity: an essential property for both theories. If the implementation of a theory is impractical or highly complex, its usability in real environments is reduced.

Since this project targets a specific development platform with a robot that has simple capabilities, the chances of finding an existing scenario that fits this category are small, so the idea is to create a behaviour-based scenario that can demonstrate the capabilities of both models. In the next section, we explain the approach used to create such a behaviour-specific scenario.

3.2 Approach to the Problem
In order to produce results, an experiment must be implemented that is designed to test the feasibility and practicality of each model. This will be done through the creation of a scenario. The aim of this scenario is to define a set of behaviours to be implemented for each theory; the set of behaviours should demonstrate the weaknesses and strengths of each model via the properties specified previously. Results will come from a number of tests that produce the evaluation data. Keeping the environment variables equal will help expose any discrepancies and ultimately lead to a fair comparison. The tasks used do not have to be sophisticated in order to provide clear answers. The actual design and implementation use an iterative style (improvements over each iteration) and are presented in the next chapter.

Chapter 4: Scenario Design
4.1 Initial steps
In an attempt to design a scenario that could highlight the differences between the two paradigms, an initial assumption was made: the NXT robot should follow a path between two points. At first this plan looked suitable; however, soon after starting the design of the actual scenario, problems began to emerge. The problems were due to the fact that the robot to be used in the experiment does not have the appropriate sensors for path tracking and precise location detection. This left only one option, which was to implement an algorithm called 'dead reckoning'. In dead reckoning, a path is formed from an initial point, and the goal point is estimated from calculations based on the robot's wheel speeds. The problem here is that the path must be accurate and the points cannot be large areas. Dead reckoning cannot maintain such accuracy, as its error accumulates: it may arrive at the goal eventually, but it cannot guarantee accuracy at any given point in the system's lifecycle. The next step was therefore to change the chosen behaviour, moving away from the positioning and location-determination problem.
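To make the consistency problem concrete, a differential-drive dead-reckoning update can be sketched as follows. This is an illustration only, not part of the report's implementation; the wheelbase and timestep values are assumptions, and any error in the wheel speeds accumulates directly into the pose estimate.

```python
import math

# Differential-drive dead-reckoning update: integrate the two wheel
# speeds to estimate the robot's pose (x, y, heading). Because the pose
# is only ever integrated, speed errors compound over time, which is the
# consistency problem noted above.
def dead_reckon(x, y, heading, v_left, v_right, wheelbase=0.13, dt=0.1):
    v = (v_left + v_right) / 2.0            # forward speed of the centre
    omega = (v_right - v_left) / wheelbase  # rotation rate
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    heading += omega * dt
    return x, y, heading

# Driving straight for one timestep at 0.1 m/s advances x by 0.01 m.
x, y, h = dead_reckon(0.0, 0.0, 0.0, 0.1, 0.1)
print(round(x, 3), round(y, 3), round(h, 3))
```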

4.2 Preferred Design
In order to build a design that can exercise both paradigms without introducing any inconsistency, it is important for the design to be location- and position-agnostic. The reason for adopting such a rule is to eliminate any dependency on the environment, i.e., making the environment a place to interact with rather than something that determines the course of certain actions. This does not mean that the robot will not react to environmental stimuli (e.g., by avoiding obstacles), but rather that positioning or location will not be allowed to determine key events like directing the robot north/south or introducing a slope in the path.
The proposed design is basic, yet it demonstrates features of both models thoroughly. It fits the implementation of both paradigms nicely and, as previously stated, forces the implementation of some of their key features. The design consists of two distinctive behaviours. The first is the avoidance behaviour, which is responsible for avoiding objects using the ultrasonic sensor. The second is the ability to drive along a consistent circular path, based on a given radius. The first step in constructing this design is to compute the possible outputs of each behaviour. Starting with the circular-motion behaviour, it was decided to rely on computed speeds to maintain the drive path. Before describing the steps used to derive the equation that calculates the required speed for each motor, it is important to make clear that the aim is to calculate the speed of each motor so as to keep the robot moving in a circle of a specific radius.

To compute the speed required to run in a circular motion we used the velocity formula:

    v = d / t

where v represents the speed, which equals distance (d) divided by time (t). The motor on the inner side of the circle centre covers the inner circumference in one lap, so:

    v1 = (2π r1) / t

where the numerator is the formula for the circumference of a circle and r1 is the inner radius. The motor on the outer side covers the outer circumference in the same time:

    v2 = (2π r2) / t

To simplify the approach, v2 was assumed static at a speed of 63. Since t is the same for both motors and is irrelevant to this behaviour, dividing the first equation by the second eliminates it:

    v1 / v2 = r1 / r2

With r2 = r1 + w, this gives the final equation for v1:

    v1 = v2 · r1 / (r1 + w)

where w, the space between the first motor and the second motor, is 13 cm.
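The derived relation translates directly into a small helper. This sketch assumes the reconstruction of the equations above (outer motor fixed at 63, wheel separation 13 cm); the function name is illustrative, not from the report.

```python
# Inner-motor speed for circular motion, from v1 = v2 * r1 / (r1 + w):
# the outer motor is held at a constant speed and the inner motor is
# slowed in proportion to the ratio of the two wheel radii.
def inner_speed(radius_cm, outer_speed=63.0, wheel_gap_cm=13.0):
    return outer_speed * radius_cm / (radius_cm + wheel_gap_cm)

print(round(inner_speed(50.0), 2))  # 50.0: a 50 cm inner radius gives a 63 cm outer radius
```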

The avoidance behaviour was created as follows. The method is based on static thresholds on the ultrasonic sensor reading: the robot first deviates from its heading, and if the reading remains unchanged it performs a semi-180-degree turn. When the sensor reads any value above 75 cm, the reading is ignored. Between 75 and 65 cm, the speed of the inner motor is reduced to 40 regardless of its current speed (this value comes from trial-and-error checks of reaction control). The next range, between 65 and 60 cm, reduces the speed of the same motor by 4.25, making it 35.75. Subsequent ranges keep reducing the speed by 4.25 until the sensor reads a value in the range of 40 to 35 cm, at which point the speed is reduced to 23, making the robot turn away from whatever it is currently facing.
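The threshold scheme above can be sketched as a single mapping function. This is an illustrative sketch: it follows the report's rule of stepping the inner-motor speed down by 4.25 per 5 cm band below 65 cm, and clamps at 23, the final turning speed the report quotes; the exact per-band values and boundary handling are assumptions.

```python
# Static-threshold avoidance mapping: distance reading -> inner-motor
# speed override (None means the reading is ignored and no override applies).
def avoidance_speed(distance_cm):
    if distance_cm > 75:
        return None                          # too far away: ignore
    if distance_cm > 65:
        return 40.0                          # first deviation step
    # Each further 5 cm band reduces the speed by another 4.25,
    # clamped at 23 (the final turning speed).
    steps = int((65 - distance_cm) // 5) + 1
    return max(40.0 - 4.25 * steps, 23.0)

print(avoidance_speed(80))   # None
print(avoidance_speed(70))   # 40.0
print(avoidance_speed(62))   # 35.75
print(avoidance_speed(37))   # 23.0
```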

4.2.1 Hierarchical PCT
The PCT design consists of multiple virtual machines arranged in a hierarchy, hence the name of the paradigm. The final design is based on two levels of the PCT hierarchy: the intensity level and the sensation level. Dave Center describes the intensity level as the awareness threshold; it acts as the medium of stimulation, transferring stimulation from the environment into the system. In living organisms the intensity level is a genetically inherited property with certain abilities, but in machinery and robotics it represents the sensors and their capabilities, such as the distance-measurement ability of different sensors. The second level is sensation. According to Center, this level focuses on perceiving sensory data; it essentially makes sense of one or more data streams coming from the intensity level. A good example of the sensation level is the red, green and blue values used to form a colour: if a sensor reads an RGB value of 255-255-0, the sensation VM can interpret that as the colour yellow.
The HPCT design here consists of the two levels explained above. Both VMs attempt to capture certain intensity values; however, they do not attempt to identify any patterns (as in the Configuration level). For example, the avoidance VM responds directly to certain intensity values with specific error values. The same happens in the VM that maintains the path of the robot, so in essence both VMs belong to the sensation level. The important part of this design is the reorganisation process. Since one VM can produce a counter-behaviour to the other and both VMs are on the same level, conflict between them should be anticipated, and reorganisation is required to handle that conflicting behaviour. If reorganisation is not applied, the system is left in a permanent, unacceptable error state (fatigue). According to McClelland (2012), the first step in reorganisation is to alter the reference values of the VMs that are creating the conflict, and this can be an arbitrary approach; a node responsible for detecting conflicts should signal the alteration. The second step is to form new connections with other VMs, which is somewhat advanced for the kind of behaviour used in this project. Therefore, the designed behaviour is based on controlling reference values. The control is applied to the lower-priority VM, which here is the circular-motion VM, because we do not want the robot to collide with any objects in its way. Figure 4.1 shows the overall design of the HPCT system.
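The reference-altering form of reorganisation described above can be sketched as a small monitoring node. This is an illustrative sketch, not the report's implementation: the threshold, the halving of the reference, and all names are assumptions.

```python
# Sketch of reference-altering reorganisation: a conflict-detecting node
# watches the avoidance VM's error, and when that error persists above a
# threshold it arbitrarily relaxes the reference of the lower-priority
# circular-motion VM so the avoidance behaviour can win the conflict.
def reorganise(avoid_error, motion_reference, persistent_threshold=5.0):
    if abs(avoid_error) > persistent_threshold:
        # Conflict detected: alter the low-priority VM's reference value.
        return motion_reference * 0.5
    return motion_reference  # no conflict: leave the reference untouched

print(reorganise(avoid_error=8.0, motion_reference=50.0))  # 25.0 (relaxed)
print(reorganise(avoid_error=1.0, motion_reference=50.0))  # 50.0 (unchanged)
```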


4.2.2 Subsumption
Subsumption follows an evolutionary approach, meaning that each new behaviour (layer) builds on (subsumes) the previous one; hence the paradigm's name. Subsumption is decentralised in nature, as shown by the finite-state machine (FSM) used for each layer: each FSM only considers its own state and optimisations. As described in previous chapters, Subsumption is a radical method of designing AI compared to GOFAI. The whole system takes account of its own architecture and focuses on a highly dynamic approach. The final design for the chosen scenario consists of two layers. The first layer contains the avoidance module, which deals with avoiding objects; this layer has a basic FSM that, in specific states, suppresses the layer that subsumes it (the second layer). The second layer handles movement behaviour and essentially contains an FSM that tracks the state of the robot's drive path; this basic FSM has two states that attempt to keep the robot on a specific path. Figure 4.2 shows the final design. This design allows layer two (which has less priority) to run only if layer one is not executing. If layer one (highest priority) decides to execute, then the upper layer (layer two) is suppressed during that cycle.
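The two-layer design can be sketched as a pair of tiny state machines with an explicit suppression flag. This is a minimal illustration under the design described above; class names, thresholds, and speed values are assumptions, not the report's code.

```python
# Two-layer Subsumption sketch: layer one (avoidance, highest priority)
# raises a suppression flag while its avoidance state is active, which
# stops layer two (movement) from emitting commands during that cycle.
class AvoidanceLayer:
    def step(self, distance_cm):
        self.active = distance_cm <= 75            # avoidance state entered
        return {"inner": 23} if self.active else None

class MovementLayer:
    def step(self, suppressed):
        if suppressed:
            return None                            # layer two suppressed
        return {"inner": 50, "outer": 63}          # hold the circular path

avoid, move = AvoidanceLayer(), MovementLayer()

# Nothing nearby: avoidance stays idle, so movement runs this cycle.
cmd = avoid.step(distance_cm=90) or move.step(suppressed=avoid.active)
print(cmd)
```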



Chapter 5: Implementation
5.1 Tools and Platforms
Many platforms are candidates for implementing such an algorithm, but it is
important not to reinvent the wheel. Using the previous approach, we would allow for
more iterations on the core design of this project and would save much of the effort that
otherwise would be used to implement a library from scratch. Therefore, the number of
possible platforms decreases significantly. This is because not all frameworks have their
own implementation of the Lego NXT library; currently there are only three frameworks
that would leverage all Lego NXT features through open-source libraries. The first open
option comes from Java framework and specifically the (, 2013) library. This
updated library is well established and widely used. However, the issue with this library is
the way it runs. The implementation runs directly from the Lego NXT brick, which
requires flashing and uploading of compiled code to the brick. The second option comes
from C# language and that has several libraries that are out of date. The selected platform
is Python. Python has an up-to-date library for NXT (NXT-Python, 2013), which supports
all NXT sensors and also supports Bluetooth commands without requiring any further
steps (such as flashing the brick memory). Furthermore, Python offers multiple cross-
platform libraries that connect with other frameworks/languages, providing a variety of
The Python library acts as a thin layer that leverages commands and
communication to the robot. However, as a requirement, (, 2013) was chosen
to develop the core modules of the experiment. Pure Data offers an interactive graphical
environment to design applications, which helps accelerate the iterations since changes do
not require any compilation and execution (the framework is always in execution mode).
The framework works on a specific editor that provides all documentation and tools
without requiring external references. The best option for connecting these two
frameworks (Python and Pure Data) is a protocol called Open Sound Control (2013),
or OSC. Although OSC was originally designed for networking sound synthesizers and
other multimedia devices, its simple message format makes it suitable for
communicating with virtually any development platform.

5.2 The Interface Layer
Since Pure Data cannot communicate directly with the NXT robot, a thin layer is
required to interface between the robot and Pure Data. The sole purpose of this layer is to
relay control of the robot through Python. The interface module's responsibilities are
basic: it receives Pure Data commands and forwards them to the robot for execution.
Since Pure Data and Python share no common communication library other than OSC,
the implementation was done via an OSC library for Python.
The Simple OSC library (2013) has the ability to run a server that listens to
messages and to run a client that broadcasts messages. The Python module provides the
functionality to read the sensors and control the robot using two libraries: the previously
mentioned NXT-Python library and the Simple OSC library. These libraries are combined
through asynchronous, parallel programming. When the Python module starts, a
MotorControl object is instantiated with the NXT object after locating and establishing a
connection with the robot. The OSC client is then initialised, along with the server. The
server requires some extra information to complete initialisation: the address, the port,
the server mode and the callback functions that handle incoming messages. The OSC
message handler and the NXT object run on the main thread at all times; however, since
the sensors must be read constantly, the OSC broadcasting takes place in a separate
process. This is implemented using Python's multiprocessing library, which allows the
creation of a worker pool and provides a way to control those workers. For the complete
implementation, see Appendix A.
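Stripped of the NXT and OSC specifics, the parallel structure described above can be sketched as follows; the sensor loop is simulated with a fixed reading and runs for a bounded number of iterations so that the sketch terminates:

```python
# Sketch of the interface layer's parallelism: the main process handles
# incoming commands while a worker process broadcasts sensor readings.
# The sensor is simulated here; the real module reads the NXT ultrasonic
# sensor and sends OSC messages instead of returning a list.

import multiprocessing as mp
import time

def sensor_broadcast(n_readings):
    """Stand-in for the ultrasonic broadcast loop."""
    readings = []
    for _ in range(n_readings):
        readings.append(42)        # simulated distance reading (cm)
        time.sleep(0.01)           # short pause between broadcasts
    return readings

if __name__ == '__main__':
    pool = mp.Pool(processes=1)
    result = pool.apply_async(sensor_broadcast, (5,))
    # ... the main process would serve OSC messages here ...
    print(result.get())            # [42, 42, 42, 42, 42]
    pool.close()
    pool.join()
```

The real module never joins the pool during normal operation: the broadcast loop runs indefinitely while the main process serves OSC callbacks.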

5.3 HPCT
The implementation of HPCT is based on the design demonstrated in the previous
chapter. The core design is implemented in Pure Data, and all commanding and sensor
reading takes place through an OSC server/client to and from Python. The Pure Data
module consists of four abstractions that handle the entire system flow. The first
abstraction (Figure 5.1) is the one that handles the avoidance behaviour, and this contains
multiple VMs. The reason for grouping them is that logically the VMs are identical in
behaviour. The only difference is that the reference signals are different for each VM.

Each VM works through listening to the ultrasonic sensor readings and comparing the
readings with its static references. If that reading happens to be in the VM range, it would
send a packed message to expressions (expr) that would make the necessary calculation to
produce the error. Otherwise, nothing would be sent and the system would allow the other
parts to run. In addition, every time a VM initiates the calculation, a shared reference is set
for the reorganisation node to use. The reason for having a property like a shared reference
is to avoid having multiple references to compare at the same time. The problem with
using multiple reference variables is that the sensor would send multiple signals,
sometimes resulting in triggering more than one VM, which is an undesired execution of
the system. With this linear execution of the avoidance VMs, the system can still express
the triggering behaviour and force reorganisation without multiple unexpected changes.
This arrangement may look like a single VM with multiple reference values, but it is not:
the linear approach is adopted only so that a single trigger is dealt with at a time instead
of several, as previously stated.
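As a rough sketch of this arrangement (the ranges, gain and reference values here are invented for illustration, not taken from the Pure Data patch), each avoidance VM can be viewed as a range-gated comparator, and the VMs are consulted linearly so that only one triggers per reading:

```python
# Sketch of a range-gated perceptual control unit (VM). The ranges, gain
# and reference values are illustrative, not those of the actual patch.

class AvoidanceVM:
    def __init__(self, low, high, reference, gain=1.0):
        self.low, self.high = low, high      # range in which this VM is active
        self.reference = reference           # static reference signal (cm)
        self.gain = gain

    def step(self, perception):
        """Return an error signal if the reading falls in this VM's range."""
        if not (self.low <= perception < self.high):
            return None                      # out of range: stay silent
        return self.gain * (self.reference - perception)

# Linear chain of VMs: only the first matching VM produces an error,
# mirroring the 'single trigger at a time' behaviour described above.
vms = [AvoidanceVM(0, 25, 25), AvoidanceVM(25, 50, 50), AvoidanceVM(50, 75, 75)]

def avoidance_error(distance):
    for vm in vms:
        err = vm.step(distance)
        if err is not None:
            return err
    return 0.0                               # no VM triggered

print(avoidance_error(30))   # 20.0 : second VM fires (50 - 30)
print(avoidance_error(80))   # 0.0  : outside all ranges
```

In the actual implementation the comparison and error calculation are done by Pure Data expressions (expr); the structure, not the arithmetic, is the point of this sketch.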
The second abstraction contains the VM for maintaining the circular path (Figure
5.2). This VM is also used to set the speed of the right motor only after calculating the
radius value from an external text file. The left motor value is disregarded because the
only factor that matters for maintaining a circular path is the value of a single motor; the
second motor merely determines how fast the robot moves around the circle. Ignoring the
left motor therefore minimises the complexity of the system and reduces the time the
system needs to calculate the error signals. The other reason not to compute an error
value for the left motor is that the robot has no real sensor readings from the motors.
Retrieving the motors' speed from the robot itself is therefore uninformative: if the robot
drives over a hilly area and its real speed drops, the reported motor values will not differ
from the assigned ones. The implementation therefore uses shared variables, accessible
from all virtual machines, to simulate the current speed of the motors.
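As an illustration only, the relationship between path radius and right-motor speed on a differential-drive robot might be computed as below; the wheel separation, the left-motor speed and the report's actual formula are not given in the text, so these values are assumptions:

```python
# Hedged sketch of deriving the right-motor speed for a circular path on a
# differential-drive robot. Wheel separation and left-motor speed are
# assumed values, not figures from the report.

def right_motor_speed(radius_cm, left_speed=60, wheel_sep_cm=12.0):
    """Inner (right) wheel speed for a clockwise circle of the given radius."""
    inner = radius_cm - wheel_sep_cm / 2.0   # radius traced by the inner wheel
    outer = radius_cm + wheel_sep_cm / 2.0   # radius traced by the outer wheel
    # Rounded to a whole number, as in the implementation.
    return int(round(left_speed * inner / outer))

print(right_motor_speed(40))   # 44 with these assumed parameters
```

With a wheel separation of zero the two wheels coincide and both motors run at the same speed, which is a quick sanity check on the formula.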
Another important issue is that the right motor speed is calculated relative
to the radius of the circle. Pure Data caused many problems when using real numbers:
the calculations produced decimal values that caused problems when sent to the OSC
server, so the solution was to round the computed value to a whole number (integer type).
The third abstraction contains the reorganisation node (Figure 5.3), which handles the
reorganisation of reference values in
the selected VM (in this case, it is the circular motion VM). The way this runs is by
detecting specific error signals from the avoidance VMs. It allows the circular motion VM
to run by default, but when the avoidance VM attempts to avoid an object, i.e., send an
error signal, the reorganisation node starts adjusting the reference signal of the circular
motion VM. It starts by subtracting one from the reference value of the circular motion
VM until it meets the avoidance reference that was producing the error at that time, which
results in reducing the error signal and removing the error state from the system (conflict
state). When the avoidance VMs are not sending error signals of any type, the
reorganisation node works to recover the original reference values, which makes the robot
run on the desired circular path.
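The reorganisation loop described above can be sketched as follows; the unit step matches the "subtracting one" behaviour in the text, while the concrete reference values are illustrative only:

```python
# Sketch of the reorganisation node: nudge the circular-motion VM's
# reference by one unit per cycle toward the avoidance reference, then
# recover. Values are illustrative; the real node works on Pure Data messages.

ORIGINAL_REFERENCE = 47          # desired right-motor speed

def reorganise(current_ref, avoidance_ref, avoidance_error_active):
    """One cycle of the reorganisation node."""
    if avoidance_error_active:
        # Conflict: step the reference down toward the avoidance reference.
        if current_ref > avoidance_ref:
            return current_ref - 1
        return current_ref
    # No conflict: recover toward the original reference.
    if current_ref < ORIGINAL_REFERENCE:
        return current_ref + 1
    return current_ref

ref = ORIGINAL_REFERENCE
for _ in range(5):                      # obstacle present for five cycles
    ref = reorganise(ref, 40, True)
print(ref)                              # 42
while ref != ORIGINAL_REFERENCE:        # obstacle cleared: gradual recovery
    ref = reorganise(ref, 40, False)
print(ref)                              # 47
```

The one-unit steps are what produce the slow, curvy recovery path observed in the tests: the reference only regains (or loses) one unit per cycle.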



5.4 Subsumption
The Subsumption implementation was far less complicated than the HPCT one. It
consists of two Pure Data abstractions. The first is the layer-one abstraction
(Figure 5.4), in this case the avoidance FSM. It runs exactly like the HPCT
implementation, but the concept is different: Subsumption uses FSMs to provide the
behaviour, and in the case of avoidance each expression (expr) corresponds to a certain
state in the avoidance FSM. The approach therefore mirrors HPCT, but the FSM's output
differs: it sends direct commands to the actuators (layer zero) instead of calculating error
values as HPCT does. The other part of
the avoidance layer is the suppress mechanism for the upper layer or in this case layer two,
the circular motion abstraction. Every time the avoidance layer FSM is in one of the
undesired states, the suppressor is activated to suppress layer two via a bit controller that
locks the message flow from that layer. Then when the FSM is not active the suppressor is
deactivated and layer one signals layer two to continue to execute.
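A minimal sketch of this suppression mechanism follows (the state names are invented; the real patch implements it as a bit controller gating Pure Data message flow):

```python
# Sketch of layer-one suppression: while the avoidance FSM is in an
# active state, layer two's messages never reach the actuators.
# State names are illustrative, not taken from the patch.

class AvoidanceLayer:
    ACTIVE_STATES = {'backing_off', 'turning'}

    def __init__(self):
        self.state = 'idle'

    def suppressing(self):
        return self.state in self.ACTIVE_STATES

def layer_two_output(avoidance, command):
    """Layer two's command reaches the actuators only when not suppressed."""
    if avoidance.suppressing():
        return None          # message flow locked by the suppressor bit
    return command

avoid = AvoidanceLayer()
print(layer_two_output(avoid, ('motors', 47)))   # ('motors', 47)
avoid.state = 'turning'
print(layer_two_output(avoid, ('motors', 47)))   # None
```

Because the gate is a simple boolean, adding a new layer only requires wiring its output through the suppressor of the layer below, which is the "building blocks" property discussed later.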
The layer-two FSM is quite basic in logic and has few states (Figure 5.5); it computes
the required speed for the right motor to run on a certain radius and sends a message
directly to the actuators. To keep the environment exactly the same, the decimal values
were rounded here as well. There is also an expression that detects the speed of both
motors and messages the actuators with the desired speed whenever the robot runs
slower or faster than required. This expression is currently somewhat redundant, as the
first layer (the avoidance layer) by default signals the second layer to resume execution
and to send continuous messages to the actuators regardless of the robot's speed. A
further reason this part of layer two is unnecessary is that the motors have no sensors
providing real values of how fast they are running. Unlike a speedometer, the motor
readings only return the assigned value, which does not reflect real speed; on a hilly
area, for example, the assigned value can differ completely from the actual speed. The
rest of the system is the same as in HPCT: the OSC sensor readings are routed to the
assigned layers, and only the actuator layer (layer zero) differs slightly, since
Subsumption takes direct messages instead of computing error and current values.


Chapter 6: Experiment and Results
6.1 Testing
The testing of the implementation covered two different situations. The
first test followed the normal course of the implemented design (Figure 6.1). This test
was done in an area with no obstacles or obstacle-like objects other than a single object
placed in the circular path of the robot. The purpose of this test is to observe the normal
course of execution of the implemented behaviours. In both implementations the
behaviour should stay close to the highlighted path in the figure, although the overall
results produced may differ. The second test (Figure 6.2) probes the unexpected
boundaries of the behaviours. In this test, the robot starts in a collision state, i.e., it
begins with avoidance instead of driving on a path. This is done by placing the robot in a
corner and observing the robot's ability to escape that spot and return to driving in
circles. The purpose of this test is to expose the unexpected results of each
implementation, providing a clear way to find inconsistencies. Again, the robot should
take a path relatively close to the one highlighted in the previous figure.


6.1.1 HPCT
The first test for HPCT showed the expected behaviour. The robot moved
according to the relative radius (40 cm), which happens to require a speed of 47 for the
right motor. When it came close to the obstacle, it deviated from it and started to
reorganise the reference values of the circular motion VM. After it made it to the safe
range, i.e., ultrasonic values above 75cm, the reorganisation node handed control to the
circular motion VM and the reorganisation node started adjusting reference values to meet
the original radius. The robot showed signs of reference-value recovery after moving
away from the obstacle, represented by a slow, curvy path back to the desired radius.

The second test showed an unexpected behaviour of the avoidance VM. As
previously described, the test started with the robot in a collision state (or reorganisation
state in the case of HPCT). The robot starts by adjusting to move away from the enclosed
corner and immediately continues to adjust when moving to the right. Then, when facing
the opposite side of the corner, the avoidance VM stops attempting to avoid collision and
starts to return to a normal state, which is driving in a circular path. However, the
reorganisation takes time to recover to the original state, so while the reorganisation node
is working to recover to the desired reference values, the robot is still running with a low
reference value, i.e., a slower than expected speed for the right motor. Thus, the robot does
not take a curvy path; instead, it makes a U-turn and ends up facing the wall again. This
occurs several times before the robot is in enough space to drive in the allocated path. On
multiple occasions the robot never managed to recover from that state of trying to
reorganise the reference values back and forth. Figure 6.3 shows the ultrasonic readings
and the right motor speed sampled at intervals of roughly ten milliseconds. In this
attempt, the robot never managed to fully recover. Running the same settings
as in the first test, the robot was supposed to drive on a radius of roughly 40 cm, which
required a speed of 47. The reorganisation effect shows clearly at the beginning of the
left side of the chart where the ultrasonic sensor values keep rising and dropping
frequently. Later in the same chart, the effect of reorganisation is not as strong as it was
initially, yet the gradual recovery/loss of the motor speed is visible.

6.1.2 Subsumption
Using the same setup as the HPCT tests, the first test of the Subsumption
implementation produced behaviour highly similar to HPCT's. The robot drove exactly
the same path except that, when it reached the obstacle, it avoided it and went directly
back to its original speed. This essentially
discarded the time required by HPCT to reorganise reference values back and forth to
recover speed. The second test showed a very different result from HPCT. Using the same
configuration, the robot ran from the corner and attempted to escape that enclosed area.
The difference here is that the Subsumption design managed to quickly move away from
the corner spot within a span of three to four turns. In Figure 6.4, the motor speed
readings and ultrasonic sensor values show much smaller intervals of speed alteration.
The reason for this is that Subsumption does not consider any layers other than the one
currently executing. The first layer can suppress the second layer when required, and
execute its FSM to reach a targeted state, and then stop suppressing higher layers. As
described previously, the avoidance behaviour would not be triggered unless the ultrasonic
sensor reads a value below 75cm. That is why the figure shows many low and sharp drops
(only a few of which trigger avoidance).


6.2 Observations and Summary
Testing revealed interesting results for both designs. HPCT produced the less
efficient result, especially in the second test, mainly because of how the paradigm
approaches counter-behaviour, or conflict in HPCT terms. The theory has a
psychological background: it simulates reality with an approach to conflict that allows
error margins and does not neglect other low- or high-level targets or reference values.
Although this approach mimics how real organisms deal with conflicting behaviour, it
did not work as intended in the designed scenario. The design also lacked accurate
responses even in the normal-scenario tests; it required additional time to recover from
avoiding obstacles, which made it turn for longer and sometimes face the same obstacle
again. The design showed great complexity in many respects as well. The
implementation of negative-feedback VMs and the computation of error signals made it
very difficult to trace errors in the system, especially in combination with
reorganisation. Even though the system is decoupled and each VM has its own
properties, tracing an unexpected value from the actuators up to the VM that produced it
proved problematic even in the most basic scenarios. On the other hand, HPCT was
applied here in a basic mechanical scenario. It
could be very viable when used in other fields of AI, such as natural language processing
or in analysis systems that are used in aviation, medicine, finance, or any other field.
Subsumption presented strong results in comparison with HPCT. This is mainly
because of how Subsumption structures the system. The concept of isolated layers made
designing behaviours a much easier task. This is because an FSM is responsible for a
single behaviour only and that behaviour will deal with the actuators directly to eliminate
the kind of interference we see in HPCT. The idea of suppressors and inhibitors works
well to orchestrate the multi-layer design. The beauty of Subsumption is that adding a new
behaviour is not a complicated process; it works like building blocks where a behaviour
can be added in any layer desired. This allows for a more robust and manageable design.
Table 6.5 summarises the results for both theories.
Regardless of the results produced, it is difficult to judge these theories in any
context other than the current one. The applications of such theoretical paradigms are
vast and can take many forms beyond robotics.


Accuracy
  Hierarchical Perceptual Control: Testing showed some errors, and sometimes even
  failure, in the ability to function in some tests. The level of accuracy is not as
  desired; however, this does not rule out the efficiency of this theory in other
  contexts.
  Subsumption: Produced high accuracy and the ability to control behaviours well.

Design Complexity
  Hierarchical Perceptual Control: The design is relatively complex even at the basic
  level. In reality, HPCT cannot be implemented without the concept of reorganisation,
  because conflict will occur regardless. Reorganisation adds a great deal of
  complexity to the design due to the large number of VMs that need to be considered
  in the implementation.
  Subsumption: Somewhat easy to design and implement. Adding new behaviours is a
  much easier task than in HPCT.

Ability to Adapt
  Hierarchical Perceptual Control: Looking at the accuracy of this design, it is
  difficult to say that it can adapt. However, since this project centred on a single
  scenario with a specific application, it is difficult to conclude that the level of
  adaptation in this theory is ultimately insignificant.
  Subsumption: The level of adaptation shown by Subsumption was distinctive. The
  testing showed small marginal errors in certain scenarios, but overall the results
  were positive.

Table 6.5: Summary of the results for both theories.


Chapter 7: Conclusions
The project aimed to compare HPCT and Subsumption by having a unified testing
scenario. The level of detail in the designed and implemented scenario presented a great
opportunity to explore nearly all the features in each theory. Subsumption demonstrated a
number of strong points in its design and testing. In both design and implementation,
Subsumption showed high robustness and ease of use. The testing revealed a high level of
effectiveness in Subsumption. HPCT's testing, by contrast, was largely unsuccessful,
and its design and implementation proved challenging and highly complex. Even though
the testing environment was the same for both theories, the comparison covered only a
limited granularity of each, leaving an opening for further exploration in different
contexts.


References

Brooks, R. A. (1985) A Robust Layered Control System for a Mobile Robot. Massachusetts Institute of Technology.

Brooks, R. A. (1997) From earwigs to humans. Robotics and Autonomous Systems, 20(2-4), 291-304.

Center, D. (1999) Strategies for Social and Emotional Behavior: A Teacher's Guide. Available from:
< Method/Ch10_PerceptualControl.pdf>. [31 July 2013].

Cziko, G. (2000) A Psychological Perspective on Purpose: Organisms as Perceptual Control Systems. The
Things We Do. 1, pp.67.

Forssell, D. (2005) Once Around the Loop: An interpretation of basic PCT. Available from:
<>. [13 October 2013].

Forssell, D. (2008) Perceptual Control Theory: Science & Applications: A Book of Readings. Hayward, CA:
Living Control Systems.

Good Ol' Fashioned AI. (2013) Available at: [19 July 2013].

Good Old Fashioned Artificial Intelligence. (2013) Available at: [17 July 2013].

Higginson, S., Mansell, W. and Wood, A. M. (2010) An integrative mechanistic account of psychological
distress, therapeutic change and recovery: The Perceptual Control Theory approach. Clinical Psychology
Review, 31, 249-259.

History of Computers and Computing, Automata, The Arabic Automata. (2013) Available at: http://history- [17 July 2013].

Kennaway, R. (1999) Control of a multi-legged robot based on hierarchical PCT. Journal on Perceptual
Control Theory, 1.

LeJOS, Java for Lego Mindstorms. (2013) Available at: [Accessed 07 October 2013].


McClelland, K. (2012) Perceptual control and social power. Pacific Sociological Association, 37, 461-496.

NXT-Python: a pure-Python driver/interface/wrapper for the Lego Mindstorms NXT robot. (2013) Google Project Hosting. Available at: [17 October 2013].

Open Sound Control. (2013) Available from: <>.

The problems of relevance and dynamism in GOFAI as conveyed by Michael Anderson in Embodied Cognition: A Field Guide | Ontology in the Flesh. (2013) Available at:
dynamism-in-gofai-as-conveyed-by-michael-anderson-in-embodied-cognition-a-field-guide/. [19 July 2013].

Powers, W. T. (1973) Behavior: The Control of Perception. Aldine Pub, Chicago.

Pure Data PD Community Site. (2013) Available at: [17 October 2013].

Rosheim, Mark E. (1994) Robot Evolution: The Development of Anthrobotics. Wiley-IEEE.

Simple OSC module 0.3.2. (2013) Available from: <http://www.ixi->. [Accessed 24 October 2013].

A simple finite state machine in Erlang and F#. (2013) Available at: [02 August 2013].

Russell, S. and Norvig, P. (1995) Artificial Intelligence: A Modern Approach. 2nd ed. Prentice Hall, Englewood Cliffs.

Subsumption Architecture. (2013) Available at: [02 August 2013].

Subsumption architecture. (2013) Available at: [06 August 2013].


Appendix A
#!/usr/bin/env python

__author__ = 'Yasir'

import time
import multiprocessing as mp

import nxt.locator

from nxt.sensor import *
from nxt.motor import *
from simpleOSC import initOSCClient, initOSCServer, setOSCHandler, \
    startOSCServer, sendOSCMsg, closeOSC


class MotorControl(object):
    nxt_obj = 0
    m_left = []
    m_right = []

    def __init__(self, bot):
        self.nxt_obj = bot
        self.m_left = Motor(self.nxt_obj, PORT_B)
        self.m_right = Motor(self.nxt_obj, PORT_C)

    def motor_osc_handler(self, addr, tags, data, source):
        # Forward the payload of an incoming '/motors' OSC message
        # (body reconstructed; it was lost in extraction).
        self.motor_command(data)

    def motor_command(self, d):
        # d[0] drives the right motor; d[1], when non-zero, overrides the
        # left motor (the 'self.m_*.run' call prefixes are reconstructed).
        self.m_left.run(d[0] if not d[1] else d[1], regulated=True)
        self.m_right.run(d[0], regulated=True)


# OSC server function, forwards all required values to Pure Data OSC client
def sensor_broadcast(Control_Object):
    ultrasonic = Ultrasonic(Control_Object.nxt_obj, PORT_4)

    while 1:
        distance = ultrasonic.get_distance()
        # print distance
        sendOSCMsg('/ultrasound', [distance])
        time.sleep(0.10)  # 100 ms between broadcasts


# Not in use
def shutdown(pl):
    input("Press any key to close OSC server")


if __name__ == '__main__':
    b = nxt.locator.find_one_brick()
    controls = MotorControl(b)

    # takes args: ip, port (call reconstructed; address and port are assumptions)
    initOSCClient(ip='', port=20001)

    # takes args: ip, port, mode --> 0 for basic server, 1 for threading
    # server, 2 for forking server
    initOSCServer(ip='', port=20002, mode=1)

    # bind addresses to functions
    setOSCHandler('/motors', controls.motor_osc_handler)

    # and now set it into action
    startOSCServer()

    # starting the sensor broadcast in parallel
    pool = mp.Pool()