SAINT OLAVE'S ACADEMIC JOURNAL
ISSUE 2, SEPTEMBER 2014

In this issue: Mathematical innovations in the future; Comparing the music of Haydn & Debussy

Editorial

Leonardo da Vinci once said that "Learning never exhausts the mind." Although we may not be made of the same stuff as Leonardo, some of our peers come pretty close, as readers of this erudite volume will soon discover.

At first the prospect of editing the Saint Olave's Academic Journal seemed a daunting task, but I think it's a testament to our school that it wasn't nearly as tricky as we anticipated. Articles came at us from all kinds of sources: emailed by students, passed on by teachers, and one even thrust into my hand in the lunch queue. There was no shortage of scholastic endeavour on offer. Shakespeare once described Southwark's Saint Olave's boys as "creeping like snail unwillingly to school". Four hundred years on, and there is nothing sluggish about this generation of young men and women.

Also included in the Journal is a greatest-hits selection of HPQs and EPQs from our Year 11s and 13s. These students have pushed Olavian scholarship even further, and their essays are a delight to read, combining as they do meticulous research and incisive insight. Authoritative, provocative and stimulating: there is not a dry piece of academic prose amongst them. Some of them are even written in German and Spanish. We have writings from across the board: from essays on Fassbinder and Linguistics, to Chaos Theory and the Philosophical Justification of Quantum Mechanics. Only an Olavian would attempt to write an essay comparing Caesar Augustus to John Maynard Keynes. In fact we see in every article an attempt to challenge accepted orthodoxy and thinking. Learning never exhausts our minds, except perhaps on Friday afternoons. We hope this Academic Journal will keep you awake into the small hours too.

Jack Bradfield & Abhishek Patel
Editors of the St Olave's Academic Journal 2014

_______________________________________________________

Scholarship. A few may argue that this word has lost its meaning here at St Olave's. Described as semantic satiation, this phenomenon occurs when the listener perceives a word as a mere, meaningless sound due to extensive repetition. However, it is the ever-visible academic endeavour at St Olave's which continually reinforces the meaning of scholarship. It is the depth of knowledge, passion and research found within each of these articles which allows them to be praised as works of scholarship.

This merit and hunger for learning is what we aimed to encapsulate when we first set out to produce the Academic Journal. We were reassured, on receiving each article, all portraying true enthusiasm across an array of subjects, that this publication would be a success. Our venture was made easy by the students' eagerness to contribute and by the constant selfless help from Mr Budds. Now, we are very proud to see this service reignited by Jack Bradfield and Abhishek Patel. Leaving a legacy was seen by many of us as something that could lessen our heartfelt sorrow as we said goodbye to our time at St Olave's. We feel indebted to, among others, the new editors and Mr Budds for allowing us to do so, and we know that this year's edition will be even more triumphant.

Dawud Khan & Vithushan Nuges
Founders of the St Olave's Academic Journal
_______________________________________________________

Contents

Societies News
Political Economy Society - Matthew Allen (Year 12)
Art History Society - Matilda Boyer (Year 12)
Art Club - Adrian LaMoury (Year 12)
Natural Sciences Society - Abhishek Patel (Year 12)
History Society - Matthew Roberts (Year 12)
Classics Society - Joe Cordery (Year 12)
Medics Society - Liam Carroll (Year 12)
Physics & Engineering Society - Weronika Raszewska (Year 12)
Literature Society - Sam Luker-Brown (Year 12)

Arts & Humanities
Was Emperor Augustus a Keynesian? - Matthew Roberts (Year 12)
The Scarcity Paradox - Shunta Takino (Year 13), p. 10
Roman religion was essentially a mechanism of social control. Discuss. - Max Lewthwaite (Year 12), p. 12
La coupe du monde 2014 - Maya Makinde (Year 12), p. 17
Rainer Werner Fassbinder - Der Vater des Deutschen Autorenkinos im Kontext - Alaric Belmain (Year 12), p. 18
Elige un tema de la obra que has estudiado que te parece importante. Explica cómo desarrolla el autor este tema y por qué te parece clave - Chris Leech (Year 13), p. 20
Close Analysis of Raymond Carver's "Little Things" - Emily Macpherson-Smith (Year 12), p. 21
What similarities or contrasts can be drawn between the Music of Haydn and Debussy? - Lucy Morrell (Year 12), p. 23
"Enduring Love and Closer are traditional narratives with nothing new to offer." How does your reading agree with this statement? - Jack Bradfield (Year 12), p. 25

Maths & Science
Mathematical Chaos in a Nutshell - Alastair Haig (Year 12), p. 28
Is Human Intelligence a product of Genes or the Environment? - Abhishek Patel (Year 12), p. 30
Should the Milwaukee protocol be used as a treatment for Rabies? - Caterina Hall (Year 13), p. 32
Stellar Physics - James Kershaw (Year 12), p. 33
The Power of Capsaicin - Connor Smieja (Year 12), p. 34
Spacecraft Propulsion - Akhilesh Amit (Year 12), p. 35
The Biophysics of Flight - Elena Rastorgueva (Year 12), p. 37
Why is engineering the key to a strong economic future in the UK? - Daniel Fargie (Year 12), p. 38
How Physics completed Chemistry - Quang Tu (Year 12), p. 40
The Truth about Confirmation Bias - Chandan Dodeja (Year 12), p. 42
How 300mg of Aspirin can turn your day around - Matipa Chieza (Year 12), p. 43
The Story of the Atomic Structure - Danielle Hasoon (Year 12), p. 45
Real World Applications of Sci-Fi technology - Saad Khan (Year 12), p. 46
GFP: The Shining Light of Biomedical Research - Eamon Hassan (Year 12), p. 47

Higher Projects
How did ammonite faunas change during the British Albian? - Thomas Miller (Year 11), p. 50
The British Economy: Why did it enter recession and how can the national debt and deficit be dealt with? - Rishil Patel (Year 11), p. 55
Will the mathematical innovations of the future come from computers? - Rowan Wright (Year 11), p. 63
Is Interstellar Travel achievable within 100 years? - Oscar Hinze (Year 11), p. 67
How has Colonisation Impacted Sri Lanka? - Jeevan Ravindran (Year 11), p. 71

Extended Projects
Is Parthenogenesis in insects a viable alternative to sexual reproduction? - Alexandros Adamoulas (Year 13), p. 79
To what extent has the definition of English been changed since it has become a World Language? - Sinead O'Connor (Year 13), p. 91
Is Quantum Mechanics Philosophically Justified? - Louise Selway (Year 13), p. 95
Should we continue to screen for breast cancer in the UK? - Skanda Rajasundaram (Year 13), p. 104
What was the impact on classical scholarship of Michael Ventris' decipherment of Linear B? - Peter Leigh (Year 14), p. 118
Societies News

Saint Olave's has a large number of student-led societies that provide extra-curricular enrichment to a variety of departments and give students the opportunity to discover more about topics of interest. These overviews show the wonderful activities and events that each society has offered in 2014.

Political-Economy Society

Political-Economy Society has had yet another stupendous year of academic forays into the twinned realms of politics and economics. With such a wealth of topical news items pertaining to the areas the society concerns itself with, never has there been a shortage of interesting and pertinent discussions and presentations within Political-Economy Society; but the sheer breadth of talks given this year has been truly exceptional. Talks ranged from impassioned discussions on the economic structure of the USSR, and whether it deserved the socialist label it granted itself, to a presentation on the origination and redevelopment of political pressure groups in the USA; Political-Economy Society has been lucky enough to experience some of the most passionate and articulate student-led presentations ever seen in the society's history.
Yet aside from the fantastic array of student members enlightening and inspiring with their presentations, Political-Economy Society also has a tradition of reaching out into the wider world and inviting external speakers to present to the society. A particular highlight of one such occasion was Jo Johnson, Member of Parliament for Orpington, who gave an interesting talk on contemporary political issues in the UK which certainly served to engage all present in the world of politics. This was just one among a plethora of truly exceptional talks given by external speakers to the society, and hopefully it will be followed by plenty more to come.
Political-Economy Society, being open to all students in Year 11 and above, attracts a demonstrably superb membership of critically fascinated and greatly passionate individuals from all sides of the political spectrum, and is home to lively and good-natured debate between peers. These qualities have glowed very brightly this year and hopefully shall be ever more incandescent in the next.

Matthew Allen (Year 12)

Art History Society

This very fine and successful year at Art History society


has breached every traditional or modern concept of art
and challenged every message and movement. We've had
consistently engaging, enthusiastic and thought-provoking speakers from both Year 12 and Year 13, challenging the topics of death, mental illness, obsession, fear, sex and race in art, as well as questioning the importance of circles. We'd like to thank this year's two Co-Presidents, James Laing and Louis Newby, for their dedication to the art department, the knowledge they shared, and the cake they brought with them. Art History Society looks forward
to recommencing after summer, and we'll see you all there.

Matilda Boyer (Year 12)

Art Club

In Art Club this year we have thrived. We have pushed the


boundaries of art; it has become more than just a club, it has become a necessity for young people striving for artistic excellence. Above all, Art Club has become a community, a group of individual characters, summoned together by the simple call of assembly notices. It's a way of life. We have
explored a great deal of media, widening our awareness of
the realms of art and its dimensions. Featured activities
have been: popping ink-filled balloons, wire sculpture,
blind drawing, colourful bubbles, balloon modelling, speed
drawing, mono-printing, automatic sculptures, and shadow
art, to name but a few. These activities, we hope, have
inspired our young budding artists of the future to create,
to evolve and to value their artistic culture and
endeavours. It's not just about developing practical skills
and techniques with activities they may never previously
have experienced, but also about nurturing new ways of
thinking, enabling them to look at life in a different way, to
challenge the accepted, and to live the art life.

Adrian LaMoury (Year 12)



Natural Sciences Society

Natural Sciences Society has had an extremely eventful and enjoyable year. With a packed programme of presentations, attendance at UCL lectures, and quizzes, not to mention the Olavian Lecture Series, the scientific enrichment at the school has reached unprecedented levels. As the new presidents of the society (Abhishek Patel, Raunak Rao and Elena Rastorgueva), we have continued to give students the opportunity to write scientific articles, and have published our Spring Term Society Journal. Article topics ranged from the mysterious concept of dark energy to the malnutrition crisis in Sub-Saharan Africa. Each writer demonstrated great passion and enthusiasm for the scientific concepts that interested them most. It is also encouraging to see some younger students express their interest in science through the several articles submitted for the Year 9 and 10 Triple Helix Competition. We are looking to publish our second journal in the Autumn Term, which will be the sixth issue for the society.
At the society gatherings every Friday, we have been
privileged to hear from seven external speakers, who
presented on a multitude of topics. First, Dr Tom Clarke from Imperial College London presented on how the immune system recognises the presence of bacteria and protects us against infection, discussing some of his biomedical research and advancements in the understanding of immunology. March was a dynamic and
vibrant period for the society, giving rise to the name 'Science Month'. We heard from Professor Julian Evans from University College London, who gave a thought-provoking talk on 'How do we nurture creativity in the science curriculum?', capturing the imagination of many students and staff alike. This was followed by a presentation of astronomical proportions by Professor Carl Murray, titled 'Saturn's Rings from Cassini', with some fascinating images from the space probe with which Professor Murray has worked in close collaboration for over 20 years. Next, Dr Philip Zegerman from the University of Cambridge gave a presentation titled 'Beer, bread and frogs: the best recipe for cancer research', discussing various aspects of cell biology and how they can be applied to the fight against cancer.
During the final week of March, former Olavian Asher Leeks presented on the topic 'What makes us human?', and Dr Julian Ma of St George's, University of London gave a talk titled 'Biomedical Research: what is the point?', outlining how his team of scientists discovered the first ever vaccine against tooth decay, his latest research into using plants to create medicines against HIV and other sexually transmitted diseases, and the importance of biomedical research in the future to ensure that treatment is accessible, affordable and up-to-date for everyone around the globe. We also welcomed former Olavian Natsai Chieza in the summer term, who presented on her research into genetically engineering bacteria to produce coloured dyes.
On top of this, we were delighted to give students the opportunity to present on widely ranging topics, including 'The Science behind Dying' by Isaac van Bakel, 'The Game of Life' by Daniel Barovbe, and 'Mad Cow Disease' by Ben McKechnie.
Thanks to the hard work of the three previous presidents of
the society (Jenni Visuri, Fraser Boistelle and Harry
Jenkins), the headmaster, and members of staff in the
science department, the Olavian Lecture Series has
continued with high levels of success. Students, staff,
parents, friends and the local community have enjoyed
presentations by Lord Professor Robert Winston, Sir
Richard Friend, Professor Robert Freedman, Dr Adam
Rutherford and Professor Steve Jones during the Autumn
and Spring terms.
We look forward to welcoming new sixth-form students in
the forthcoming academic year to this dynamic, vibrant
society that encapsulates the essence of science at Saint Olave's.

Abhishek Patel (Year 12)

History Society

It has been another busy and productive year at History Society, with the usual array of highly informed student-led talks complemented by a broader and more eclectic range of events. The weekly meetings have gone from strength to strength, with original, enthusiastic and well-researched student presentations and debate on any historical topic you care to imagine: the rule of Charlemagne, the Indo-Pakistan Wars and a comprehensive history of Brazil, all pleasingly off-curriculum, with the passion of each student evident in their work. We have also produced our fourth annual History Society journal, on the theme of 'War' to commemorate the centenary of the outbreak of the Great War. Again entirely written and compiled by students, it's a true testament to the academic rigour of the historians of Saint Olave's.
We've also broadened our horizons this year, with our former (and sorely missed) president, Aiyan Maharasingam, leading the infamous History Society trips to lectures, museums and houses of historically ill repute all over London. Equally, we've opened the doors of Saint Olave's to all manner of historians and history-related speakers. Dr Lawrence Goldman of St Peter's, Oxford gave a highly informative talk on the historiography of FDR and the New Deal. Samuel Jones, Head of Staff at the Tate, spoke about his work, his love of art, and the flexibility offered by a history degree. Lastly, it was standing room only for a talk by Godfrey Bloom: former Olavian, former UKIP MEP and walking scandal. In a talk that was picketed by some students and deplored by others, genuine debate didn't change either side's mind, but it certainly gave everyone present some food for thought. We are looking forward to continuing the historians' struggle next term.

Matthew Roberts (Year 12)

Classics Society

The Classics Society this year has had a very interesting


series of presentations and trips that brought in larger
numbers than ever before. An exciting talk about the
'Mysteries of Linear B' from the eminent Peter Leigh, the
debate on 'Homeric Religion' by Max Lewthwaite, a dive
into the sub-marine 'Palace of Cleopatra' in the bay of
Alexandria from Joseph Cordery, and a revealing lecture
on 'Alex the Great, Alex the Mate' by Dan Finucane
provided just a fraction of the list of talks on the good, the
bad - but never ugly - ancient world.
Add this to the scholarly and well received Classics Journal
produced this year - which included some very clever and
very funny articles by pupils of all ages - and an erudite
display of enthusiasm and knowledge is revealed. But we,
like the Romans, like to indulge in a bit (a lot) of pure
entertainment and fun. With a trip to the Globe to see
Simon Armitage's 'Last Days of Troy' and a planned visit to
the National in September to see the critically acclaimed
new production of Euripides' 'Medea' as well as other
shows, you can be assured that Classics society is only
getting more exciting.
We will also be attending lectures, platform sessions, and putting on a Greek play, open to all at the school, directed by Joseph Cordery and Daniel Finucane in the new school year; a very exciting future programme indeed. There's also an extremely competitive Classics Cake Competition - I dare you not to enjoy that meeting. It will be hard to top a very successful year of exciting talks and events with Classics Society, but we are looking to the future (a pastime we classicists are unaccustomed to) to do just that. Thank you to all those who contributed, helped and provided spectacular enthusiasm, and I would like to invite all members of the school to come along and jump into the wild and wacky, mysterious and magical, and generally great Classical world.

Joe Cordery (Year 12)

Medics Society

Medics Society has had a fantastic year under the leadership of both the new and previous leaders. I think I speak on behalf of all the Medics when I say thanks to Zeinab, Tolu and Skanda for their great leadership and dedication to the Medics Society.

With myself, Ella Day and Matipa Chieza being elected as


the new leaders of the society, we knew we had a lot to live
up to! We wanted to revolutionise the Medics Society as a sign of new change, and so we decided to create a fresh new identity for the society; this will lead to the creation of the new Medics Society logo. We also wanted to start our year of leadership with a bang, so we created the MedSoc Lecture Series, in which we had a multitude of lectures by prominent specialists in their fields of Medicine and Medical Sciences, presenting on the pathways down which Medicine can take them.
It is with great excitement that I hear most Medics have
taken to reading The Epigenetics Revolution after hearing
Professor Nessa Carey present on the topic! I hope you all
find it as fascinating and as exciting as I did.
We hope to continue the success MedSoc has had this year when we welcome the new Medics who join in September. We also hope to have many more fantastic lectures from Old Olavians like Professor Tony Young (Director of Medical Innovation at the Royal Society of Medicine), and we further hope to extend our focus at MedSoc to include speakers from other fields of Medicine, with individuals such as Dr Henrietta Bowden-Jones (Consultant Psychiatrist) in the line-up for the new MedSoc Lecture Series in September. Lastly, we wish all the Medics the best of luck with their forthcoming UKCAT exams.

Liam Carroll (Year 12)

Physics & Engineering Society

This is a very exciting time for all the Physics and


Engineering enthusiasts as we begin our quest to extend
and develop our society; in effect making it even more
awesome. As a society, we have a long history of providing
the young Olavian scientists with an entertaining and
diverting way to spend their Tuesday lunch times. We are
currently open to all students in years 11, 12 and 13 with
an interest in pursuing a career in the sciences or
mathematics; however, this is by no means a requirement!
We have been extremely pleased to see an increase in the number of students who are simply curious about the world around them and thus enjoy asking questions. Our format involves an intriguing talk, given by a member of the society, every Tuesday at 1pm in S6. This creates a unique opportunity for students to share their knowledge of their favourite physics concepts with their peers. The main goal of the society is to create an environment in which students can investigate the crazy and interesting side of physics that is rarely taught in textbooks. We strive to encourage everyone to explore some of the most recent scientific discoveries and to conduct some wider reading on their favourite topics.
In December 2013 a monumental event took place: after a long and testing selection process, two new leaders were chosen for the society. United by a goal to improve the already fantastic society, Weronika Raszewska and Akhilesh Amit had first-hand experience of the saying that "with great power comes great responsibility". With a bunch of great ideas but not much practical experience, we would have been doomed if not for our amazing predecessors. We would therefore like to extend a quick thank you to Dominic Robson and Keir Bowater for their guidance and for putting up with us as a whole.

Weronika Raszewska (Year 12)

Literature Society

To those in the know, there was only one place to be over


the Thursday lunchtimes of the past school year. Myriad trips to McDonald's, attendance at Political Economy Society, whatever it is that Year 11s do with their lunchtimes: any and all prior commitments were unanimously and invariably foregone for Room 8's literary discussion of the highest order.
Incumbent President Fintan Calpin began the year with a discussion of Beowulf and the literary significance of the saga even today. In general, the member-given talks were of a particularly high standard, with discussion of everything from ghosts to videogames, Shakespeare to the Georgian Poets. I personally was very proud of how many talks moved past being about just one book that one person had read, onto broader themes that everyone could weigh in on. We also had an astonishing external talk from slam poet Joelle Taylor.
The other highlight was of course the publication of For Words; the journal was a long time in the making, but stands as a work of such range and professionalism that I doubt any of us begrudged the wait. We can only hope that this year's edition manages to fill such enormous proverbial boots. In a shock twist, there was not one Society President to take over from Finn, but three: Jack Bradfield, Rachel Wood and Sam Luker-Brown. I am sure I speak for all three of us when I say how honoured and excited I feel at the prospect of running Literature Society over the coming year.

Sam Luker-Brown (Year 12)


Arts & Humanities


Was Emperor Augustus
a Keynesian?

"It is astonishing what foolish things one


can temporarily believe if one thinks too
long alone, particularly in economics."
John Maynard Keynes
1,869 years separate the death of Imperator Caesar Divi Augustus, the first true Emperor of Rome, and the birth of John Maynard Keynes, the single most influential economist of the 20th Century. Those centuries saw the fall of the Ancient world, the rise of the old world and the meteoric ascension of the new. Some would argue that any mutual examination of these two men is rendered null and void by the epochs that yawn cavernously between them. However, I would argue that the chronological distance between these two men lends itself to a sense of detachment when comparing their ideas, actions and words, allowing for a purer distillation of their beliefs, unsullied by the smears of their contemporaries.
Before we ask if Augustus himself was a Keynesian, and define what we mean by Keynesian economics, we must first establish whether the Roman economy was sufficiently developed for us to apply 20th Century models and theories to its functions and politics. For the sake of ease, I intend to examine the economy of Rome through the lens of Polanyi's three-part definition: reciprocity, redistribution and exchange[1]. These definitions identify three separate solutions to the economic problem: a feudal system of social obligation, a centrist system of redistribution and a free-market system of exchange. It is evident that a ruler in an economy confined to any single one of these systems could not be described as a Keynesian, as the interaction between market forces and state intervention is a fundamental aspect of Keynesian
economics. Whilst some would argue that Rome only really fulfils one of these criteria, I'd argue that the Roman economy was a sufficient blend of all three, such that it was developed enough to be labelled with a term from a patently more advanced economy and economist.

[1] Polanyi, K (1977) The Livelihood of Man, New York: Academic Press

The Roman Economy


The often overlooked fact about the Roman economy is its immense size and complexity. In his dissection of the Roman economy, Goldsmith estimates the population of the empire in 14 AD to have been roughly 55 million[2]; whilst other studies range around this figure, some as high as 100 million, Goldsmith's estimate remains in the middle ground of the more extreme estimates. These 55 million people living under the cosh of Rome were involved in a huge range of economic activities (industrial, agricultural and manufacturing), with extensive evidence of mechanisation, primarily through hydraulic means. For example, water sluicing in Iberian mines allowed the Roman economy to produce a raw tonnage of ore unmatched until the industrial revolution[3]. In demographic terms, approximately 5% of the Roman population were enslaved[4], a major component of the spoils of war which drove growth massively from 200BC onwards. A very striking aspect of the demography of the early empire was the enormous wealth inequality, contributed to by the omnipresence of slavery. This had an immense impact on the plebeian lower classes, who rarely owned land, whereas senators' estates, manned primarily by slaves rather than tenant farmers, sprawled for hundreds of acres across the Italian countryside. These immensely wealthy senators represented a cadre of society that paid at private expense for the majority of public buildings in the late republic, and arguably formed the basis for the Keynesian actions of Augustus in the early Empire.
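(A brief aside, added for clarity and not drawn from the essay's sources: the Gini coefficient plotted in Fig. 1 below is conventionally defined, for wealth holdings \(x_1, \dots, x_n\) with mean \(\bar{x}\), as

\[ G = \frac{\sum_{i=1}^{n}\sum_{j=1}^{n} \lvert x_i - x_j \rvert}{2 n^{2} \bar{x}}, \qquad 0 \le G \le 1, \]

where \(G = 0\) denotes perfect equality and values approaching 1 denote extreme concentration of wealth.)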

[Fig. 1: Gini coefficient graph of the late Republic[5]]

[2] Goldsmith, R W (1984) "An Estimate of the Size and Structure of the National Product of the Early Roman Empire", Review of Income and Wealth, Volume 30
[3] Wilson, A (2002) "Machines, Power and the Ancient Economy", Journal of Roman Studies, vol. 92
[4] https://www.princeton.edu/~pswpc/pdfs/scheidel/010901.pdf (accessed 5/7/2014)
[5] https://www.princeton.edu/~pswpc/pdfs/scheidel/010901.pdf (accessed 5/7/2014)

On the surface, the Roman economy may appear to fit the model of social reciprocity more than any other; the oligarchs of the senate, in a system technically democratic
in nature but closer to feudal fealty in reality, funded and
supported myriad projects for the betterment of the Roman
people. Triumphant generals would regularly fund the maintenance of infrastructure such as roads from their personal wealth, under the direction of the senate[6], thus increasing aggregate demand in the Roman economy. Furthermore, the annona, or grain dole for Roman citizens, provided 84,000 tonnes of corn for 200,000 people per annum in the city[7]; this massive state-led injection into the circular flow of income is plentiful evidence for the centralised manipulation of aggregate demand in the late Roman republic. These acts evidently show an economic model with both redistribution and reciprocity, to the extent that a central authority, either the senate or an autocratic Principate, would have enough established authority to actively manipulate the level of aggregate demand within the economy, thus conforming to a Keynesian model of economic control.
However, for reasons that will be discussed imminently, a purely reciprocal and redistributive economic model would not fulfil the criteria required for the Keynesian model. Therefore, in order to label Augustus a Keynesian, we must establish the presence of market forces in the Ancient Roman world and prove that the Mediterranean market for goods and services was sufficiently developed for interplay between state manipulation of demand and natural commercial activity; an uneven dominance of one over the other would prohibit us from describing Augustus's policies as Keynesian. There is a wealth of evidence for private enterprise and private-sector transactions which shows the genuine presence of a developed economy in the ancient world. Firstly, the shipping trade was dominated by private firms competing for and fulfilling sophisticated contracts, including insurance frameworks, letters of credit and a quality assurance scheme for transported grain[8]. Furthermore, large firms concentrated in specific provinces of the empire were able to cut administrative costs through mass production of goods, such as the large number of metallurgy workshops concentrated in Iberia. The fact that these economic transactions took place outside the auspices of centralised governmental control clearly shows that there was a genuine free-market economy in the Ancient Mediterranean, fulfilling Polanyi's three separate economic descriptors and thus allowing us to view the actions of Augustus as the political ruler of a developed economy, who can feasibly be described as a Keynesian.

[6] http://www.historytoday.com/logan-thompson/roman-roads (accessed 11/7/2014)
[7] http://www.fee.org/the_freeman/detail/poor-relief-in-ancient-rome (accessed 2/7/2014)
[8] Temin, P (2006) "The Economy of the Early Roman Empire", Journal of Economic Perspectives, pp. 133-151

Keynesian Economics

In order to truly understand whether Augustus was a Keynesian, we need to understand what is meant by Keynesian economics. Keynes set out the core of his economic beliefs in his magnum opus The General Theory of Employment, Interest and Money; they can be summarised thus[9], with a modern textbook condensation given after the list:

- Demand is the most crucial aspect of the economy; demand not only determines output, but also plays the primary role in the cycle of boom and bust.
- Manipulation of aggregate demand by the state is a vital tool in any developed economy; government spending is the best response in the face of recession.
- A successful economy will have significant input from both the private and public sectors, breaking away from the laissez-faire consensus that triumphed throughout the 19th Century.
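In modern textbook notation (a standard condensation rather than Keynes's own wording), these propositions reduce to the aggregate demand identity and the spending multiplier:

\[ AD = C + I + G + (X - M), \qquad k = \frac{1}{1 - \mathrm{MPC}}, \]

so that an injection of government spending \(\Delta G\) raises equilibrium output by roughly \(k\,\Delta G\); the closer the marginal propensity to consume (MPC) is to 1, the larger the effect.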

Given the constraint of Polanyi's three stratified answers to the economic problem, it is clear that Keynesian economics relies on a mixed economy with a strong exchange-based private sector which drives investment and a large proportion of ordinary transactions. However, in a Keynesian system the State must also act in the redistributive or reciprocal modes: providing public goods in times of prosperity, and artificially increasing aggregate demand in response to recession in order to drive the recovery. It is evident that the Roman economy combines Polanyi's three categories enough for Keynesian fiscal policy to be employed in the Ancient World.
As Keynes was primarily writing in the aftermath of the Wall Street Crash and the Great Depression, it is a fair assertion that he primarily believed in using public spending to alleviate the damage of the boom and bust cycle. This is far and away the most compelling similarity between Keynes's and Augustus's doctrines, as the first Emperor inherited a nation torn apart by more than fifty years of warring, desolation and genocide. His response, over a 41-year reign, was to spend enormous amounts of his own wealth and public money, restoring the output of the economy to its previous high through expansionary fiscal policy; to this end Augustus was a textbook Keynesian.

Bust and Boom


The Roman Civil Wars of 49 BC to 30 BC quinquimated the population of Rome, with twenty percent of the adult male population dying in the conflicts[10]. These wars were particularly costly to Rome because, since the fall of both Carthage and Corinth in 146 BC, Rome had been the undisputed ruler of the Mediterranean: casualties or damage on either side of the campaign equally damaged the strength and capacity of Rome and its economy. Unsurprisingly, come Augustus's ascension in 27 BC, Rome was in a worse position than it had been since the sack of Rome at the hands of the Gauls in 390 BC. Augustus sought to repair the damage done by the wars through an extensive programme of spending and infrastructure development; this action inadvertently helped the recovery even further, with government spending feeding back into the wider economy thanks to the multiplier effect, leading to a greater than proportional increase in aggregate demand at a time when political instability and conflict had left consumer and business confidence in the future of Rome at an all-time low.

[9] http://www.maynardkeynes.org/maynard-keynes-economics.html (accessed 3/7/2014)
[10] Scullard, H H (1959) From the Gracchi to Nero, Routledge
Expansionary fiscal actions included the repair of dilapidated roads across the empire, done at the expense of the senate[11], and the building of aqueducts with public money. Augustus, who offered the public coffers more than 150,000,000[12] sesterces (0.75% of contemporaneous GDP), personally claimed responsibility for the building and repair of 82 temples across the Empire. The sheer quantity of materials and labour this required would have been of enormous benefit to firms operating around the Mediterranean, and thus represents a Keynesian injection of public spending into the circular flow, similar to the building of the Hoover Dam or the Autobahn projects of 1930s America and Germany. Whilst Augustus himself would have had no awareness of Keynesian theory, aggregate demand or the multiplier effect, his actions are pre-eminently Keynesian in nature: he sought to actively repair and replenish the economy through lavish public spending, simultaneously winning popularity and founding a dynasty that would last for centuries, passing its name on to the royal families of Germany and Russia centuries later. Furthermore, the Keynesian approach genuinely worked: the period following his rule saw peace, prosperity and an HDI figure unparalleled until the 1700s[13].
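(The 0.75% figure can be sanity-checked from the essay's own numbers: a donation of 150,000,000 sesterces amounting to 0.75% of GDP implies

\[ \text{GDP} \approx \frac{150{,}000{,}000}{0.0075} = 2 \times 10^{10} \text{ sesterces}, \]

in line with Goldsmith's estimate of a national product of roughly 20 billion sesterces. Under the multiplier logic invoked above, and assuming a purely illustrative MPC of 0.5 (no ancient figure exists), the final impact of such an injection would be about \(1/(1-0.5) = 2\) times its face value, i.e. around 1.5% of GDP.)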
However, it could be argued that Augustus was not genuinely a Keynesian. The primary argument in favour of this is that Augustus didn't take on any public debt during his rule; in traditional Keynesian theory the shortfall from spending and tax cuts is recouped through debt, but due to the huge amounts of money flooding into Rome from the provinces, there was no such shortfall. Similarly, the growth experienced throughout this period was due to an influx of wealth from newly conquered provinces such as Egypt. The injection of this income was inevitable as the Romans expanded their borders, and due to the hierarchical Roman system this wealth went straight into the hands of the ruling elite: the senatorial oligarchy. While it's a nuanced distinction to draw, it is clear that there is a difference between Keynesian public spending and the socially obligated actions of the money-grabbing patricians. However, this argument is null and void, as regardless of the mechanism or intention of the spending, Augustus's actions still had a prominently Keynesian bent and effect.

[11] Jones, A H M (1970) Augustus, New York: Norton & Company
[12] http://classics.mit.edu/Augustus/deeds.html (accessed 20/7/2014)
[13] Temin, P (2006) "The Economy of the Early Roman Empire", Journal of Economic Perspectives, pp. 133-151

A much more compelling argument against Augustus's position as a Keynesian is his lack of alternatives: whilst
there was a prominent private banking sector in Ancient Rome, the principate and senate had no authority to regulate these bankers beyond a legal maximum lending rate. This was never utilised and remained at 12% for the entirety of Augustus's reign; therefore Augustus was not a Keynesian by choice but by necessity, as monetarism was entirely outside his options as Emperor.

Overall, the actions of Augustus were genuinely Keynesian in nature; but more than that, they were highly successful in transforming the Empire from a war-ravaged, ailing state into an unparalleled superpower. The Julio-Claudian dynasty and its successors ruled the Mediterranean unequivocally into the 3rd Century AD; this strength was thanks to the enormous successes of Augustus in founding a prosperous Empire on the back of proto-Keynesian economics.

Matthew Roberts (Year 12)


_______________________________________________________

The Scarcity Paradox

"From adversity comes strength": the role of natural resource scarcity and human capital in long-term economic development
The 'Resource Curse' has been one of the most debated and discussed theories within economic development since the late 20th century. This notion suggests that economies that are heavily endowed with natural resources often experience slowdowns in economic growth and are hindered by imbalanced and unsustainable development. Most countries that suffer from the phenomenon are characterised by weak governance, corruption and surging inequality. The typical explanation refers to the 'Dutch Disease', a case where the manufacturing and agricultural sectors of an economy are made less competitive in the global market as a result of a sudden currency appreciation. This occurs when a country discovers a reservoir of resources, triggering a large inflow of foreign currency into the market.
However, an alternative perspective on the Resource Curse remains largely ignored and unconsidered. Instead of focusing on why resource-rich countries perform so poorly, economists ought to question why many resource-poor countries fare so well. A large number of countries that have experienced significant development in the post-war era suffer from a chronic shortage of natural resources. This phenomenon, which I will call the Scarcity Paradox, has been observed mainly in the East Asian Tiger economies, and as far afield as Israel. The Scarcity Paradox hypothesises that resource-poor countries naturally develop a comparative advantage in knowledge-based industries by building a large stock of human capital, a critical component of long-run economic growth.
The current wealth of Japan's citizens is the result of the country benefiting from the Scarcity Paradox. Japan's WWII surrender on 2nd September 1945 marked a turning point in its domestic economy and sowed the seeds for a 'Japanese Miracle' spanning several decades. The government aimed to build an internationally competitive technology and manufacturing sector, which included funding for university-industry partnerships allied with private investment in education. Consequently, the Japanese economy shifted away from import-dependent industries (e.g. the textile industry) towards heavier industries (e.g. the car industry) that took advantage of the country's surplus of skilled workers at the time. By the turn of the millennium, Japan's real GDP per capita had exceeded the $30,000 barrier, some six times higher than in 1960. During a similar time frame, Israel has become one of the most innovative nations in the world. Today, it is home to 'Silicon Wadi', an area concentrated with many of Israel's 60 companies listed on the NASDAQ. Israel's high annual growth rates, averaging above 5% from the late-1960s to the mid-1990s, can be largely attributed to the government's large-scale social programmes of the 1970s, which aimed to build a technology-based economy on improved education, allied with a process of structural change.
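(A quick check of the Japanese figure, using only the endpoints quoted above: a six-fold rise in real GDP per capita between 1960 and 2000 implies a compound annual growth rate of

\[ g = 6^{1/40} - 1 \approx 0.046, \]

that is, roughly 4.6% real growth per person per year sustained for four decades, several times the rate typical of mature Western economies over the same period.)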
The first element of the Scarcity Paradox concerns the link between natural resource scarcity and human capital accumulation. Heckscher (1919) and Ohlin (1933) argued that resource-poor countries such as Japan would not specialise in and export primary goods. The Scarcity Paradox argues that these countries are forced into developing a comparative advantage through other means, and therefore most, if not all, turn to building a productive and skilled workforce via an effective education system. Unlike in resource-rich nations, the key determinant of an individual's future income is the extent to which they can build knowledge and accumulate skills, giving each person a strong motive to work with the aim of attaining a place at an internationally recognised university. It is also of critical importance that the public understands that an economy fundamentally relies on its net human capital, which is determined by the quality of education provided and a willingness to work. Such fundamental values are held throughout the Far East and in many resource-poor nations, where the social pressure to perform at school borders on the extreme, giving further support to the Scarcity Paradox. Equally important is that teachers are given the considerable level of respect they deserve. In fact, according to the 2013 Global Teacher Status Index, the Chinese public considers teachers to be comparable to doctors in terms of social class, whilst in the resource-rich United States they are equivalent to librarians. Although China has a significant stock of natural resources, it appears significantly smaller when divided amongst its 1.3 billion citizens. In the words of Julian Simon, the Chinese understand that "the main fuel to speed progress is our stock of knowledge".
[Figure: natural resource rents vs. student knowledge and skills. Source: OECD (2012)]

The link between resource scarcity and human capital accumulation is supported resoundingly by the findings of the OECD. According to the OECD's Andreas Schleicher, there is a significant negative relationship between the money countries extract from natural resources and the knowledge and skills of their high-school population (see the diagram above).
The second element of the Scarcity Paradox is well-documented: the importance of human capital stock for economic growth and development cannot be exaggerated. As knowledge and skills are built up in an economy, finite resources are used more efficiently and labour productivity rises, stimulating growth. Furthermore, fewer workers are required in the subsistence sector, creating a surplus of labour. Higher wages in the "capitalist" sector (e.g. manufacturing) attract these surplus workers. As Lewis (1954) argues, this migration of labour leads to a gain in output due to the higher marginal product of labour within the capitalist sector. This has been demonstrated in Israel, where only 2.6% of the workforce is currently employed in agriculture, in comparison to 17.5% in 1958. During this period real GDP per capita almost quadrupled and now exceeds $20,000. Human capital also accelerates the rate at which new products and innovations arise, making firms dynamically efficient and encouraging investment in capital and R&D. Since most imported goods cannot be reverse-engineered, an important role of human capital is to facilitate the adoption of new techniques, as proposed by Nelson and Phelps (1963). Lucas (1988) extended this concept, suggesting that since knowledge and skills are infectious and cannot be contained, an accumulation of human capital benefits the macroeconomy on a more than proportional scale.
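A compact way to see Lucas's point, sketched here in the spirit of Lucas (1988) rather than quoted from it: let \(h\) be an individual worker's human capital and \(h_a\) the economy-wide average, so that output takes the form

\[ Y = A\,K^{\beta}\,(u\,h\,L)^{1-\beta}\,h_a^{\gamma}, \qquad \gamma > 0, \]

where \(u\) is the fraction of time spent working rather than learning. Because the external term \(h_a^{\gamma}\) lies outside any individual's control, a rise in average human capital raises every firm's output at once; this is exactly the "infectious", more than proportional benefit described above.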
Unfortunately, many governments have misinterpreted the Scarcity Paradox and the success of resource-poor nations as something that can be replicated simply by government spending on education. In fact, public spending on education bears little correlation to student achievement (see the diagram below). A 5.1% growth in education spending under Labour during the first decade of the century did little to boost student performance, and the proposed cuts to educational spending may be a brave and admirable decision. In the UK, too much of the spending has been focused on interactive whiteboards and other new gadgets, yet in the highest-ranking countries in TIMSS (Trends in International Mathematics and Science Study), many teachers use simple blackboards with chalk. This example illustrates that the accumulation of human capital arises not through spending alone, but also through changing the attitudes of the general public towards education.

[Figure: public spending on education vs. student achievement. Source: EducationNext (2001)]

Resource-poor nations have become prosperous frontier economies within the global context by placing education and human capital accumulation at the forefront of public thought and government policy. They give us belief that, from what appears to be the most adverse and undesirable situation, we can draw on our strengths to prosper and succeed. The Scarcity Paradox clearly suggests that it is about time both developing and developed countries gave education and human capital the serious consideration they deserve.

Shunta Takino (Year 13)

_______________________________________________________

Roman religion was essentially a mechanism of social control. Discuss.

Introduction

"Die Religion ist das Opium des Volkes" ("Religion is the opium of the people"). The perennial words of Karl Marx, from his essay entitled A Contribution to the Critique of Hegel's Philosophy of Right, reflect a
profound scepticism of the validity of religion in the modern day. Whether such a proposition can be levelled against the religion of ancient Rome is a matter which has fostered much debate. Whilst the Romans considered themselves deeply religious in their collective pietas - a notoriously difficult term to define - to even call the religion of Rome a religion is, in the eyes of primitivist theorists such as John Clarke Stobart, an audacious statement in itself. Stobart, in his once influential book The Grandeur That Was Rome, remarks that the Romans "were never really a religious people", based upon the premise that they "lacked the imagination to be really devout"[14]. Primitivist-based criticism aside, however (much of which has been denounced in recent years[15]), the system of rule or government, be it during the Republican or Imperial episodes, was in many respects theocratic in its nature: law and politics, both fundamentally mechanisms of social order, were, from the early Republic, intertwined with religious institutions, whilst the religion itself was, as Franz Cumont stated, "subordinated to politics"[16], indeed often instrumentalised as a political tool.

[14] J C Stobart's The Grandeur That Was Rome reflects the primitivist criticism of Roman religion prevalent during the early 20th century, p41 (1912). For other works of similar criticism or mode of thought, see Theodor Mommsen's The History of Rome (1864) and Franz Cumont's The Oriental Religions in Roman Paganism (1906). The primitivist theories of Stobart and others have largely been dismissed in recent years through further scholarship; see Scheid's An Introduction to Roman Religion (2003).
[15] See Religions of Rome by Mary Beard, John North and Simon Price (1998) for a recent piece of scholarship which largely denounces, for example, the once generally accepted theories of Georges Dumézil regarding the origins of Roman religion and culture (p14-16).
However, the fact that Roman religion was often politicised does not necessarily lead to the conclusion that it was definitively, without any doubt, a mechanism for social oppression. The following examination of several key principles should illustrate that the religion of Rome, at its traditional core, provided Roman citizens with a degree of liberty, despite the decline of these values as the religion itself became tainted and spoiled by decades of political exploitation. A succinct overview of the structure is perhaps needed. This essay will comprise three sections: firstly, some of the central principles and concepts at the core of Roman religion; secondly, a brief look at religion during the Roman Republic; and thirdly, an equally brief look at Roman religion during the Augustan principate. It must be noted that, with the title in mind, focus will be placed primarily on the evidence surrounding the use of religion as a tool for social and political ends. This is not an attempt to cover the immense breadth of all aspects pertaining to Republican or Augustan religion, since this is beyond the remit of this exercise.

Principles and Concepts


And so, to begin with, a very brief exploration of the concepts and principles which underpinned the religion of Rome, informed throughout by Professor John Scheid's An Introduction to Roman Religion. Firstly, as Scheid aptly points out at the very opening of his text, this was a religion "without revelation, without revealed books, without dogma and without orthodoxy".[17] Indeed, the only real obligation to be fulfilled by Romans was that of orthopraxy, the correct performance of strict and prescribed rituals. The purpose of such ritual was similar to that of the Greeks: to secure the favour and protection of the gods so as to avoid the malevolence that drawing their ire brings, thus enjoying sustained periods of prosperity devoid of suffering. This central principle is known as the pax deorum. A typical Roman prayer, taken from Cato, serves to illustrate the simplicity of this contract between citizen and deity:

"Whether thou art a god or goddess to whom that grove is sacred, may it be justice in thine eyes to sacrifice a pig for a peace-offering in order that the holy influences may be restrained. For this cause, whether I perform the sacrifice or anyone else at my orders, may it be rightly done. For that cause, in sacrificing this pig for a peace-offering, I pray thee honest prayers that thou mayest be kind and be propitious to me and my house and my slaves and children. For these causes be thou blessed with the sacrifice of this pig for a peace-offering."[18]
The most interesting aspect of this prayer for peace is the
unavoidable nebulousness of the deity or supernatural
force to whom the prayer is directed, demonstrated in that
the gender of that god or goddess bears almost no
importance. This bears the hallmarks of a central principle
of Roman religion: citizens were at liberty to worship or visualise any deity or divine manifestation they wished, in so far as this was essentially an irrelevance; the predominant concern was the correct performance of sacrifice, prayer and various other forms of worship. To misplace or
omit a word from a prayer, such as the one above, could
prove, in the eyes of the Romans, immensely foolish and
dangerous.
This leads to a further fundamental concept of Roman religion, that of religio, a notoriously difficult term to translate and define. The word has often been viewed as derived from the verb religare, meaning to bind, leading to the definition of religio as a binding relationship between citizens and the gods with, as Scheid states, "scrupulous observance of religious obligations"[19]. Cicero views religio as "the pious cult of the gods"[20], which retains the same underlying principle of observing one's inextricable bond with the gods through correct ritual practice. The foundations of this bond were therefore not of a sentimental or personal nature, but rather formal regulations which citizens were obliged to fulfil. Religio supposedly brought social harmony and prosperity to Rome.

Pietas, an aforementioned term, is again a word with almost no specific definition, but similar in effect to religio. To rely on Cicero a second time, piety is "justice with regard to the gods"[21]: what one does is, with regard to the gods, constantly observed and judged by divinity. Equally, one could commit impietas by denying the gods their divine entitlements of ritual and prayer, or by damaging their divine property, that which is sacer. Impietas could be committed inadvertently (imprudens) or with purposeful malice (prudens dolo malo). The punishment, both divine and human, varied according to whether the impietas was accidental or purposive.
[16] Quotation taken from an extract from Franz Cumont's The Oriental Religions in Roman Paganism (1906) found in An Introduction to Roman Religion, p7 (Indiana: Indiana University Press, 2003).
[17] J Scheid's An Introduction to Roman Religion (2003). Scheid comments on the ritualistic and social nature of Roman religion as well as the absence of dogma or creed. Individuals had religious duties "imposed on them by their birth, adoption, affranchisement or grant of Roman citizenship", p18.
[18] J C Stobart, The Grandeur That Was Rome, p46 (1912). Stobart states that the prayer is "as we have it in old Cato", with no further specification of its provenance provided.
[19] J Scheid, An Introduction to Roman Religion, p22 (2003). Scheid analyses a large number of principles and concepts underlying the religion (p18-30); for the purposes of this essay only several central principles have been focused upon.
[20] J Scheid, An Introduction to Roman Religion, p23 (2003). Scheid takes this quote from Cicero's work, On the Nature of the Gods.
[21] J Scheid, An Introduction to Roman Religion, p26 (2003). This quote is also taken from On the Nature of the Gods by Cicero.

These underlying principles would seem to give a first impression of quite a puritanical religion. However, the religious liberty of the citizen was also important. Worship of the gods was conducted not out of fear but out of, as Scheid states, "a civic rationality that guaranteed the liberty and dignity of its members both human and divine". This liberty manifests itself most apparently in the extensive freedom of worship citizens were theoretically granted: not in terms of how they conducted their worship (this was rigid) but in terms of whom or what they worshipped. Absence of dogma or creed, extensive presence of strict orthopraxy: this was the essence of Roman pagan tradition. However, whilst in principle Roman religion was, to an extent, liberal in its nature, this did not leave it immune to usage as a tool for oppression of particular religious or secular groups, as shall be demonstrated in the following section.[22]

Roman religion during the Republic


And now to provide an extremely brief overview of religion during the Republic. During this period, from 509 to 27BC, religion was intertwined with the law and politics of the governing body, that of the ruling noble families. The phrase 'patrician monopoly' is almost synonymous with the Republican period of Rome, defined by the hold over secular and sacred office maintained by the ruling families. During the Republic, religion formed a branch of state administration, and can therefore be seen to have played a major role in the governance of the ordinary Roman people. Pamela Bradley, in her book Ancient Rome: Using Evidence, states that religion was "subordinate to the interests of the state".[23] If one examines some of the roles that religious officials held, this would appear undeniable: pontiffs advised chief magistrates, were guardians of the Divine Law and established the first criminal code, whilst fetiales interpreted the laws themselves.
One of the ways this patrician dominance was exerted was
through the electoral system. The lex Ogulnia, a law
passed in 300BC, set out a new method of election for
religious office, known as minor pars populi voting by the
lesser part of the people which dictated that, out of the 35
tribes which had right of suffrage, only 17 could vote in an
election. Which 17 tribes would be entitled to vote was
decided by lot. Here one can see a degree resistance from
the patrician circle to greater plebeian influence in the
deciding of sacred office. Furthermore, candidates for office
22 Scheid makes reference to the Bacchanalia Scandal of 186BC, during the
Republic, p27. This involved, as Scheid states, numerous "repressive
measures taken against astrologers, charlatans and philosophers", and the
persecution of Christians. It would thus appear evident that religion was
certainly used at times as a tool for social oppression of groups whom the
ruling body considered objectionable.
23 P Bradley, section entitled "Early Republic", from her book Ancient Rome:
Using Evidence (1990). As a side note, Bradley echoes the views of those
such as Mommsen and Stobart in stating that Roman religion was cold,
formal and lacked emotional involvement. Thus, not all modern scholarship
disagrees entirely with the primitivist theories of some of the early 20th
century classicists.



Furthermore, candidates for office already had to be
pontifices and had to be nominated by existing members,
who could, at least to an extent, prevent the arrival of new
members who they felt posed a threat to their dominance
within the religious elite. It would therefore be
unsurprising to find that, from what records survive, the
priesthoods were "virtually monopolised by members of the
best established, elite families".24 This illustrates one of
the most striking characteristics of Roman religion: the
hold over secular and sacred office by the same men.
However, that is not to say that the masses retained no
influence at all in sacred office: the lex Ogulnia expanded
membership to admit the plebeians, with a greater number
of plebeian pontifices maximi seen in subsequent years.
Either way, this transformation of the electoral system
was, as stated in Religions of Rome by Mary Beard, John
North and Simon Price, "an important step in the
politicisation of the priestly colleges".25


The point regarding the monopoly of office, both sacred and
secular, has been largely covered. Yet control by the
pontifices took other forms as well, some of a more social
and thus, in the context of this essay, more relevant
nature. One of the most apparent hallmarks of Roman
religion is the influx of foreign deities and cults into the
state religion itself, and, during the third century BC, this
process was carefully supervised and, in some cases
suppressed, by religious authority, as the ruling families
suppressed and outlawed certain cults which they
considered unaligned with Roman religious beliefs. In their
book A History of Rome, H H Scullard and M Cary remark
that no "new format of a more imaginative and exacting
religion was allowed to disturb the mental composure of
the Roman people".26 Cults which threatened public order
were stamped out. Such control by the pontifices and
ruling families, to maintain the stability of the domestic
religion against more dynamic foreign cults, is a prime
example of the usage of religion as a medium for social
control and maintenance.
During the second century, political exploitation of religion
by the elite reached its height. In the words of Mary Beard,
Simon Price and John North in Religions of Rome, the
Greek historian Polybius was of the opinion that religion
"should be seen as a means by which the ruling élite
manipulated and disciplined their people" and that it was
"by élite manipulation of popular religious attitudes that
social order was maintained".27 Despite the contradictory
nature of Polybius's observations - a criticism made in the
same passage of Religions of Rome - this was more or less
the state of religion in the Republic. The point made
concerning popular religious attitudes is exemplified in the
manipulation of the pax deorum - a popular religious
attitude briefly mentioned earlier on - to nothing more
than a "conspiracy between the state gods and the
governing aristocracy for the maintenance of the latter's
ascendancy".28
24 M Beard, J North and S Price: Priests in Politics, p10 (2013)
25 M Beard, J North and S Price: taken from the section "Priests in Politics",
p100 (2013)
26 H H Scullard and M Cary, A History of Rome, p108 (1975). Taken from the
section on Religion during the Republic.
27 M Beard, J North and S Price, from the section entitled "The religious
situation of the mid-second century" (2013)



Scullard and Cary point out the readiness of the ruling
class to exploit religion as an instrument of their class
ascendancy.29 The subordination of res divinae to political
convenience formed part of this exploitation of religion,
evidenced in the Aelian and Fufian Laws, whereby a
magistrate or tribune could disband all legislative
assemblies if an unfavourable omen had been witnessed.
This, in the eyes of Scullard and Cary, "virtually sanctioned
the abuse of divination to suit political exigencies".30 In the
town of Ephesus, bequeathed to the Romans in 133BC, the
inhabitants were granted freedom of worship, on the
condition that they paid their taxes to Rome. Here we see
religion used as little more than a bargaining chip, a
means to an end, to induce order and loyalty amongst
citizens outside of Rome.
During the second century, the Republican government
maintained its suppressive policy on foreign cults and on
religious and secular groups, thus utilising religion to
maintain social order, as Polybius had believed. In 139BC,
the praetor peregrinus of the time issued an expulsion
order banishing all members of the Jewish sect and
astrologers from Rome, the former expelled for their efforts
in proselytism. This continued into the first century, when
the Egyptian cults of Isis and Sarapis were banned from
entering Rome, with altars to Isis on the Capitol destroyed
in 58BC. During this century, traditional worship was
undergoing ossification as the religion of Rome passed
through an apparent state of stagnation.31
Thus, it was during the Republican period in Rome that
religion became tainted and spoiled as a tool of political
exploitation and social control. Whilst in principle Roman
religion was associated with the religious liberty of the
citizen, this was a great divergence from the reality in
which religion found itself during the Roman Republic.
Such use of religion for political ends is also to be seen in
the Augustan principate, albeit on a more expedient and
argute level.

Religion during the reign of Augustus


In this final section, the nature of religion during the
period of Augustus's rule will be briefly examined. This
period is perhaps best summarised by Pamela Bradley,
who remarks that Augustus's religious policy reflected his
"genuine conservative inclinations as well as his political
acumen".32 Augustus certainly possessed a belief in
traditional moral values, exemplified in part by the exile of
his own daughter Julia the Elder to the island of
Pandataria in 2BC for adultery during her marriage to
Tiberius, even calling her - allegedly - "a disease in my
flesh".

28 H H Scullard and M Cary, A History of Rome, p311 (1975)
29 H H Scullard and M Cary, A History of Rome, p198 (1975)
30 H H Scullard and M Cary, A History of Rome, p199 (1975)
31 H H Scullard and M Cary, A History of Rome, p311 (1975). Nearly all of the
writing on Roman religion during the Republic to be found in A History of
Rome focuses on the abuse and exploitation of religion to further political
interest or suppress and control the Roman people. This would appear to
reflect the consensus of opinion regarding religion during the Republic.
32 P Bradley, Ancient Rome: Using Evidence, p439 (1990)



This aside, however, Augustus was aware that religion
would prove an effective tool both in strengthening his own
and his family's image and in controlling the people
through instilling traditional religious values. He believed
it necessary to rejuvenate such old religious values and
practices to strengthen his regime.
One of the most conspicuous features of Augustus's
religious policy was the extensive building programme
performed during his reign. Some 82 temples were
repaired in 28BC - according to Augustus in his
Achievements - whilst over 14 temples were allegedly built
or renovated within Rome. The Temple of Mars Ultor -
Mars the Avenger - was constructed in the centre of
Augustus's new forum. Its dedication to the god of war and
vengeance provided a connection to Augustus's vengeance
on the Parthians in 20BC, with the standards lost by
Crassus placed in the interior cella. As stated in Religions
of Rome: "military glory was to be displayed in a setting
which explicitly evoked the emperor's authority".33 The
citizens of Rome, upon visiting the forum, would be
constantly reminded of the power and ascendancy of their
ruler and of the loyalty he expected. Other temples - to
Apollo, Augustus's patron god, to Magna Mater, Venus and
Divus Julius, amongst others - were constructed within the
pomerium, or sacred boundary, of Rome. With a huge
number of religious monuments within Rome as a result,
be it through renovation or construction, the presence of
religion was unavoidable and ubiquitous to the ordinary
citizen: a constant reminder of their religious duties and of
loyalty to their divinely inspired ruler.
Of the religious monuments that were built, several found
their place adjacent to Augustus's supposedly modest
residence, the domus Augusti, upon the Palatine, these
being a shrine of Vesta and a temple of Apollo. In
effectively making his own home the home of the gods,
Augustus created a "complex of divine and human
residence...clearly evok[ing] [his] divine associations".34
The emperor was proclaiming himself, through
architecture, to be equal with divinity to an almost obscene
level; the entire construction and renovation programme
was, as seen in the religious buildings connected to
Augustus's house, highly strategic and political.
Equally political was the construction of the Ara Pacis
Augustae from 13 to 9BC, explicit in its implication that
peace belonged to, and was owed to, Augustus and his
military exploits, upon his return from three years in
Hispania and Gaul. With architecture again employed as
the visual medium, the military and, to an extent, divine
supremacy of Augustus was conveyed to Rome, with the
importance of pietas and peace within the empire also
emphasised.

33 M Beard, S Price and J North, Religions of Rome, p199 (2013)
34 M Beard, J North and S Price, Religions of Rome, p198 (2013)



Inspiring obedience and loyalty formed part of the
intention behind this monument and, on a wider level,
behind the entire building programme that Augustus
presided over.
A further way through which Augustus instilled
temperance within the Roman people was the rekindling of
traditional religious values and ceremonies. These values,
proclaiming the importance of self-control and abstinence,
appealed to the people after decades of brutal and violent
excess. It was Augustus's recognition of the appeal and
resonance such religiosity would have among the Roman
population that formed part of his intention to rejuvenate
traditional religious values. Indeed, many Romans believed
that Rome's success derived from Rome's piety, with the
proclamation that "Augustan peace must rest upon the pax
deorum",35 achieved through adherence to the ius divinum
and the practice of individual pietas. Writers and poets
broadcast the values and traits that Augustus wished to
instil in the people.36 In 35BC Horace referred to Augustus
as one who "cared for Italy and the shrines of the Gods".
Augustus's relation with divinity was promulgated in more
ways than one. His establishment of the Imperial cult
cemented his own form of divinity, with temples to Rome
and Augustus built across the empire, such as the one
found in Vienne, and the emperor advertised himself as
divi filius - the son of the divine Julius Caesar. In far-flung
regions of the Roman world, Augustus was worshipped as a
god: in Egypt, for instance, he was as divine as the
Pharaohs. According to Suetonius, Augustus opposed any
temples in Italy in his name, even going as far as melting
down silver statues of himself. The Roman historian
perhaps paints Augustus in a more noble and pure light
than he deserves. Nonetheless, Augustus must have
recognised that worship of divus Augustus throughout the
empire would consolidate loyalty and order in the more
distant areas, where unrest was potentially more likely.
Augustus reinstated the Ludi Saeculares - the Saecular
Games - in 17BC. The games, as well as being a public
spectacle gaining the contentment of the people of the
capital, celebrated the past and present grandeur of Rome
whilst showcasing hopes for a new golden age of peace,
prosperity and traditional Roman values. Furthermore, the
games were successful in their "mark[ing] of the
importance of the city of Rome and of the importance of the
emperor within it".37
When Lepidus died in 12BC, Augustus succeeded him as
Pontifex Maximus, a position which became an imperial
perquisite. His election was an occasion for a
demonstration of popular support. The position was also
suitably aligned with his programme of restoration of
religious titles and authority, such as the increase of
privileges bestowed upon the Vestal Virgins and the
reintroduction of a Flamen Dialis, the ancient priesthood of
Jupiter, a position dormant since 87BC. In 2BC he was
officially granted, by the people and the Senate, the title of
Pater Patriae: the Father of the Fatherland. The increased
religious authority of Augustus combined well -
intentionally so - with his restoration of the respect and
greatness of religious posts, thus garnering greater
obedience and reverence from the Roman people. The cult
of Augustus's guardian spirit - his genius or numen,
worshipped alongside the Lares - became established in
much of the western Roman world, promoted by the
emperor as a further means of securing the loyalty of his
subjects through worship.
In summary, Augustus sought to consolidate the prosperity
and longevity of his rule, and religion proved the ideal
pretext for his political intentions. Despite his genuine
conservative inclinations, religion was still
instrumentalised to induce loyalty and obedience across
the empire - after decades of bloody civil war - and was
therefore as much a tool for social control as it had been
throughout the Republic, albeit in a perhaps more far-
sighted and cunning way.

Conclusion
The pagan religion of Rome reflects a profound divergence
between principle and reality. Marx went on to say that
religion was "the sigh of the oppressed creature"; to an
extent this can be applied to Roman religion: during the
periods of Republican and Augustan rule, the people were
manipulated and controlled, with religion as the most
influential and efficacious pretence. Principles regarding
the religious liberty of the citizen and the importance of
the community became essentially vacuous when religion
became little more than a patrician and imperial
perquisite, a means to an end. The Romans certainly
possessed a practical attitude towards religion as a whole:
absorbing new cults and deities, restoring old cults and
deities and banning cults and deities, largely when such
action was in the interest of those who ruled. Thus, Roman
religion was - to use the word of the title - "essentially" a
mechanism for social control: essentially in that the reality
of the religion, that of exploitation and suppression,
diverged from the principles it supposedly retained.

Max Lewthwaite (Year 12)


_______________________________________________________

35 H H Scullard, From the Gracchi to Nero, p233 (1986)
36 Virgil's Aeneid is a prominent example of where the traits and values
associated with an upstanding Augustan Roman were pronounced through
literature. The epic poem has often been viewed as a piece of Augustan
propaganda in light of this.
37 M Beard, J North and S Price, Religions of Rome, p206 (2013)

La coupe du monde 2014

French
Je suis devenue une folle de football après avoir regardé la
coupe du monde pour la première fois. Je suis une fan
typique : je ne peux pas regarder un match sans montrer
du doigt tel ou tel joueur, ou telle ou telle action, et sans
me mettre à hurler, en restant clouée devant l'écran. Je
suis une passionnée inconditionnelle de football ; si je ne
suis pas au stade, je regarde le match à la télévision.
Le mondial est une des rares choses qui peuvent
rassembler les gens et les pays. Des riches aux pauvres, le
mondial permet un rapprochement des cultures à l'échelle
mondiale. Malheureusement, pendant les années récentes,
la FIFA a dû faire face à des difficultés avec le racisme.
Avec leurs campagnes antiracisme, ils travaillent à
éliminer le racisme. Les difficultés ne devraient pas
détourner l'attention des gens du mondial.
Le Brésil est le pays organisateur de la Coupe du Monde
cette année et, selon moi, le Brésil est un des plus beaux
pays du monde. J'aimerais le visiter plus tard. Alors que le
Brésil s'apprêtait à attirer les fans, il y avait beaucoup de
grèves contre la construction des stades pour le mondial.
Les ouvriers se sont mis en grève pour protester contre
leurs horaires de travail et leurs conditions de travail
inacceptables. Le Brésil fait partie des économies en
développement dont la croissance est parmi les plus
rapides du monde, mais la corruption est monnaie
courante dans le nouveau gouvernement, comme ça l'était
auparavant. C'est vraiment dommage que ce soit le cas,
parce que le mondial devrait être un événement qui réunit
les pays au lieu de les séparer.
J'espère que l'équipe néerlandaise gagnera, parce qu'ils
jouent très bien et montrent de la solidarité.

The 2014 World Cup


English Translation
I became a football fan after watching the World Cup for
the first time. I am a typical fan: I cannot watch a match
without pointing and screaming at the players, all the
while remaining glued to the television screen. I am a
devoted football fan: if I'm not at the stadium, then I'm
watching the match on TV.
The World Cup is one of those rare things that can unite
people and countries. From rich to poor, the World Cup
allows a unification of global cultures. Unfortunately,
during recent years, FIFA has had to deal with difficulties
concerning racism. With their anti-racism campaigns, they
are working to eliminate the problem. These difficulties
should not divert people's attention from the World Cup.
Brazil is the host country of the World Cup this year and,
in my opinion, Brazil is one of the most beautiful countries
in the world. I would like to visit Brazil later on. Whilst
Brazil is preparing itself for the scrutiny of many football
fans, there have been many strikes against the
construction of stadiums for the Cup.
The workers are striking against their work hours, and
their unacceptable working conditions. Brazil is among the
world's fastest-growing economies, yet corruption is as rife
in the new government as it was before. It is a great shame
that this is the case, as the World Cup should be an event
that unites countries instead of separating them.
I hope that the Dutch team wins as they play well and
display solidarity.

Maya Makinde (Year 12)


_______________________________________________________


Rainer Werner Fassbinder


Der Vater des Deutschen
Autorenkinos im Kontext

German
Rainer Werner Fassbinder wird oft als der Vater des
deutschen Autorenkinos gesehen. Nur drei Wochen nach
dem Ende des Zweiten Weltkriegs im Jahr 1945 geboren,
war Fassbinder ein Mensch, der sich voll und ganz seiner
Arbeit widmete; äußerst produktiv in seiner Kunst, bis zu
dem Punkt, ein Workaholic zu werden. Außerdem
verkürzte sein Drogenkonsum sein Leben. Bis zu seinem
vorzeitigen Tod 1982 schaffte er es dennoch, bei 44
Produktionen Regie zu führen. Die meisten seiner Werke
waren Kinofilme, dazu kamen einige Fernsehproduktionen
wie zum Beispiel Berlin Alexanderplatz. Fassbinder schrieb
seine Drehbücher selbst, und in neun seiner Werke sowie
in weiteren zehn Filmen nahm er auch als Schauspieler
teil. Fassbinder demonstrierte seine Vielseitigkeit, indem
er auch als Kameramann und Produzent fungierte.
Die meisten Regisseure würden es schwierig finden, jedes
zweite Jahr einen Film zu produzieren. Aber Fassbinder
hat in jedem Jahr seiner Tätigkeit drei Filme gemacht[2].
Roger Ebert, ein Freund von Fassbinder, erinnert sich: „In
einer Flut von Kreativität, unerhört unter den modernen
Regisseuren, machte er Filme, so wie er Zigaretten
rauchte, eine nach der anderen, keine Pause dazwischen.“
Von seinen Filmen ist es klar, dass die Folgen des Krieges
und der kulturellen Revolution von 1968 seine Arbeit stark
beeinflusst haben. Als kleines Kind hatte er die fast
vollständige Zerstörung und wirtschaftliche Verwüstung
Deutschlands miterleben müssen. Später würde das
Wirtschaftswunder die Bundesrepublik Deutschland zu
einem anhaltenden Wiederaufbau und wirtschaftlichen
Aufstieg führen. Gleichzeitig zum, und bewirkt durch das
Wirtschaftswunder, dominierte eine kulturell konservative
Haltung. Die allgemeine politische Einstellung
Deutschlands war konservativ, und offene Homosexualität
war illegal. Dieser Konservatismus war fast sicher eine
Folge der Kombination der extravagant liberalen
Weimarer Republik (1918-1933) und des intensiv
repressiven Nazi-Regimes (1933-1945). In den 1950er
Jahren wurde über den Holocaust und die NS-Zeit nicht
gesprochen. Die deutschen Eltern dieser Epoche haben sich
auf harte Arbeit und die Verbesserung der Wirtschaft
konzentriert, und ihre von Nazi-Ideologien durchsetzte
Erziehung war unreflektiert. Es gab in der Tat ein großes
Maß an Scham über die Vergangenheit, und es hat ihrer
Bereitschaft zur Kommunikation geschadet. Die Kinder
der Nachkriegsgeneration würden schließlich diese
Vergangenheitsbewältigung erzwingen, von den 1970er
Jahren bis zur Wiedervereinigung. Neben einer Reihe von
blutigen Konflikten (dem Baader-Meinhof-Terrorismus von
1970 bis 1993, zum Beispiel) rebellierten die Jugendlichen
kulturell gegen ihre Eltern. Fassbinder, ein
übergewichtiger, bisexueller Mann, kultivierte bewusst
einen Stil der Kinematographie, um diese repressiven,
altmodischen gesellschaftlichen Normen provokativ in
Frage zu stellen.
Bei genauerer Analyse von Fassbinders Stil geben seine
Filme ein prägnantes Bild von Nachkriegsdeutschland,
durch ironische und fast handlungslose Dekonstruktionen
des Hollywood-Stils mit einem klugen und provokanten
politischen Schliff. Doch sie bleiben auch in der heutigen
Zeit relevant für die menschlichen Beziehungen des
städtischen Lebens. Einige der Filme (besonders jene, die
sich auf eine Gruppe konzentrieren und nicht auf ein
einzelnes Opfer) sind auch mit einem ausgesprochen
dunklen und sardonischen Humor begabt. Fassbinder
erreicht dies in der Art, wie er seine Figuren schildert.
Ebert beschreibt einen Film, in dem „unsichtbare Mauern“
die Figuren trennen: „Sie können einander sehen und
hören, aber die Macht des Schicksals verhindert, dass sie
zueinander finden; sie werden vom Schicksal
choreographiert. Die Kamera isoliert sie - oder gruppiert
sie - so, dass sie in ihrem Raum gefangen sind.“[3] In vielen
Fassbinder-Filmen scheint es, dass die Protagonisten
vergeblich streben.
In Götter der Pest spielt Harry Baer einen frisch
entlassenen Ex-Sträfling, der langsam aber sicher seinen
Weg zurück in die Münchner Unterwelt findet. Während
des Films wird Baer zwischen zwei Frauen als
romantischen Interessen und seinem einzigen Freund (der
früher seinen Bruder erschoss) hin- und hergerissen. Diese
pessimistische Handlung ist eindeutig ein Kommentar zur
romantischen und beruflichen Sinnlosigkeit. Als Leitmotiv
folgen Fassbinder-Figuren „unausgesprochenen Gesetzen“
und sind für immer dazu verdammt, zu bleiben, wer sie
sind. Wenn sie versuchen, sich zu befreien, ist es mit Zorn,
Bitterkeit und großen melodramatischen Gesten.
Dies kann man zum Beispiel bei Lola sehen, wo eine
Tänzerin und Prostituierte versucht, ihre Position durch
die Heirat mit einem korrupten Bauunternehmer zu
verbessern. Nach ihrer Heirat, obwohl jetzt gesellschaftlich
aufgestiegen, stellen Lolas Gewohnheiten als Prostituierte
und ihre eigene Promiskuität sicher, dass sie nie wirklich
ihren alten Beruf verlässt, da sie weiterhin Kunden hinter
dem Rücken ihres Mannes besucht.



Meiner Meinung nach ist der Ursprung dieses Pessimismus
in Fassbinders eigener sexueller Frustration zu suchen und
in seiner Unfähigkeit, eine stabile Beziehung zu führen,
mit einer Reihe von gescheiterten Ehen und Affären.
Fassbinder kombinierte oft sein persönliches und
berufliches Leben und zwang Irm Hermann (seine
Freundin während seiner frühen Karriere), Darstellerin in
seinen Filmen zu werden. Sie spielte eher unscheinbare
Rollen wie die untreue Frau in Händler der vier
Jahreszeiten und die stille, missbrauchte Assistentin in Die
bitteren Tränen der Petra von Kant. Fassbinder war
bekanntlich ein unangenehmer Regisseur, der seine
Besetzung bis auf die Knochen triezte, mit offener
körperlicher Aggression, wenn die Leistung nicht gut
genug war, und es ist deshalb erstaunlich, dass die
Besetzung ihn zu einem solchen Grad vergötterte.
Fassbinders sadomasochistische Tendenzen waren nicht
auf seine Arbeit beschränkt, und er schlug seine Freundin
regelmäßig. 1977 wurde Irm Hermann von einem anderen
Mann schwanger und entschied sich schweren Herzens,
Fassbinder zu verlassen. Es wird berichtet, dass
Fassbinder ihr sofort einen Heiratsantrag machte und
sogar anbot, das Kind zu adoptieren - eine
melodramatische Geste, nicht unähnlich dem Verhalten
seiner Figuren. Hermann weigerte sich, und so endete die
stabilste von Fassbinders Beziehungen. Fassbinders
Reaktion zeigt deutlich, wie autobiografisch seine Figuren
sind.
Fassbinders umstrittener Regiestil und sein Mangel an
zwischenmenschlichen Fähigkeiten, kombiniert mit
provokanten Themen wie Liebe, Eifersucht, Verrat, Scham
und Sadomasochismus, gewährleisteten, dass Fassbinder
immer im Rampenlicht stand. Aber Fassbinder wollte
provozieren, und die öffentliche Diskussion regte ihn an.
Fassbinder war vielleicht nicht der sympathischste
Regisseur, aber er war enorm erfolgreich. Fassbinders
Kunst hat sicherlich dazu beigetragen, die kulturelle
Entwicklung Deutschlands zu einer liberalen Gesellschaft
zu beschleunigen, aber vielleicht wurde unsere moderne
Gesellschaft dadurch eher desensibilisiert als tolerant.
Seine Werke sind weiterhin kontrovers, auch für ein
modernes Publikum. Vielleicht sind wir nicht so tolerant,
wie wir glauben.

Rainer Werner Fassbinder


The Father of German
Filmmaking in Context
English Translation
Rainer Werner Fassbinder is widely seen as the iconic
father of modern German filmmaking. Born three weeks
after the end of the Second World War in 1945, Fassbinder
was a man wholly dedicated to his work: prolific in his art
to the point of becoming a workaholic [1]. After becoming
heavily involved with cocaine, alcohol and barbiturates, it
is not altogether surprising that he lived a very short and
fast life of only 37 years. During this time he directed 44
productions: most of them feature films, with a few
television specials and one 15.5-hour-long TV mini-series
called Berlin Alexanderplatz [2]. These films were nearly all
written and adapted for the screen by Fassbinder himself;
he personally featured in nine of them as an actor, and
appeared in a further ten films of various colleagues.
Fassbinder demonstrated his flexibility in this field by
even moonlighting as cinematographer and producer in a
number of his own films [1] [2].
The average director might be hard pressed to produce a
full feature film once every two years. Fassbinder,
however, produced no fewer than three productions per
year of his working life [2]. As Roger Ebert (a friend of
Fassbinder) describes: "In a flood of creativity unheard of
among modern directors, he made films like he smoked
cigarettes, one after another, no pause in between." [3]
From his films it is clear that the aftermath of the war and
the Cultural Revolution that followed greatly influenced
his work. As a young child he would have grown up
surrounded by the almost complete destruction and
economic devastation of Germany. Later the economic
miracle of West Germany would lead to a lasting period of
rebuilding and rapid industrial expansion. This period of
economic prosperity however, unlike the Golden Twenties
prior to the Second World War, was far more culturally
repressed. The general political outlook of Germany was
conservative, and open homosexuality was illegal. This
conservatism was almost certainly a result of the
combination of the extravagantly liberal Weimar Republic
(1918-1933), followed by the intensely repressive Nazi
regime (1933-1945). In the 1950s, therefore, the elephant
in the room was Nazi Germany and the Holocaust. The
Nazi generation of parents preferred to focus largely on
hard work and improving the economy and neglect
ideologies enforced by their Nazi upbringing. There was
indeed a great level of shame at their shared history, and it
had the effect of smothering their resolve for change. The
post-war generation of children would eventually challenge
this outlook from the 1970s until reunification. As well as a
series of bloody conflicts (the Baader-Meinhof terrorism of
1970-1993, for example), the youth culturally rebelled
against their parents. Fassbinder, an overweight, bisexual
man, deliberately cultivated a style of cinematography to
provocatively question these repressive, old-fashioned
social norms.
Upon closer analysis of Fassbinder's style, his films give an
incisive picture of post-war Germany, through ironic and
nearly plot-less deconstructions of the Hollywood style with
an astute, provocative political edge. Yet they also remain
relevant in contemporary times to urban life and human
relationships. Some of the films (especially the ones
centring on a group rather than a single victim figure) are
endowed with a decidedly dark and sardonic sense of
humour [1]. The way in which Fassbinder achieves this can
be seen in how he depicts his characters. In the context of
Fassbinder's cinematographic development, Ebert
describes a movie set with "invisible walls" separating the
characters: "They can see and hear one another, but some
kind of force of destiny prevents them from connecting;
they are choreographed by fate. The camera isolates them -
or groups them - so that they are trapped in their space." [3]
From watching Fassbinder's films there is a sense that his
characters' actions have a degree of futility.
In Gods of the Plague, Harry Baer plays a newly released
ex-convict who slowly but surely finds his way back into
the Munich criminal underworld. During the film, Baer is
torn between two women as love interests and his only
friend (who earlier shot his brother) [4]. This pessimistic
plot is clearly a commentary on romantic and professional
futility. Fassbinder's characters follow "unstated laws" [3]
and are forever doomed to remain who they are. When
characters try to break free, it is with anger, bitterness and
large melodramatic gestures.
This is well represented in the vibrantly filtered Lola, in
which Lola, a table dancer and prostitute, tries to better
her position by marrying a corrupt construction
entrepreneur. After their marriage, despite now owning
the club as a result of her husband's influence, Lola's
habits as a prostitute and her own promiscuity ensure that
she never truly leaves her old profession, as she continues
to entertain clients behind her new husband's back [4].
My interpretation is that the origin of this pessimism lies
in Fassbinder's own sexual frustration and inability to hold
down a stable relationship. Fassbinder often combined his
personal and professional lives, which resulted in a string
of flings, failed marriages and affairs. He would often force
Irm Hermann (his girlfriend during his early career) to
become an actress cast in his films, casting her in rather
unglamorous roles like the unfaithful wife in The Merchant
of Four Seasons and the silent, abused assistant in The
Bitter Tears of Petra von Kant. Fassbinder was a
notoriously unpleasant director, working his cast to the
bone with open physical aggression if a performance
wasn't good enough, and it is therefore surprising that he
was idolised by his cast to such a degree [2] [3].
Fassbinder's sadomasochistic tendencies were not limited
to his work, and he would regularly beat Irm Hermann. In
1977, Hermann became pregnant by another man and
decided to leave Fassbinder, despite idolising him even
through many years of abuse [5]. Fassbinder is reported to
have immediately proposed to Hermann and even to have
offered to adopt the child, in a melodramatic gesture not
dissimilar from the behaviour of his characters. Hermann
refused, thus ending the most stable of Fassbinder's
relationships. Fassbinder's reaction to Hermann's
pregnancy poignantly depicts how autobiographical his
characters are.



Fassbinder's controversial directing style and lack of
interpersonal skills, combined with provocative themes of
sexuality, love, jealousy, betrayal, shame and
sadomasochism [3], ensured that Fassbinder was always
under a critical media spotlight. The notion that this would
be negative for Fassbinder is false, however, as it helped
Fassbinder's aim of questioning convention. Fassbinder
may not have been a likeable director, but he was hugely
driven and successful. His art certainly helped the cultural
development of Germany into a more liberal society, but
perhaps this was achieved by desensitising our society
rather than making it more tolerant. His works continue to
be seen as controversial even by a modern audience;
maybe we are still not as tolerant as we would like to
think.

Alaric Belmain (Year 12)


_______________________________________________________

Elige un tema de la obra que has estudiado que te parece
importante. Explica cómo desarrolla el autor este tema y
por qué te parece clave.

Spanish
En el libro El coronel no tiene quien le escriba de Gabriel
García Márquez (1927-2014) yo diría que el tema más
importante es la opresión causada por la dictadura de
Rojas Pinilla (1953-1957).
Este tema se muestra durante toda la novela, el mejor
ejemplo siendo en las primeras páginas cuando la esposa
del coronel le dice (hablando de un amigo de su hijo
Agustín) que es "el primer muerto de muerte natural que
tenemos en muchos años". Esto destaca que el gobierno ha
matado a muchas personas. Además, Márquez desarrolla
el tema mencionando que hay un toque de queda. Otra
referencia a la opresión es cuando el médico está leyendo el
periódico y afirma que es "difícil leer entre líneas lo que
permite publicar la censura". La censura es claramente
una manera de oprimir a la gente para que no sepa lo que
ocurre en el mundo. (Si pudiera ver el comportamiento del
gobierno habría siempre la posibilidad de protestas.)



A mi modo de ver, Márquez usa metáforas; una de éstas es
el asma de la esposa - es una enfermedad muy opresiva.
El clima del pueblo es también opresivo - hay humedad,
lluvia y calor sofocante.
Don Sabas es un amigo del gobierno y miente al coronel
sobre el valor del gallo. Lo hace porque quiere comprarlo
por un precio muy bajo. El médico dice al coronel que "el
único animal que se alimenta de carne humana es don
Sabas". Esto enfatiza que a la red de seguidores del
gobierno no le interesa el bienestar del pueblo. Otro
personaje que intenta oprimir a la gente es el Padre Ángel.
Aunque sea una persona en la que el pueblo debería tener
confianza, lo vemos delante del cine apuntando los nombres
de los individuos que entran. Es increíble que haga esto
puesto que es un representante de Dios. Además, cuando
la mujer le pide un préstamo y le ofrece su anillo de
matrimonio, el padre contesta que es un pecado "negociar
con las cosas sagradas". Este incidente subraya que no
quiere ayudar a la gente y enfatiza la opresión.
En resumen, Márquez logra mostrar la opresión en
Colombia y desarrolla el tema muy eficazmente usando
metáforas, descripciones directas de la opresión política y
la actitud de las personas poderosas hacia sus
compatriotas. En una entrevista con la televisión de
Colombia, Márquez dijo que fue "una decisión política"
escribir El coronel. En mi opinión logra muy bien darnos
una impresión muy alarmante de la situación caótica en su
país.

Choose an important theme of the work that you have
studied. Explain how the author develops this theme and
why you think it is key.
English Translation
In the novella El coronel no tiene quien le escriba by
Gabriel Garcia Marquez, I would say that the most
important theme is the oppression caused by the
dictatorship of Rojas Pinilla (1953-57). This theme is
shown throughout the novel, the best example being in the
first pages when the wife of the colonel tells him, speaking
of a friend of their son Agustin, that it's "the first natural
death that we've had in many years". This highlights that
the government
has killed many people. Furthermore, Marquez develops
the theme mentioning there is a curfew. Another reference
to oppression is when the doctor is reading the paper and
notes that it's "difficult to read between the lines which the
censorship allows to publish". The censorship is clearly a
way of oppressing the people in order that they do not
know what is happening in the world. If the public could
see the behaviour of the government there would always be
the possibility of protests.
Marquez uses metaphors; one of these is the asthma of the
wife, it's a very oppressive illness. The climate of the town
is also oppressive - there's humidity, rain and suffocating
heat. Don Sabas is a friend of the government and he lies
to the colonel about the value of the cock. He does it
because he wants to buy it for a very low price. The doctor
tells the colonel that "the only animal that feeds on human
flesh is Don Sabas". This emphasises that the network of
followers of the government doesn't care about the
wellbeing of the public.
Another character that tries to oppress the town is Padre
Angel. Although he should be a person that the town
should trust, we see him in front of the cinema writing
down the names of those that enter. It's incredible that he
does that when he is supposed to be a representative of
God. Furthermore, when the wife asks for a loan and offers
her wedding ring, he says "it's a sin to negotiate with
sacred objects". This incident underlines that fact that he
doesn't want to help the people and it emphasises the
oppression.
To conclude, Marquez successfully illustrates the
oppression in Colombia and he develops the theme very
effectively by using metaphors, direct descriptions of the
political oppression and the attitude of the powerful people
towards their compatriots. In an interview with Colombian
television, Marquez said that "it was a political decision to
write El Coronel". In my opinion he manages to give us
very well an alarming impression of the chaotic situation
in his country.

Chris Leech (Year 13)


_______________________________________________________

Close Analysis of Raymond Carver's Little Things

Raymond Carver's Little Things is a short story focusing on
the breakdown of a relationship, incorporating the themes
of miscommunication, possession and destruction. A motif
of light changing to dark also runs through the story,
reflecting its dark and deteriorating narrative. With the
addition of Carver's trademark minimalist style, dictating
the action through dialogue and using only sparse
description, Little Things is a gripping and disturbing piece
to read, with no distractions from its blunt and hard-hitting
storyline.
The light motif is present throughout, used primarily to
represent the couple's failing relationship, as well as the
oncoming darkness that is about to consume their
household and family life. "The dark on the inside"
foreshadows a rising tension and darkening tone, and in a
literal sense is visually suited to the kind of gritty domestic
drama being played out, creating a claustrophobic
atmosphere that closes in around the action. The couple no
longer have any hope, or light, in their relationship, and
have instead become isolated in their own darkening
relationship. This gathering darkness and tension can also
be seen in the "snow...melting into dirty water" outside,
another environmental representation of the failing
relationship. What was once pure and special has now
dissolved into a commonplace substance that nobody wants,
but physical traces of what once was still exist, similar to
the baby's existence as evidence of the couple's past love for
each other, however brief and broken.
A lack of communication is a recurring theme in many of
Carver's works, like One More Thing, in which the family
can only shout or speak in secluded groups. This theme is
included in Little Things, where the action in the scene is
mainly told through the dialogue, devoid of speech marks,
making the piece seem almost closer to a play - dialogue
driven, and often without authorial voice - consequently
leaving large sections of the story open to interpretation, for
example: "Let go of him he said. Get away, get away! she
cried". In this case the extended use of dialogue almost
provokes misunderstanding from the reader, leaving the
sequence of events and emotions half-unexplained. As the
action is muddled and uncertain, the lack of clear
description enhances the sense that domestic dramas -
especially one as dark and entangled as this - are confusing
and unsure, with no one person taking the blame. The
sparse speech gives the dialogue importance and
physicality, and makes each statement seem more
weighted, like an action or a description of one, with even
simple statements such as "Get out of here!" having
stronger force. This is further accentuated by the space on
the page, with frequent line breaks to space out both the
dialogue and the action. The distance between the
characters is mirrored in the space between their
interactions on the page, evoking a sense of physical
separation between them and accentuating the idea of a
mental barrier.
Objects also make a frequent appearance in Carver's short
stories, often to symbolise a relationship or theme within
the piece - a good example in Little Things is the baby's
picture on the bed which begins the entire argument. The
use of this photograph initiates questions about the history
of the couple, and influences our view of the upcoming
events. The woman is seen to have aggravated the man into
action, as her desperation at the situation has driven her to
provoke him - "she noticed the baby's picture on the bed and
picked it up". Yet this by no means prevents sympathy also
being invoked on her part - in fact she can easily be
interpreted as the more loving character, being more family
orientated and thinking about the child first: "she
uncovered the blanket from around his head",
demonstrating concern for his welfare over the fear
induced by her husband, as well as shifting focus towards
the baby, who is now seen as more than an object. The new
character of the baby adds another layer of tension to the
story, as the child has changed from being a stationary
image in the picture to a vulnerable character; twinned with
the darkening light motif, the tension is raised even further.
Carver represents the breakdown of a relationship in One
More Thing, another short story centring on the breakdown
of a couple's relationship and its effects on their child; it is
easy to draw parallels between One More Thing's jar of
pickles being pitched through the kitchen window and the
flower pot in Little Things, which is knocked down. In both
stories, the destruction signals the shattering of any
remaining hope in the household, and any normality that
came with it. It is also to some extent a catalyst for later
events, a final act of violence and disregard for safety
sending the relationship crashing over the edge. In Little
Things, after the flowerpot is broken, far more obviously
violent words like "tightened" and "screaming" are used to
foreshadow the oncoming wave of violence and the rising
tension in the scene.
The terse sentences with next to no punctuation create a
faster narrative pace to engage with, and it is this
quickened pace that suggests a rising climax to the scene.
The woman, after initially provoking the man into action,
having picked up the baby's picture and then "stared at
him" before leaving, becomes more flustered in her
dialogue: she "cried out", exclaiming "For God's sake!" Her
actions, once bold and daring, have now been undone by the
fear and tension evoked by the quickening narrative; these
feelings will be emulated by the reader, as the story seems
to be reaching a climax. The man's tone remains
monosyllabic and unflinching throughout; his dialogue is
brief and determined: "I want the baby" and "Let go of him".
The persistent nature of the speech creates a dangerous
tone, and the narrative seems more climactic and terse.
The climax in question happens when all the created
tension is suddenly and sharply undone and the man
"[pulls] back very hard" on his own baby. The final line of
"the issue" being "decided" seems almost inappropriate
given the horrible image that preceded it. The reader is left
with no idea of who won the argument, or even what
happened to the baby, all serving to create an ending of
anti-climactic horror and ambiguity. The tragedy of the
preceding events is almost accentuated by their not even
leading to a tangible conclusion.



In brief, Little Things is an effective short story thanks to
its tightly packed content and literary technique. The
themes are well represented, and the surrounding motifs
and style of writing support them effectively.

Emily Macpherson-Smith (Year 12)


_______________________________________________________

What similarities or contrasts can be drawn between the
music of Haydn and Debussy?

Joseph Haydn was a prominent composer of the Classical
period, born on 31st March 1732 in Rohrau, a small
Austrian town. His father had a love of music and played
folk music at home, giving Joseph his first taste of music.
When Haydn was 8 he was accepted as a chorister at St.
Stephen's Cathedral in Vienna, where he was given vocal,
piano and violin lessons. Haydn stopped being a chorister
aged 16 because his voice was changing, and he was
dismissed from the Cathedral school because he cut off the
pigtail of another chorister. In order to earn an income
after leaving the choir, Haydn taught and played the violin
and also studied harmony and counterpoint. The Italian
composer Nicola Porpora, who was also a singing teacher,
employed Haydn as an accompanist for his vocal lessons.
Porpora also corrected Haydn's compositions.
In 1758 Count Morzin hired Haydn as his court musician.
Under Count Morzin, Haydn was in charge of 16 musicians,
for whom he wrote his First Symphony and numerous
divertimenti (secular instrumental works for a soloist or
chamber ensemble) for wind instruments and strings. In
1761 the Esterházy court employed Haydn as Musical
Director, during which time he composed symphonies,
string quartets, operas for the court and other chamber
music. Symphony no. 26 was composed in 1768/69. It is
nicknamed "Lamentatione" as it uses a plainsong melody
associated with the passion narratives sung in Holy Week;
it is not, however, associated with the Lamentations of
Jeremiah, as the name may suggest. The use of the
plainsong melody suggests that the work may have been
written for performance at a church service. It is not a
piece written for a concert hall, as some of Haydn's later
symphonies were.
Claude Debussy was a composer of the Impressionist
period. Born on 22nd August 1862 in France at Saint-
Germain-en-Laye, he was the first child of parents who ran
a china shop. Debussy began music lessons aged 8 and at
the age of 11 went to study piano at the Paris
Conservatoire. In 1884 he won the Prix de Rome (a
competition for composers) with a cantata called L'Enfant
Prodigue. The prize allowed him to study in Rome for two
years, where he studied the music of Richard Wagner.
Wagner had a lasting influence on Debussy, and his songs
Cinq poèmes de Baudelaire and the Fantaisie for piano and
orchestra were influenced by his compositional style.
However, he did not tend to use much of the ostentatious
nature of Wagner's music in his works and later began to
reject Wagner and Romanticism.
In 1889 Debussy went to the Exposition Universelle in
Paris, where he heard a Javanese gamelan, whose exotic
scales and tuning were to influence many of his
compositions later in life. The Sarabande from Pour le
Piano was written in 1894 as part of three Images for
piano. It was published separately in 1896 and then finally,
slightly revised, published in 1901 as part of Pour le Piano.

One key difference between Haydn and Debussy, other
than the period in which they composed, is the fact that
Haydn was working for patrons. Haydn was therefore
composing for a specific purpose and had a specific
instrumentation to score for, because the number of court
musicians was limited. Debussy, on the other hand, was
not composing for a patron and was free to compose for
whatever instruments he wanted and in whatever style he
wanted, illustrating the change in the social stature of
musicians across the eras.
Sonata form was a typical structure of the Classical period,
and Haydn uses it for many of the first movements of his
symphonies. An example of this is Symphony no. 26. It has
three movements, the first of which uses sonata form.
Sonata form consists of three broad sections: the
exposition, development and recapitulation. During the
exposition the composer presents the first and second
subjects, which provide the main themes or ideas of the
movement. In Symphony no. 26 the first subject is in the
tonic key of D minor and the second subject is in the
relative major, F major. It was typical of the Classical
period to have two contrasting tonalities for the different
subjects. In the development section the composer takes
the material from the exposition (in the case of Symphony
no. 26, solely the first subject) and transforms it by going
through a number of different keys, usually with a fast
rate of harmonic change. The recapitulation is similar to
the exposition: both subjects return, but they are now both
in the tonic key. In the case of Symphony no. 26, the first
and second subjects return in the tonic major, as it was
unusual at this time to end the first movement in a minor
key.



The rigid structure of sonata form contrasts with the freer
structures used by Debussy. In the Sarabande, Debussy
uses a version of binary form with two main sections.
Debussy takes as his stimulus the popular Baroque form of
the sarabande, which was usually in binary form. However,
Debussy is freer with his structure, as the two sections are
of very unequal length, in contrast to a Baroque sarabande,
where the two sections are of similar length. This is a point
of difference between the two composers because, in these
examples, Haydn follows the strict structure of sonata form
whereas Debussy takes a much freer interpretation of
binary form and changes it as he wants, showing how later
composers sought greater freedom in their compositional
style.
A point of structural similarity between the two pieces is
the regularity of phrases. Throughout the first movement
of the Haydn, all the phrases are 8, 12 or 16 bars long and
are all well balanced. This was a typical feature of the
Classical period, with its idea of question-and-answer
phrases. This compares with much of the Debussy, where
regular 2- and 4-bar phrases are used. This exemplifies
further how Debussy takes influence both from Baroque
sarabandes and from a broader musical tradition which
also featured regular phrases.
However, although Debussy uses many regular phrases, as
the piece develops the complexity of the phrases increases.
He uses cross-phrasing, in which phrases overlap the bar
lines, to generate excitement as the piece builds towards a
climax. This shows how Debussy breaks away from the
rigidity of phrase lengths in Baroque sarabandes and takes
a freer approach to a more traditional style.



In terms of harmony the two styles contrast greatly.
During the Classical period all composers used functional
harmony: the use of chords and root progressions that
establish a sense of key. Haydn exemplifies this in the first
movement of Symphony no. 26 with regular perfect
cadences and imperfect cadences at the end of phrases.
This firmly defines the tonality and is a key feature of any
music using functional harmony. Haydn uses triads rather
than any other chords, and they are mainly in root position
- another feature of functional harmony. Pedal notes also
exemplify functional harmony: these are long held notes
around which the melody and other parts move, and Haydn
uses tonic pedals and dominant preparation to firmly
establish the key and lead into new sections.
Debussy contrasts starkly with Haydn in this respect.
Debussy writes using non-functional harmony, using
chords for the way they sound rather than for a functional
purpose. The Sarabande is in C# minor, yet its first chord
is based around F# minor. Debussy also rarely uses
cadences to establish the key, and any cadences he does
use are generally unconventional, such as 3-1. Another
point of contrast with Haydn is that Debussy uses chords
which are not triads, such as quartal chords, in which a
series of stacked fourths are played simultaneously.
Quartal chords create a very open sound compared to
triads, owing to the interval of a perfect fourth. Debussy
also uses extended chords, which are triads with, for
example, the sixth, seventh or ninth degree of the scale
added. This creates rich sonorities which blend seamlessly
from one to another in parallel motion.
All of Haydn's music is tonal and based on major and
minor scales and keys. In the case of the first movement of
Symphony no. 26 the tonic key is D minor. This contrasts
with the music of Debussy. In some respects it is tonal, but
there are many modal influences, with the Sarabande
being based on the Aeolian mode transposed to C#.
Debussy also makes use of whole-tone scales. This is very
different to Haydn, who solely uses major and minor
scales. Modes are precursors to scales which predate the
tonal system of 24 major and minor keys and were used a
great deal in Renaissance music. This shows further how
Debussy takes influence from earlier styles of music and
remoulds it to suit his freer harmonic vocabulary.

In Classical music, the harmony is almost entirely diatonic,
with composers mainly using notes from the keys in use.
Any accidentals in the first movement of Symphony no. 26
are part of the key - for example C#, which is part of D
melodic minor and serves a clear harmonic function. This
contrasts with Debussy, who uses very chromatic harmony
to create a tonally unstable feel which allows him to slip
between a number of different, unrelated keys. There are
many accidentals, especially in the extended chords.
Debussy has some occasional diatonic moments in the
Sarabande, which is similar to Baroque sarabandes, but it
is predominantly chromatic.
An additional point of similarity is the repetition of melodic
material. The nature of sonata form means that the first
and second subjects are repeated in the recapitulation.
Also, the development section is built around the first
subject, and therefore more repetition takes place. Debussy
also repeats the main melodic themes, although with some
slight variation, such as lowering the pitch by an octave
and reharmonising the melody, using repetition to make
the listener familiar with the key melodic ideas.


In conclusion, there are similarities between the styles of Haydn and Debussy, such as the regularity of phrases and the repetition of melodic material, which link these two pieces and styles in the continuous development of music. However, there are many more differences, especially in terms of harmony, which demonstrate how greatly music changed and developed between the two styles. Haydn uses functional, tonal harmony with triads mainly in root position and resolved dissonances, whereas Debussy uses non-functional, modal-influenced harmony with extended chords, much chromaticism and unresolved dissonance. These differences could stem from the different circumstances in which the pieces were composed; Haydn was composing for the Esterházys, whereas Debussy was not composing for a patron and so had more freedom.

The differences are mainly due to the differing Classical and Impressionist styles, illustrating the rich variety of musical styles which have developed and which remain important to our understanding of music today.

Lucy Morrell (Year 12)


_______________________________________________________

'Enduring Love and Closer are traditional narratives with nothing new to offer.' How does your reading agree with this statement?

Ian McEwan's novel Enduring Love and Patrick Marber's play Closer are both post-modern texts that re-invent traditional narrative content and form. Closer pushes the boundaries of stagecraft, launching characters months into the future, while Enduring Love second-guesses the reader, genre-hopping between romance, thriller, and even literary biography. But while both texts are quietly subversive, they also build on and draw from well-established narrative forms. Unlike James Joyce, for example, McEwan doesn't invent his own language, and unlike Samuel Beckett, Marber doesn't set his play outside the real world. Both writers' retellings of the traditional love story are informed by the values of the 1990s, particularly in regard to shifting views on gender roles and an engagement with fashionable scientific thinking.
The structure of Enduring Love is far from traditional. McEwan plays with time and expectation - the beginning is not so simple to mark. Joe's story is not told in a linear fashion. His tale only really gets going a few pages into the novel, via a detour into Covent Garden, and McEwan withholds the conclusion of the novel's dramatic balloon accident until well into the second chapter. In Closer we see Marber separate himself from the traditional timeline of a play (the events taking place over the course of a few days) in favour of short scenes, with months elapsing between them. This original structure was noted by John Simon, a New York Times critic, who at the time of the play's Broadway debut remarked: 'Marber tells his story in short, staccato scenes in which the unsaid talks as loudly as the said.' It is certainly true that Marber's vignettes from the couples' relationships are brief and brash, reflecting the busy nature of 1990s lifestyles. What remains unsaid, however, are disclosures that would normally be considered significant - Larry and Anna's marriage, for example, and Dan and Alice's co-habitation. These events are deliberately not loudly expressed. Marber wants us to acknowledge the worst in relationships, and so omits the markers of traditional love stories.
Manipulation of time surfaces again in Enduring Love, as McEwan deconstructs the traditional intimacy of a first-person narrative. When Joe admits that picturing Jed is odd, 'knowing what I know now', hindsight intrudes, disconcerting the reader. This narrative of self-awareness is a fashionable mode of storytelling, and we are frustrated by Joe's reluctance to divulge more information about Parry. Alice similarly withholds her identity from us, leaving it to Larry to tell the audience that 'She made herself up.' Of course, these texts are not wholly ground-breaking; they owe a great deal to traditional narratives. Wrong-footing the reader with the unknown is a classic technique, heavily used in Victorian literature. For example, at the very beginning of Emily Brontë's Wuthering Heights, Catherine appears at the window, leaving the reader guessing as to her identity and motivation. Jed's initial appearance is equally mysterious. But it is the coupling of these unanswered questions with a post-modern sense of self-awareness that makes us truly uneasy. Not only does the author know something we don't; so does our narrator.
The style and form of Enduring Love and Closer are, for the most part, relatively conventional. Like most fiction and drama, change happens when relationships are put under pressure. McEwan lights a fire under Joe and Clarissa's relationship and watches how they cope - following Kurt Vonnegut's dictum: 'Be a sadist in order that the reader may see what [your characters] are made of.' McEwan's sadism comes in the form of Jed Parry - a psychopathic stalker. This is not the first time Enduring Love is caught making a nod to Samuel Richardson's Clarissa, this time echoing Robert Lovelace's obsessive behaviour. McEwan presents Joe and Clarissa's inevitable fight with impressive realism in Chapter Nine. Clarissa is composed to begin with, asking Joe whether it's possible he's 'making too much of this man Parry?' Polite language and rhetorical questions give the argument a starting point - it is only natural for this couple to begin with reasoned debate; they rarely ever argue. The argument escalates - Joe telling Clarissa that her only concern is that he's not massaging her 'damned feet' - and essentially shows us argument as sport. The couple say anything to score points. Joe's interruptive third-person narration here alerts the reader to further nuances of the argument - the competitive nature of the confrontation. The novel adheres to traditional forms, but at the same time distorts our picture of them.



The same can be said for Closer. Events in Larry and Anna's lives inevitably lead to a break-up, but the way the argument plays out in Act One, Scene 6 is wholly unexpected. Larry confesses to sleeping 'With someone in New York. / A whore.' The audience is struck by his honesty. This is hard for Larry to say, and the actor is compelled to deliver the speech hesitantly, as the lines are spaced out down the page. Marber takes this moment of frankness, however, and makes it the end point of the relationship, rather than the beginning. Within a minute of stage time Larry shifts from a guilty confession to shouting the most aggressive line in Marber's unrelentingly aggressive play: 'Now fuck off and die. You fucked up slag.' 'Marber is inclined to overstatement', argues the literary critic Lucy Atkins, but it could be argued that this unsubtle presentation is not a failing of the play but the play's aim. Marber's script, unlike a traditional romance, does not strive for verisimilitude, but instead parodies lovers, reflecting them in McEwan's dark distorting mirror. Life for the characters in Closer does not follow the traditional route, nor does it in Enduring Love.
The novel frequently mimics different genres and academic-speak. To conclude his novel, McEwan composes a pastiche of an article from The British Review of Psychiatry, fooling us into thinking the tale is inspired by real events. This feels like a suitable coda to a work in which fact and fiction become one and the same. But even this seemingly original flourish has its literary antecedents, most memorably in the work of Vladimir Nabokov.
In contrast to the texts' supposed structural modernity, the characters in Enduring Love and Closer are fixated on tradition and the past. Marber brings to the fore our tendency to remember the dead and forget the living - a statement we see reflected throughout the play. In Scene 1, Dan remembers his dead mother in Postman's Park, a memorial garden to those who have died; Larry reads out the good deeds of Alice Ayres, 'daughter of a bricklayer's labourer', in an unprecedented level of detail; and in Scene 11 Dan and Alice try to salvage their relationship by harking back to their first meeting. Enduring Love's Joe Rose also dwells on the past, taking time out of the narrative to inform the reader of historic scientific discoveries. He also considers the dead. On returning from the crash, he muses that many crises and deaths must have already been considered 'around this table'. After seeing the accident he comforts himself with the idea that others have been through worse.
Yet both McEwan and Marber agree that there comes a time to wake up. The turning point for Joe occurs during the novel's violent restaurant scene. While the couples discuss the discovery of DNA, the present shockingly intrudes. Joe notices the two men and the girl, but these asides are background noise to the discussion of the Swiss chemist who identified DNA. Then a gunshot cuts across the conversation - Trapp is hit by a 'high velocity impact' across Joe's tablecloth... desert... hands... sight. The events described, and the pace built by this listing technique, jolt both the reader and Joe out of the comfort of biography and storytelling; from here Joe decides to take action and begins to grapple with the problem of Jed head-on.
Alice's death has a similar effect on Dan and the audience in the final scene of Closer, Scene 12, where Dan is forced to come to terms with what has happened. In Dan's final monologue he rambles - Marber spaces the speech out as if each sentence is a separate thought, forcing the actor to deliver it in a detached manner. Dan interrupts his account to ask extraneous questions - 'I covered my face - why do we do that?' - and makes unclear connections between Alice's death and 'the man from the Treasury'. But, as the monologue finishes, we see Dan get in touch with a real world that has been strangely absent from the rest of the play, much as McEwan shows us the real world intruding on the intellectual discussions in the restaurant. Dan '[bumps] into Ruth' and sees life continuing without Alice - his final line is an imperative: 'I have to go, I'll miss the plane.' Marber and McEwan agree that in the hectic 1990s it is tempting to find refuge in the past, but also agree we need to take control of the present. However, Alice has to die and Trapp has to be seriously injured before Dan and Joe decide to take control - perhaps they are too late.
The characters' use of language can also be seen as harking back to the past. Larry makes allusions to the fairy-tale genre in Scene 5, joking that a princess can kiss a frog, moving on to yet more archaic turns of phrase. He talks about 'Paradise', its capitalisation making the usually abstract concept a concrete place, in the manner of the Metaphysical Poets. The 'Paradise Suite' in Scene 7 tears down the romantic, traditional notion of a one-stop Paradise. 'There are six,' states Alice bluntly. There is no such thing as the personal paradise that Larry longs for. After the accident in Enduring Love, Joe adopts the language of traditional religious imagery. The car journey is an exorcism of terror, and the balloon accident, in Clarissa's eyes, is best summed up in a Milton quote: 'Hurl'd headlong flaming from th'Ethereal Sky'. Unable to express himself in rational and scientific terms, Joe turns to fiction and myth. By this means McEwan suggests the 1990s' scientific culture cannot articulate emotion as effectively as traditional images and tales. This is not to say that McEwan's fiction is not permeated by the new ideas of the 1990s. As the men gripping the balloon enact 'morality's ancient, irresolvable dilemma: us, or me', contemporary readers are reminded of current work in evolutionary biology. But even here, critic James Wood argues in his essay 'Containment: Trauma and Manipulation in Ian McEwan', McEwan places his characters in a Rousseauian natural society. McEwan is unaware, he argues, that Rousseau hovers behind him, but Rousseau is certainly there.


James Wood's eagerness to connect McEwan with a French philosopher that the author may or may not have been referencing raises an interesting question: can Marber and McEwan ever offer something wholly new? They are both well-read, literary writers, and as such can't help being influenced by their cultural hinterland. McEwan and Marber's innovation, it seems, is to acknowledge the tradition hovering closely behind them. By manipulating previous literary tropes, both in regard to characters and in form, the authors, on one level, can shock and disturb us. We cannot anticipate where the tales will go next. However, on another level, we see the struggle of the characters, but also of the works themselves, to endure the shadow of the past. The modernists distanced themselves from tradition, but these two post-modern authors welcome history's vast literary heritage. The question no longer seems to be how to free yourself from tradition, but how to use it to your advantage.

Jack Bradfield (Year 12)


_______________________________________________________


Maths & Science


Mathematical Chaos
in a Nutshell

There's an old argument that claims that everything is just politics, and politics is simply applied sociology, which, in turn, is just applied psychology, biology, chemistry, physics and, ultimately, applied mathematics. By this logic anything in existence can be described mathematically, and whilst this may not quite be true, almost any physical process can be described by a series of simple mathematical laws. We can use maths to calculate the flow of air, the change in humidity or the velocity of a raindrop. With mathematics we can discover how clouds are formed, how water evaporates into the air and how clouds can become snow, rain or hail. And yet for some reason the weather forecast is always wrong.
Mathematicians named this reason.
They called it Chaos.
As the saying goes, 'the Devil is in the details', and the details are the origins of chaos. Edward Lorenz (1917-2008) was a meteorologist, a mathematician and one of the fathers of chaos. At work at MIT, Lorenz built a simple model of the atmosphere, controlled by simple mathematical rules. There were no clouds in this model, only wind, gradually shifting from North to East to West and back, displayed as a list of numbers on a print-out from the machine. The simulation proved as unpredictable as the weather itself, but nothing seemed awry until Lorenz decided to re-run a section of the simulation. He thought that it would be simple enough, plugging the readouts from the first iteration into the machine and running the program. However, something strange occurred: the wind took a different path to its original route. Lorenz was baffled as to how this had happened; the rules for the model hadn't changed, so the error must have been in the data he inputted. Upon closer inspection Lorenz realized his mistake: the data on the print-out was rounded to 3 decimal places, whilst the computer's simulation was accurate to a much more precise degree. A seemingly minute variation in the original data produced such a large impact on the final result that the two sets of final data were virtually incomparable.
Intrigued, Lorenz looked more closely at the two results and plotted a graph of one of the many variables in his model. The result showed how the two sets of data started almost on top of each other, then, after the first peak, began to diverge, and soon after were indistinguishable from each other. Lorenz was already aware that at certain points of equilibrium in a system just a small push could result in a drastic change of outcomes, but following this result, as well as his later experiments, Lorenz realized that any point in a system could be such an equilibrium point and create just as drastic outcomes. The details were the seed from which a great multitude of chaos could sprout.

[Graph: results from the two iterations of Lorenz's weather model plotted together on the same graph]
Lorenz saw this chaos pattern occur across a variety of models based simply on mathematical rules. Many of these models showed aperiodic properties, feigning a regular pattern without ever actually reaching it. Lorenz realized that in a truly chaotic system matching circumstances, where every variable was identical, could never happen, because if they did the system would simply follow the same route again and a form of periodicity would be found. Lorenz then began to test how simple a mathematical model could be and still produce chaotic properties. To do so he took a simple model of convection flows in a cup of coffee heated from below, showing how the hot liquid will rise to the top of the container before losing its thermal energy and falling to the bottom again, heating up and rising again in a circular motion - at least in theory. Above a certain temperature this simple circular flow could speed up, slow down or even change direction depending on variables such as how much heat is lost and how much was present originally. In short, it behaved chaotically.
Taking this model, Lorenz stripped off any equations that he deemed unnecessary, ultimately ending up with three non-linear equations for the rotational speed of convection (these also apply to Lorenz's water wheel, described below):

dx/dt = σ(y − x), dy/dt = x(ρ − z) − y, dz/dt = xy − βz

Lorenz decided that these three equations, with three variables x, y and z, would be enough to test on. A more easily imaginable system that this model also applies to is a Lorenz water wheel (see diagram). This construction consists of a series of buckets with holes in them attached to a wheel, with water flowing at a constant rate into the top of the bucket-wheel. Here the flow of water into the buckets takes the place of the element heating the coffee from below, the buckets themselves represent the particles in the coffee taking on heat, and the holes provide a means for these particles to lose heat to their surroundings.
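The behaviour is easy to reproduce. Below is a minimal Python sketch (not from the original article) that integrates the three equations above with a deliberately crude Euler scheme, using Lorenz's classic parameter values, and restarts a second run from values rounded to 3 decimal places, mirroring the print-out incident described earlier:

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one (crude) Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

# 'Full precision' run versus a restart from 3-decimal-place readouts.
a = (1.0001234, 1.0, 1.0)
b = tuple(round(v, 3) for v in a)  # the rounded 'print-out' values

for step in range(3001):
    if step % 500 == 0:
        print(f"step {step:4d}: difference in x = {abs(a[0] - b[0]):.6f}")
    a = lorenz_step(*a)
    b = lorenz_step(*b)

The two runs track each other at first, then drift apart and end up completely uncorrelated - exactly the divergence Lorenz plotted.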

[Diagram: the Lorenz water wheel at three levels of water flow]
When the water flow into the buckets is low, the behaviour
of the system is simple: a bucket fills with water at the top,
and gravity causes the wheel to rotate under the weight of
the water, and the bucket loses all of its water before it
reaches the bottom of the wheel. The same then repeats for
each subsequent bucket, and the wheel rotates at a steady
pace as each bucket fills and empties. This is the left hand
wheel in the diagram.
However, if the flow of water into the buckets increases, then there is not enough time for the bucket to fully empty before reaching the bottom, so on the second rotation of the wheel the bucket will fill beyond the levels of the first rotation. This will cause an increase in the speed of rotation of the wheel and will, in turn, mean that even less time is available for the water to be lost from the bucket; but, equally, there will also be less time for the bucket at the top of the wheel to fill. Before long the system will appear to behave chaotically, seeming to speed up and slow down at random and even switching direction at some points. This unpredictable behaviour is Chaos in action.

Lorenz used the equations for this system to plot a three-dimensional graph of the convection current in the system. Oddly, the curve traced by the data appeared to be attracted to two points, although it never intersected itself (why this cannot happen is the same reason why a chaotic system can't have identical variables twice).

[Diagram: a Lorenz attractor for the water wheel and convection currents]

Lorenz realized that these two points represented two important states of the system; in the case of the water wheel, one showed the wheel rotating clockwise, whilst the other showed it rotating anticlockwise. This pattern soon became known as the strange, or Lorenz, attractor and became one of the most famous images of chaos, known for its butterfly-like appearance.
This brings us neatly to one of Lorenz's other discoveries (or, as some would argue, inventions): the Butterfly Effect. Possibly the most well-known of chaotic examples, the Butterfly Effect explains how, through chaos, the flap of a butterfly's wing on one side of the Atlantic Ocean could cause a hurricane on the other. This is the reason why the weather forecast is always wrong: even if data-recording technology doubles every day, human beings will never be able to record every butterfly's flap, every exhale from a fly, every minuscule fluctuation of atoms in the air; there will always be a detail missed from which chaos can sprout and take shape. In the space between readings chaos can occur and make all previous predictions obsolete.
Chaos flows through almost everything, and will always, in the end, have unforeseen consequences. Chaos is found in every form of science and beyond, resulting in long-term predictions always being slightly inaccurate. And, if you believe the argument that politics is maths, it explains why politicians always promise that they are going to do something, and never seem to act as predicted.

Alastair Haig (Year 12)


_______________________________________________________


Is Human Intelligence
a product of Genes or
the Environment?

Introduction
The question I have set out to answer is a classic example of the nature versus nurture argument, where scientists have tried to classify a particular characteristic, in this case intelligence, as a result of genes or the environment. The nature versus nurture debate was started in 1869 by Francis Galton, who published the book Hereditary Genius. In this, he claims that talent runs in families and is inherited. He explained the pedigrees of famous judges, statesmen, peers, commanders, scientists, poets, musicians, painters, divines, oarsmen and wrestlers, and concluded that there are a large number of instances of men who are more or less illustrious and have eminent kinsfolk.
However, Galton's studies were very much anecdotal, and failed to point out that over half of his geniuses appeared from families with no history of exceptional talent. His work was criticised because he had ignored the contribution of upbringing and the environment in determining intelligence. Nevertheless, this sparked the nature versus nurture debate, and in the rest of this article I will aim to address whether human intelligence can be attributed to genes, the environment, or a combination of both.

Investigating the Heritability of Intelligence
Twin studies are an important tool for behavioural geneticists trying to work out the heritability of a certain phenotype (the observable characteristics of an individual as a result of their genes and environment), in this case intelligence, in humans. There are two types of twins: identical, or monozygotic, which develop from one zygote that splits to form two embryos, and non-identical, or fraternal, which develop from two separate zygotes as a result of the fertilisation of two eggs. Monozygotic twins share the same genotype, so
theoretically, if reared apart and brought up in different families, then any phenotypic similarities must be heavily influenced by genes, as this is the constant between the two individuals.
In 1966, Cyril Burt used this principle to calculate the heritability of intelligence. He used a large sample of 53 pairs of identical twins that had different nurturing experiences, and concluded that IQ is highly heritable. Alas, the problem with this study was that, upon review by other scientists, the results were found to be almost certainly faked, as correlations remained the same to 3 decimal places!

What is Heritability?
Heritability measures the fraction of phenotypic variability that can be attributed to genetic variation, and is widely used in quantitative behavioural genetics. The key word in this definition is variability. Heritability is a population average, so it cannot be applied on an individual basis. When someone says human height is 80% heritable, they do not mean that 80% of an individual's centimetres come from genes and 20% come from the environment. Instead, the variation in height in a particular sample is attributed 80% to genes and 20% to the environment.
Heritability can also only measure variation, not absolute values. For example, most people are born with ten fingers; those with fewer have usually lost fingers in accidents; therefore the heritability of finger number is almost zero. Yet it would be ridiculous to say that we have ten fingers due to the environment. We grow ten fingers because we are genetically programmed to do so; however, the variation in finger number is down to the environment.
Applying this concept to intelligence, it is clear that intelligence cannot be caused by genes alone. Environmental factors such as food, parental care, teaching and books are equally necessary to make someone intelligent. However, the interesting point here is that in a population where everyone has access to the same resources, variation in intelligence can be attributed to genes. This is because the individuals share the same environment, so any variation must be genetic. Therefore, in a true meritocracy, heritability will be high, and genes become more important in determining which individuals are successful. The more equal we make society, the more genetic discrimination arises. After all, the world is full of scarcity - scarcity of jobs, scarcity of resources - so there will always be unsuccessful individuals, and the more access people have to the same environment, the more genes will determine who is successful in society, as only the most intelligent will progress.
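One common way to put numbers on heritability, worth sketching here (it is not described in the original article), is Falconer's formula from twin studies: since identical twins share roughly all their genes and fraternal twins roughly half, heritability can be estimated as twice the gap between the two groups' correlations. A small Python illustration, using the grey-matter correlations quoted later in this article purely as example inputs:

def falconer_h2(r_mz, r_dz):
    """Falconer's estimate of heritability: h^2 = 2 * (r_MZ - r_DZ)."""
    return 2 * (r_mz - r_dz)

def shared_environment(r_mz, r_dz):
    """Shared-environment estimate: c^2 = r_MZ - h^2."""
    return r_mz - falconer_h2(r_mz, r_dz)

r_mz, r_dz = 0.95, 0.50  # identical vs fraternal twin correlations
print(f"heritability h^2 = {falconer_h2(r_mz, r_dz):.2f}")               # 0.90
print(f"shared environment c^2 = {shared_environment(r_mz, r_dz):.2f}")  # 0.05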

Defining Intelligence
At this point, it is important to define what we mean by intelligence. Most so-called measures of intelligence correlate with each other; for example, people good at general knowledge are usually also good at abstract reasoning and number tasks. Around about the same time as Galton, the statistician Charles Spearman, also famous as the inventor of Spearman's rank, dubbed this common factor g as a measure of intelligence.
So are there any specific genes for g? Early attempts to find genes for intelligence failed, with only one gene, called IGF2R, found on chromosome 6, showing any significant correlation with IQ results, albeit a weak one. However, one factor that does correlate quite highly with IQ is brain size; brain volume and IQ have a 40% correlation. In 2001, when brain-scanning technology had been improved greatly by a series of advances, two separate studies in Holland and Finland found a high correlation between grey matter in the brain and g. Furthermore, identical twins had a correlation for grey matter volume of 95%, whereas fraternal twins only had a 50% correlation. A gene known as ASPM codes for proteins that determine the number of neurons in the brain, which in turn determines brain size. Although this doesn't actually bring us closer to finding actual genes for g, it does show that there is a significant genetic element in intelligence.

Modern Twin Studies
Modern twin studies are far more statistically based, with quantitative data collected and less anecdotal evidence used to support conclusions. They reveal that intelligence, despite the strong genetic influence discussed above, also receives a strong influence from family. In fact, these studies found that IQ was 50% genetic, 25% influenced by shared environment and 25% influenced by experiences unique to the individual.
However, these statistics seem to vary greatly depending
on two major factors. The first is socio-economic status. The
scientist Eric Turkheimer found that in a sample of 350
identical twin pairs, many of whom had been raised in
extreme poverty, there was a large difference in IQ
between the richest and the poorest. Among the poor
families, almost all variability in intelligence was down to
the environment (so the children who had enough money to
go to school had a higher intelligence). Meanwhile,
amongst the rich, the opposite was true; almost all
variability in intelligence was down to genes. What this says, essentially, is that living on £5,000 per year can severely affect intelligence for the worse, but living on £50,000 or £500,000 makes little difference.
This has political implications, as governments can use these results to shape policies. Raising the safety net of the poorest does more to equalise opportunity than reducing inequality in the middle classes. Socio-economic status is a crude reminder that no matter what twin studies reveal about the strong genetic influence on intelligence, the environment still matters.

ISSUE 2, SEPTEMBER 2014


The second factor that affects how much genes or the environment affect intelligence is age. The older we are, the more genes, rather than family background and environment, seem to predict IQ. In Western society, the contribution of shared environment to intelligence in people aged under 20 is 40%; this figure falls rapidly to zero in older age groups. This seems fairly reasonable, since children tend to live with their parents and are often dependent on them; as we grow up, we become more independent and move away from our parents, so understandably they have less of an influence on our intelligence. However, what is perhaps more surprising is that the contribution of genes to IQ rises from 20% in infancy to 40% in childhood, to 60% in adults and 80% in people of middle age! By adulthood, intelligence is mainly inherited and partly influenced by experiences unique to the individual, but the shared environment has almost no influence.

Nature via Nurture
Coming back to the original question - is intelligence a product of genes or the environment? - it is now clear that it is both. It is not nature versus nurture, but rather nature via nurture. Having a certain set of genes predisposes someone to experience a certain environment. For example, having 'sporty' genes encourages young children to practise and train more at the sports they are good at, so they develop into sportsmen and sportswomen. Similarly, genes do not cause someone to be intelligent, but rather they make someone more likely to enjoy learning; and because they enjoy learning, they spend more time doing it, and thus become cleverer. The environment acts as a multiplier for small genetic differences, pushing sporty children towards the sports that reward them, and bright children towards the books that reward them. Nature can only act via nurture.
In conclusion, intelligence cannot be attributed to genes or
the environment; this is a false dichotomy. Rather, both
the environment and genes have influence over
intelligence. Variation in intelligence can be more genetic
or more environmental depending on socio-economic
circumstances and age. The most important point to note is
that it is a combination of appetite and aptitude that make
people intelligent. Genes act through the environment;
nature acts via nurture.

Abhishek Patel (Year 12)


_______________________________________________________


Should the Milwaukee protocol be used as a treatment for Rabies?

Rabies: 'an acute viral infection of the nervous system of warm-blooded animals that is fatal when untreated and is usually transmitted in infected saliva through the bite of a rabid animal.' This definition summarises the main aspects of rabies and also introduces us to the main concern surrounding the disease, namely that it is fatal when untreated.
Nowadays, rabies is an entirely preventable disease. Thanks to the 100% effective human vaccine and immunoglobulin, as well as measures to control the spread of the disease, rabies now accounts for fewer than 10 deaths a year in developed countries. However, the current treatment of the disease still has some severe limitations. Many at-risk patients in developing countries don't receive the appropriate treatment due to the high cost of the vaccine and immunoglobulin. Furthermore, the difficulty of diagnosing rabies early, before the virus reaches the central nervous system and symptoms appear, means that the vaccine often isn't administered within 30 days of exposure, the time frame during which our immune system can still mount an effective response. If the vaccine is administered too late then the disease becomes 100% fatal, and the doctor can only offer support and care for the patient as they begin to suffer from hallucinations, aerophobia and hydrophobia, before slipping into a coma, followed by inevitable death.
Yet what if there was a treatment that could save the patient? Would you try it, no matter how unlikely the chance of success or how unethical the procedure? With these questions in mind I introduce you to the Milwaukee protocol, a controversial procedure, developed by Doctor Willoughby in an attempt to save the life of 15-year-old Jeanna Giese, that involves putting the patient into a drug-induced coma.



The theory behind this procedure is that the rabies virus kills nerve cells by affecting neurotransmission, causing the nerve cells to use more energy than is supplied. Therefore, by inducing a coma, effectively stopping neurotransmission in the brain, Doctor Willoughby reasoned that the immune system would have more time to clear the virus from the brain before extensive brain damage had occurred. Dr Willoughby also carefully selected the drugs that would be administered during the procedure, choosing drugs known to have an antiviral effect, such as ketamine, amantadine and ribavirin, in order to aid the immune response. With the consent of the other doctors and Giese's parents, Dr Willoughby began the treatment, and after seven days of monitoring he began to notice an increase in the levels of rabies antibodies in Giese's blood serum and cerebrospinal fluid, encouraging signs that Giese's immune response was working to clear the virus. In the end, the treatment was a success. Not only did Giese survive rabies, but the doctors were also able to bring her out of the coma, and after a few months of intensive rehabilitation and physiotherapy only a slight slurring in her speech remained. In light of this success, the Milwaukee protocol was further trialled on 25 patients suffering from rabies. Out of these only 3 patients survived, and further clinical trials were discontinued. However, there are no guidelines stating that the Milwaukee protocol shouldn't be used to treat patients, and some doctors still use the procedure as a last resort.
So, why should the Milwaukee protocol be used to treat rabies? Is there any evidence that the treatment is effective?
1. Rabies is 100% fatal once the virus has reached the central nervous system, and since there is no other treatment at this stage of the disease (and there have been a few survivors treated with the Milwaukee protocol), you would be acting in the best interest of the patient (provided they or their family members had consented to the treatment) by trying every possible means to save them.
2. There is some evidence that Dr Willoughby's theory may in part be correct. The original theory that the rabies virus initiates brain cell death (apoptosis) was discounted after post-mortem examinations of the brains of rabies victims showed limited lesions of the brain and no tissue necrosis. Meanwhile, investigations into whether the virus affects neuronal function have shown that it does affect the release of neurotransmitters such as serotonin. Therefore, if the reasoning behind this treatment is correct, then inducing a coma may improve the patient's chances of surviving.
3. The drugs ketamine and amantadine have been shown to have a beneficial effect. Rabies patients have higher levels of quinolinic acid (a compound that excites the NMDA glutamate receptor); ketamine blocks the effect of the quinolinic acid while amantadine protects the receptor.


Yet, taking the benefits into account, why shouldn't the Milwaukee protocol be used to treat rabies?
1. There is a low success rate associated with the treatment, and it is generally ineffective. Out of the 25 patients that received the treatment in the initial trial, only 3 survived, 2 of whom later died after being discharged. Furthermore, those who survived the treatment had been vaccinated, although the vaccine was administered incompletely or late.
2. This treatment could cause harm to the patient even if they survive rabies. For example, the doctors may not be able to bring the patient out of the coma; the patient may end up suffering from severe neurological complications due to the coma; or a locked-in situation might occur, where the patient is conscious but unable to move or communicate.
3. The money spent constantly monitoring and caring for
one patient, on the slim chance that they survive, could
be better spent vaccinating 16,000 people with a
vaccine proven to be 100% effective.
4. It is still under debate whether the rabies virus affects neurotransmission, since conflicting evidence from autopsies shows that some cases do have widespread neuronal loss and tissue necrosis. So it isn't yet clear whether the Milwaukee protocol would have a beneficial effect.
5. All the survivors of the Milwaukee protocol had been
infected with weaker strains of the virus and had
unusually high levels of antibodies to begin with,
suggesting they had a stronger immune response. So it
may be that the treatment is only effective in treating
cases such as these where the immune response is
strong enough to clear the weaker viral strain from the
brain.
In conclusion, the Milwaukee protocol isn't an ideal treatment. The low success rate, high costs and ethical issues surrounding it make it unlikely ever to be extensively used or accepted as an effective treatment. Moreover, new developments in better diagnostic techniques and cheaper vaccines may make rabies a disease of the past.
However, if a rabies patient came into your clinic today, beyond the 30-day vaccination window, and the Milwaukee protocol was the only treatment possible, would you try it?

Caterina Hall (Year 13)


_______________________________________________________

Stellar Physics

Stars are one of the most intriguing aspects of physics, and we already know a surprising amount about them. They can change a huge amount during their lifetimes, beginning as a mere dust cloud and ending their lives as a neutron star or black hole. Many different processes occur within stars which have a huge influence on their overall size and luminosity.
The brightest, largest stars are known as hypergiants. As well as being highly luminous, in order to be classed as a hypergiant a star must have a high level of atmospheric instability and high mass loss. Hypergiants often lie close to the Eddington limit, the luminosity at which the outward force of the star's radiation pressure equals the inward pull of its gravity. Above this limit the star's radiation would be so strong that some of its outer layers would be thrown off, preventing the star from shining at such high luminosities for long periods.
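For reference (this formula is not given in the original article, but it is the standard one, assuming fully ionised hydrogen so that radiation pressure acts through Thomson scattering on electrons), the Eddington luminosity is

L_{\mathrm{Edd}} = \frac{4 \pi G M m_{p} c}{\sigma_{T}} \approx 3.2 \times 10^{4} \, \frac{M}{M_{\odot}} \, L_{\odot}

where M is the star's mass, m_p the proton mass and sigma_T the Thomson cross-section; the heavier the star, the brighter it may shine before radiation starts stripping its outer layers.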

[Diagram, left to right: hypergiant, main sequence star, white dwarf, neutron star, black hole]
This kind of delicate balancing act is also evident in other stars. Main sequence stars such as the sun maintain a self-regulating system of hydrostatic equilibrium. If the rate of energy production in the core declines, the radiation pressure outwards will decrease. This allows the star's mass to compress the core and increase its temperature and pressure, causing energy production to rise again. This keeps the star in a steady state.
However, this balance is not always present in a star's lifetime. Once a star uses up its hydrogen fuel, becoming a red giant, there is a chance it will have insufficient mass to generate the core temperatures required for the fusion of carbon, about 1 billion kelvin. This could cause an inert mass of carbon and oxygen to build up in its center, which would then cool, causing it to glow and take the form of a white dwarf. This could be very dense, with a mass comparable to the sun while being only the size of the earth.
One star well known for its incredible density is the neutron star, which is formed when a huge star undergoes a supernova explosion. One cubic meter of neutron star material would weigh approximately the same as all the water in the Atlantic Ocean, 400 billion tonnes, and the mass of a Boeing 747 would be compressed to the size of a grain of sand. Neutron stars can spin at up to around 700 times per second, bulging at the center due to centrifugal force, and have such a strong gravitational field that they can warp light and accelerate matter to around 100 million kilometers per hour. At this speed the nuclei of atoms would be fragmented upon impact with the surface of the star, potentially creating some nuclear fusion.
Perhaps the only thing denser than a neutron star is a black hole, which is thought to be nearly infinitely dense. Black holes have a strange effect on their surroundings. If an observer were to watch an object pass through the event horizon of a black hole (the point of no return), the object would apparently freeze, as light signals take longer and longer to escape the gravitational pull. In this time the light's wavelength would increase, causing it to be red-shifted and dimmer until finally taking the form of radio waves, invisible to the human eye. This sort of effect could never happen here on earth, and it is this element of unpredictability that makes stars so fascinating.

James Kershaw (Year 12)


_______________________________________________________

The Power of Capsaicin


Introduction
Capsaicin (Kap-Say-Sin) is the main active ingredient of chili peppers. From jalapenos to habaneros, this is the fascinating molecule that gives peppers their spicy, hot, eye-watering effects. In this article I will be exploring the compound in depth, explaining how it brings about the burning sensation we are all so familiar with, and the health benefits of including capsaicin in your diet.

Organic Structure & Properties
Capsaicin is an odourless, aromatic alkaloid. Its IUPAC identification is 8-methyl-N-vanillyl-trans-6-nonenamide and its molecular formula is C18H27NO3.
It is classed as a vanilloid, a group of compounds so named because it includes vanillin, the compound that gives vanilla its well-known flavour. All parts seen to the left of, but not including, the NH in the diagram make up the vanillyl functional group, otherwise known as 4-hydroxy-3-methoxyphenyl. Without the fatty hydrocarbon tail, capsaicin would be almost identical to vanillin, and would not have its fearsome spicy properties.
The long hydrocarbon tail is what gives capsaicin its fiery taste, as it allows it to bind to nerve cells and get the vanillyl group to interact with the lipoprotein nerve receptor for vanilloids (aptly named transient receptor potential cation channel subfamily V member 1, vanilloid receptor 1, or TRPV1). Peppers are 'hot' because the capsaicin encourages calcium ions to flood into nerve cells through their cell membranes, activating the same sensory pain response that physical heat burns do. This works because TRPV1 itself acts as a heat-activated calcium gateway. The capsaicin causes the gateway to open below the temperature range at which it normally would (below 37 degrees, human body temperature), tricking the nerves into responding as though to a physical heat stimulus. This is why peppers feel hot to eat without actually having a high temperature.
Ironically, consuming capsaicin can still cause tissue damage even though no actual hot food was eaten. By fooling neurons into signalling the presence of excessive heat, their response is to inflame the exposed tissue, which does cause damage if the capsaicin is in a high enough concentration.

The hydrophobic hydrocarbon tail renders capsaicin highly insoluble in water - this is why drinking water after eating hot food will not stop the burning; in fact it may aggravate the sensation by washing the capsaicin around the mouth to trigger the feeling in more places! Drinking milk is the best solution, thanks to a lipophilic (fat-loving) protein called casein, which binds to the capsaicin and rinses it away.

Health Benefits
Capsaicin also has numerous health benefits. It would seem counter-intuitive, but while capsaicin causes pain upon contact with our bodies, it can also be used as a pain-relieving agent. It's used in topical creams that are prescribed by the NHS to relieve aches caused by osteoarthritis, muscle pains and sprains. This works because, in high enough concentrations, when absorbed through the skin capsaicin causes so much calcium to flow into nerve cells that they are overwhelmed and unable to spread the pain signal for a significant duration of time. Upon regular use over a few weeks, the neurotransmitters within the neurons stop functioning, rendering the sensation of pain from the chronic condition much weaker. In this case, the specific neurotransmitter involved is Substance P, a neuropeptide that sends signals of pain and also plays a part in the process of inflammation.


Substance P
Substance P is a neuropeptide composed of eleven amino acid residues. Peptides are proteins formed from many amino acid monomers joined by amide bonds - that is, when the carboxyl (COOH) group of one amino acid forms a covalent bond with the amino (NH2) group of another. In our bodies they act as chemical messengers in the nervous system.
One health claim for including capsaicin in your diet is an increased metabolism. One myth is that it can aid weight loss; in fact, in the short term, it has been shown instead to decrease the rate of weight regain, by slightly shifting the focus of chemical oxidation from carbohydrates to fats and having the effect of a reduced appetite.

Other Uses


As the main component of police-grade pepper spray, capsaicin is also a defensive weapon. In high concentrations it is a powerful irritant that can incapacitate an aggressor - whether human, crocodile, or bear - when sprayed in the eyes or inhaled into the lungs. In animal testing it has even been found to cause corneal lesions, a sign that prolonged exposure could potentially cause permanent eye damage to humans too.
It may be extreme to say that spicy foods are mildly addictive, but consuming capsaicin has been proven to incite the release of endorphins (hormones that spread an uplifting feeling of pleasure) to counter the pain of the burning sensation. So next time you're looking for a cheap buzz, you could try visiting your local grocery store instead of the liquor store...


Connor Smieja (Year 12)

_______________________________________________________

Spacecraft Propulsion

Introduction
Spacecraft propulsion is the process of accelerating objects in space. The vast distances involved make it very difficult to colonize and explore space; Voyager 1, for instance, took nearly 35 years to travel 17 light-hours. The extreme conditions also make it prohibitively expensive, as many spacecraft need electrical warmers in order to keep their electronics functioning properly. Spacecraft propulsion has only really taken off in the last 50 years or so, and it is currently a very underdeveloped technology. Despite this, there have been continual improvements, and there is hope that the trend will continue.

Chemical Rockets
Perhaps the most widely used method is the chemical rocket, which relies on firing hot gases out of a nozzle. Most nozzles are of a certain type, known as the de Laval nozzle, which causes a choked flow: the gases inside the combustion chamber cannot easily escape. This pressurises the gas, so that it has to escape at supersonic or hypersonic speed.
[Diagram: a simple illustration of the de Laval nozzle; as the gas exits, the velocity increases while temperature and pressure decrease]
In addition, most chemical rockets come in two types:
- Monopropellant
- Bipropellant
Monopropellant rockets utilize a catalyst along with a fuel which readily breaks down (e.g. hydrazine), and then funnel this out of the exhaust. Typical exhaust velocities are 1700-2900 m/s.
Bipropellant rockets, on the other hand, use internal combustion, where a fuel (usually a highly purified kerosene or hydrogen) is combusted with an oxidizer (usually liquid oxygen). This mixture generates tremendous thrust, with exhaust velocities reaching 4500 m/s.



It is these rockets that have so far been used to escape Earth's relatively deep gravity well. Other forms of propulsion generate very low thrust compared to chemical rockets, but have higher efficiencies overall.

Electric Propulsion
Electric propulsion is currently the most practical way of travelling long distances in space. Chemical rockets quickly run out of fuel, whereas electric rockets have burn times measured in thousands of hours.
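The pay-off from those long burn times can be made concrete with the Tsiolkovsky rocket equation, delta-v = v_e ln(m0/m1). A short Python sketch (the exhaust velocities are the figures quoted in this article; the 5:1 mass ratio is an assumed example, not a real spacecraft):

import math

def delta_v(exhaust_velocity, mass_ratio):
    """Tsiolkovsky rocket equation: delta-v = v_e * ln(m0 / m1)."""
    return exhaust_velocity * math.log(mass_ratio)

mass_ratio = 5.0  # fuelled mass / dry mass (assumed example)
for name, v_e in [("bipropellant chemical", 4500), ("ion thruster", 80000)]:
    print(f"{name}: delta-v = {delta_v(v_e, mass_ratio) / 1000:.1f} km/s")

For the same propellant fraction, the ion engine delivers over seventeen times the delta-v; the catch is its millinewton-scale thrust.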
There are 3 primary types:
- Ion thrusters
- Electromagnetic thrusters
- Electrothermal propulsion




Ion thrusters accelerate positively charged ions out of the exhaust using powerful electric fields. The ion stream is then neutralized by a stream of electrons as it leaves the exhaust. The relatively low mass of the ions means the thrust generated is on the order of millinewtons (mN). However, efficiency is extraordinarily high, approaching almost 90%, and the exhaust velocities of the thrusters can hit almost 80,000 m/s. Over long periods of time, this method is extremely viable.
Electromagnetic thrusters are the siblings of ion thrusters, wherein all ions (including electrons) are accelerated, with no separate neutralizing step. They too have thrust measured in millinewtons, and velocities can be as high as 110,000 m/s.
Electrothermal propulsion is fairly different from this family of electrical thrusters, being more akin to chemical rockets. The idea revolves around heating a propellant to very high temperatures using electricity, and then blasting this out of the back using a nozzle. The method has not been explored much.

The Gravity Assist
Perhaps the most widely used method of greatly accelerating spacecraft is the gravity assist, whereby a spacecraft uses the gravity of a planet to slingshot forward at a higher velocity. The Juno probe, launched by NASA, flew by Earth on the 9th of October 2013, accelerating from 78,000 to 87,000 mph.

Other methods

Nuclear-based rockets also exist, but they are largely theoretical. One idea is to detonate nuclear bombs behind a rocket to propel it forward, but this is prohibitively expensive. Another relies on nuclear fission or fusion to generate electricity, which is then used to accelerate the spacecraft using the electric propulsion methods discussed above. Other ideas rely on accelerating the products of fusion out of the back using magnetic fields, while also using the electricity generated. However, most of these have not yet even been tested.
There are also the vast fields of theoretical spacecraft
propulsion, from the warp drive to the EM drive.

Conclusion
Electric and chemical rocketry currently represent the breadth of our knowledge about spacecraft propulsion. We have used them successfully so far, but they are fairly underdeveloped technologies. Compared to the speed of light, we currently travel at a snail's pace. In the future, perhaps more methods will become available to us as our knowledge increases. But ultimately, our future as a species lies out there.

Akhilesh Amit (Year 12)


_______________________________________________________


The Biophysics of Flight

Going Up

Planes, birds and insects would be much less impressive if they could not create lift.

The way they do this is often down to the shape of the cross-section of the wings when viewed side-on. This shape is known as an airfoil. From the leading edge to the trailing edge, the wing curves downwards. The air around the wing tends to follow this curve, so it gets sent downwards as well. Newton's Third Law tells us that the force of the wing pushing the air down is matched by an equal and opposite force from the air pushing the plane up, also known as lift.

This raises a question. What about wings that don't have this bend? What about the straight wings of the Wright brothers' original planes?
This puzzle has a very simple solution: whilst no air will be
bent down if the wings are parallel to the flow of the air,
air will be deflected downwards if you tilt the wing up.
For every wing, increasing the tilt increases the lift. But
you can have too much of a good thing. You eventually
reach a point when the path is too steep for the air to
follow it - so it doesn't. The lift reduces to nothing and the
plane finds itself in a less than desirable position: stall.
Lift depends on a large number of factors, one being this
'angle of attack', and another being the wing area. This is
intuitive: if the wing produces lift, a larger wing surface
means more lift. So to vary the lift, you can vary the
surface area.
Birds and planes alike do this. When a falcon dives, it folds
its wings so they become smaller. This reduces lift and
drag, neither of which it particularly wants.
Planes undergo a less drastic transformation. There are
slats at the front and flaps at the back which extend to
increase the surface area. You can see them from your
plane window during take-off and landing.
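These dependencies are summed up by the standard lift equation, L = ½ρv²SC_L, where ρ is the air density, v the airspeed, S the wing area and C_L the lift coefficient, which grows with the angle of attack until the stall. A small Python sketch with assumed example numbers (not taken from the article):

def lift_newtons(rho, v, s, c_l):
    """Standard lift equation: L = 0.5 * rho * v^2 * S * C_L."""
    return 0.5 * rho * v**2 * s * c_l

rho = 1.225  # kg/m^3, sea-level air density
v = 50.0     # m/s, a light-aircraft airspeed (assumed)
s = 16.0     # m^2, wing area (assumed)
for c_l in (0.4, 0.8, 1.2):  # lift coefficient rising with angle of attack
    print(f"C_L = {c_l}: lift = {lift_newtons(rho, v, s, c_l) / 1000:.1f} kN")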

Moving forward
On planes, jet engines use the conservation of momentum to generate thrust. If left alone in the middle of an ice rink with nothing but a sack of bricks, you would throw the bricks behind you to propel yourself forward so you could reach the edge. Jet engines do the same, but with air, and lots of it.
Birds do not have this luxury. Instead, when they flap their wings, they rotate them for part of the cycle. The wings keep on producing a perpendicular force, but whilst before it was directed above them, it is now directed in front of them.

Opposition
However great the thrust and lift forces may be, they will always be opposed by weight and drag. Surprisingly, the weight of a plane can decrease significantly over the course of a single flight: the airplane gets lighter as it uses up more and more fuel.

Drag is often classified into two different types. The one we are most familiar with is pressure drag. If you go skydiving, a parachute will slow you down as you descend to earth because there is a higher pressure beneath the parachute than above it. In other words, the air molecules pushing you up exert a greater force than those pushing you down.
But there is always friction drag as well. Air 'rubs' along the parachute, causing friction which slows it down.
The balance between pressure and friction drag is rarely equal. Planes and birds are affected by pressure drag more than by friction drag, whilst for tiny micro-organisms it is the friction drag that is dominant.
Another type of drag is induced drag. This is a side-effect of lift and is generated at the tip of a wing. To reduce it, you can make the tip smaller. But doesn't this reduce the wing area and hence compromise on the lift? To avoid losing out, the wings can be made longer.
As a result, most birds have wings that are long and
slender rather than short and stubby. You get the same
wing area but a smaller tip. However, as often happens in
matters of practicality, there is a limit. What use are long
wings to a bird if it has to manoeuvre around trees in a
dense forest? Why give a plane incredibly slender wings if
this puts the structure at risk of breaking?
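The standard textbook way to express this trade-off (not given in the original article) is the induced-drag coefficient, which falls as the wing's aspect ratio - span squared divided by wing area - rises. A short sketch, with the span-efficiency factor e assumed at a typical 0.8:

import math

def induced_drag_coefficient(c_l, aspect_ratio, e=0.8):
    """Classic formula: C_Di = C_L^2 / (pi * e * AR)."""
    return c_l ** 2 / (math.pi * e * aspect_ratio)

for aspect_ratio, label in [(6.0, "short, stubby wing"), (12.0, "long, slender wing")]:
    c_di = induced_drag_coefficient(1.0, aspect_ratio)
    print(f"{label} (AR = {aspect_ratio}): C_Di = {c_di:.4f}")

Doubling the aspect ratio halves the induced drag - hence the long, slender wings of gliders and albatrosses.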

Manoeuvring
A bird or plane is going to struggle if it can't steer. It can do so by changing the shape of its wings so one side produces more lift than the other. If the right wing produces more lift than the left, then it will bank to the left. Birds change the shape of their wings using their muscles. Insects do so using the tiny veins in their wings.


Nowadays, plane wings change shape thanks to moving flaps on the wings which are controlled electronically. However, in the very early days of aviation, a different method was used, called wing-warping. A pilot would bend the wings directly by pulling on a system of cables as required - not for the faint-hearted.


Elena Rastorgueva (Year 12)

_______________________________________________________

Why is engineering the key to a strong economic future in the UK?

Engineering is the application of scientific and mathematical principles to real-world challenges. While the necessity and importance of engineering have rocketed in the UK in recent years, with the development of new technologies and the widespread demand for a greater quality of life, the engineering sector has largely stagnated, with the economy shifting increasingly towards services, as Figure 1 shows. While in 1970 the engineering sector accounted for 32% of the UK's GDP, double the contribution of 16% from services, services now contribute three times more to GDP than engineering (ONS Blue Book, 2009).

[Figure 1 - Share of UK economy in business services, engineering and finance]

A first reason why engineering is so vital to the future of the UK is that the government must strive to achieve balanced economic growth. Aggregate demand in the UK is, at the moment, heavily biased towards and dependent upon consumption. Such dependency, with investment and exports lagging, makes the UK economy fragile and vulnerable. A shock to consumption, which could be caused for instance by a rise in interest rates incentivising saving and increasing the cost of borrowing, could cause economic collapse and disaster for the UK, with no other components of AD to fall back on. Consequently, it is imperative that the UK acts now to stimulate investment and exports. The Director General of the Confederation of British Industry, John Cridland, endorses this view, arguing that the key to the UK getting back on track is 'growth, founded on a rebalanced economy geared much more towards manufacturing and export'. The latter is especially applicable to engineering. The root cause of the UK's modest exports is the fact that, as a nation, we do not manufacture nearly enough consumer goods. With the regulation and costs which come with production in the UK, it would be foolish to suggest that the UK should dedicate its resources to the manufacturing of low-quality goods, as businesses would find it impossible to compete with such economies as China. However, the opportunity for the UK lies in the engineering and development of higher-quality goods, which it can then export to the rest of the world. The UK has the infrastructure, the capital and, if effective training schemes are introduced, the human resources required to truly lead the world in the manufacture of high-quality goods. While we cannot compete at the bottom end of the market due to our high costs of production, we can compete at the top end, where emerging economies lack the resources that the UK possesses. This is our niche area where our exports can compete, and it is imperative that we exploit it so as to avoid economic stagnation and being overtaken by the BRIC countries. The challenge therefore is how to seize this opportunity and produce these high-quality goods, which require research, innovation and human skills, all of which come under the category of engineering. UK gross investment in Research and Development was modest at 1.8% in 2010, compared with 3.2% in the USA and 2.8% in Germany. A lower percentage of firms are deemed 'innovation active' in the UK than in the rest of the developed world, at just 36% (statistics from 'Jobs and growth: the importance of engineering skills to the UK economy'). However, the turnover from this limited innovation is often the best in Europe. This portrays perfectly the unfulfilled potential which exists in the engineering sector. If the government invests in engineering research, innovation... technologies emerge and new engineering opportunities arise, it will indeed be the key to a strong economic future in the UK. It would be greatly beneficial for the UK to embark upon a programme of supply-side policies in order to inspire and equip young people to lead the industry into the next generation.
_______________________________________________________

Why is engineering the key


to a strong economic future
in the UK?

Engineering is the application of scientific and


mathematical principles to real world challenges. While
the necessity and importance of engineering has rocketed
in the UK in recent years, with the development of new
technologies and the widespread demand for a greater
quality of life, the engineering sector has largely stagnated
with the economy shifting increasingly towards services, as
Figure 1 shows. While in 1970, the engineering sector
accounted for 32% of the UKs GDP, double the
contribution of 16% from services, services now contribute
three times more to GDP than engineering (ONS Blue
Book, 2009).

Figure 1 - Share of UK economy in business services,


engineering and finance

This clear and continuing trend is, for the reasons detailed
below, a cause for great concern for the UK government.
Engineering is a sector where the UK has the potential to
surge ahead of the competition and, particularly as new

38

SAINT OLAVES ACADEMIC JOURNAL


and skills, the UK can greatly strengthen its exports and in
doing so, rebalance what is currently a frail and
susceptible economy. However, it should be stressed that
such an approach would take years to implement it is a
long term solution. For instance, changes made to school
curriculums to develop more engineering skills will take a
generation to impact upon the economy. The dependence
on consumption in the UK economy is an immediate threat
and not one that can be left unaddressed for such a period.
Hence, it could be argued that developing engineering is
not the ideal solution to the imbalance in the makeup of
UK aggregate demand, and that other measures with
lesser time lags should be pursued.
Another reason why engineering provides the key to a strong economic future in the UK is the growth and development of technology, which provides new engineering opportunities. The services sector has served the UK well in recent years, as London particularly has surged ahead of its foreign rivals in areas such as finance, law and management consultancy. However, emerging economies have begun to diversify and are now beginning to compete in the international services sector. "Activity in India's services sector grew at its fastest pace in well over a year in June, as new business poured in," according to Reuters, which also refers to the broadening of the Chinese economy into services. Consequently, opportunities for services in the UK are declining as business is leaked to emerging economies. However, opportunities are ever increasing in the engineering sector as new innovation provides potential new routes to economic growth. An example of this is the biomedical engineering sector which, through the development of new technological capabilities, has emerged as an area in which the UK can dominate the world market. According to Design News Magazine, the biomedical engineering sector is set to grow 62% by 2020. This is an encouraging example for the UK, and it is vital that the use of new technology is extended to other, somewhat stagnant, engineering sectors. Aerospace and chemical engineering are both forecast to grow by less than 10% by 2020, yet if these industries can embrace innovation and apply it to their fields, there is great potential for them to move forward. In this way, the application of new technology in engineering is an exciting chance for the UK to dominate an international market, diversify its economy and bring sustained economic growth. It is thus imperative that steps are taken so that the engineering skills and infrastructure are in place to apply and use technology in the sector. However, developing technology in engineering is a risky area for the government to invest in. While the UK may be able to develop new products, it is relatively easy for these to be copied by other countries' businesses. In such cases, the UK pays the cost of research and development, but other economies reap much of the reward. A similar example, on a company rather than national scale, was when Samsung allegedly copied Apple's iPhones in the way in which they designed their Galaxy smartphones. As such, while the UK may be able to develop innovative products, it may not be the sole beneficiary of them, and so developing technology in engineering may not be such an economically attractive prospect.
A final reason why engineering is integral to a strong
economic future in the UK is the positive impact that
engineering can have on so many sectors of the economy.
Investment in engineering creates a positive multiplier
effect as the skills acquired by trained engineers and the
goods they produce can be utilised all over the economy.
Figure 2 below shows how, while many thousands of
workers in Science, Engineering and Technology are
employed in the manufacturing and construction sectors,
over 800,000 work in either Business Services or
Computing. Engineering qualifications are numerically
challenging and give students a wide range of transferable
skills which are greatly valued by employers in various
sectors across the UK. This is demonstrated by the fact
that engineering graduates are the second highest earners,
according to a recent Telegraph study.

Figure 2 - Number of Science, Engineering and Technology workers in various sectors

In a similar way to engineering skills, new products developed by engineers can also bring greater efficiency
and greater profit to businesses in other sectors, and
greater growth overall in the economy. For instance, the
mechanisation of a factory by an engineering firm may
increase the efficiency of a retailer's production line, cut
costs and increase profit. In such a way, engineered
products can have a beneficial effect on almost every
industry in the UK. More investment is thus required in
inspiring and educating engineers, which would create
growth not only in manufacturing, but also across the
economy as a whole. This multiplier effect is yet another reason why engineering is so important to the UK's economic future. However, innovative engineering can occasionally have negative impacts on the economy. Returning to the factory example above,
mechanisation may improve efficiency, cut costs and
improve profit margins, but it will likely put many people
out of jobs, meaning the income and purchasing power of
the population falls. In this way and others, engineering in
other sectors can sometimes lead to economic problems as
well as benefits.



To conclude, engineering provides an opportunity for the
UK to dominate a world market and rebalance its economy,
while also taking advantage of new technological
innovation. The engineering sector has somewhat
stagnated over the past 40 years as the UK economy has
become increasingly geared towards services. Now is the
time for the government to intervene and reverse this
trend. The engineering of high quality products provides an
ideal chance for the UK to boost its struggling exports and
address the reliance of aggregate demand on consumption,
an immediate threat to the economic recovery. While
competition in services will intensify greatly in the coming
years with the diversification of emerging economies, the
UK has the infrastructure and potential to set itself apart
in the world market for engineering. It would thus be
advisable for the UK government to increase its
investment in supply-side policies to establish a more
prominent engineering sector with more skilled workers, to
lead the UK into a strong economic future.

Daniel Fargie (Year 12)


_______________________________________________________

How Physics
completed Chemistry

When I said "completed Chemistry", I meant it in the loosest sense possible, as for the most part it was the chemists who did most of the work; and when I said "Chemistry", what I actually meant was the Periodic Table, one of the most important tools for chemists. I just wanted an interesting title.
In all honesty, the contribution of Physics to completing one of the most valuable chemists' tools in history is undeniably massive. However, it is a contribution that is not often mentioned in Chemistry class. This article aims to give the physicists the credit they deserve, and to show that the sciences are more interlinked than they seem at first glance.

The First Intervention


In the late 19th century and the early 20th, major discoveries and breakthroughs in the world of physics bombarded the chemistry world with revelations and great advancements. The first came in 1895, when the German physicist Wilhelm Conrad Röntgen discovered X-rays; they would soon be put to use by a young physicist named Henry Moseley, and earned him lasting fame despite his very short scientific life.
When Röntgen's discovery was published, a French physicist called Henri Becquerel became very interested in it, because of some similarities it showed to another observation he had made of phosphorescence. However, his observation had nothing to do with X-rays (though they helped many others to discover new elements) and a lot to do with radioactivity. His discovery of this phenomenon led Marie and Pierre Curie to discover polonium and radium; it was they who coined the term 'radioactivity'.

With radioactivity came a new concept that had previously been rejected by many, not least by the legendary chemist whose name goes with the Periodic Table as much as Newton's does with the Law of Gravity, or Einstein's with General Relativity. Dmitri Mendeleev was a fundamentalist in that he believed elements were individual, and he rejected the idea that transmutation could ever occur. But with radioactivity, we now know that it is possible for one element to be transformed into another.
Later, in 1911, after analysing atomic scattering results, Rutherford concluded that the nuclear charge of an atom (caused by protons, though he did not know this yet) is approximately half of its atomic mass, i.e. Z ≈ A/2; nitrogen, for example, has mass 14 and nuclear charge 7. This was supported by Charles Barkla, who did an independent experiment with X-rays (told you they were important). Not much was thought about this until an amateur in science came along and beat the experts to a revelation.
In 1907, Anton van den Broek, a Dutch economist, modified Mendeleev's periodic table and proposed that there are 120 elements, presenting a table with many gaps to be filled in. He hypothesised a particle he called the 'alphon', weighing 2 atomic weight units (what we would now relate to hadrons), with each subsequent element containing one more alphon. When Rutherford and Barkla published their findings, van den Broek published a new article later in the year, dropping any mention of the alphon but now saying that the number of possible elements is equal to the number of units of atomic charge. This still agreed with what he had said earlier, as 1 unit of charge corresponds to an increase of approximately 2 atomic weight units. And of course this makes sense, as each subsequent element in the periodic table has one more proton, or, as van den Broek knew it, one more unit of atomic nuclear charge.

And so came the Atomic Number


Although the amateur van den Broek had beaten the experts to the new discovery, not much was made of it until the young English physicist Henry Moseley finished the work and came up with the atomic number. Though he died at the age of 27 while serving in the First World War, his work surely marked his name into the stones of history. Before
Moseley, all chemists had arranged the elements by atomic weight, as it was all they knew at the time. Even the famous names of Newlands, Odling, Hinrichs, Lothar Meyer and, above everyone else, Mendeleev had arranged their very own periodic tables by atomic weight.
Moseley's experiments consisted of bouncing high-energy light off various blocks of pure elements and recording the X-ray frequency emitted by each one. These emissions occur because the high-energy light ejects an inner electron, causing an outer electron to fall into the space it leaves behind; thanks to Bohr's quantum leaps, we know this emits a photon of a specific frequency. At first Moseley did this with 14 elements, 9 of which were consecutive elements in the periodic table, from titanium to zinc. What he found was that if he plotted the frequency of the emitted X-rays against the square of the number representing the position of the respective element, he obtained a straight-line graph.
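That straight line is what we now call Moseley's law, which can be written as

$$\nu = k_1\,(Z - k_2)^{2}$$

where $\nu$ is the X-ray frequency, $Z$ the position (atomic) number and $k_1$, $k_2$ constants; equivalently, $\sqrt{\nu}$ is proportional to $Z - k_2$.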
This of course confirmed van den Broek's hypothesis that the elements can be arranged in a sequence of integers. Later on (in 1920, in fact) Ernest Rutherford named the particle he had discovered the year before by bombarding nitrogen-14 with alpha particles to form oxygen-17 (¹⁴N + α → ¹⁷O + p): he dubbed it the proton. We finally came to the conclusion that the elements are arranged by how many of these protons they have in their nucleus, with each subsequent element having one more proton than the one before.

Quantum Physics and Electron Configuration

As we all know, when we get down to scales as small as the atom, the effects of Quantum Mechanics become more apparent than those of Classical Mechanics. Therefore, we cannot close the case on the Periodic Table without looking at how Quantum Physics put in the final piece of the puzzle and helped complete the modern Periodic Table.

I hope that when I mention Quantum Physics, we are all familiar with the good old stories of black-body radiation, of Planck's discrete packages ('quanta'), of Einstein's contribution of the photoelectric effect, and of Bohr's theory of the model of the hydrogen atom. This is because Bohr's ingenious work, and all the work that came before and led to it, plays a massive part in how the Periodic Table looks today.

When Bohr completed his work on the hydrogen atom model in 1913, he of course did not just stop there. He went on to apply his theory by generalising the one-electron model to many-electron atoms. This obviously poses a question about the validity of such a step, but I'm not qualified to question it. In any case, Niels Bohr was set on establishing how electrons arrange themselves in the atom.

What was strange was that Bohr did not turn to any mathematical equations, or even apply any quantum theories to the subject matter, but used chemical knowledge to help him figure out the electron configurations. For example, he knew that boron can make up to 3 separate bonds; therefore it, and all other elements in the same group, must have up to 3 electrons in the outer shell. By sheer deduction, the physicist Niels Bohr made the first step in giving the electron-based explanation of why elements in the same group, like fluorine, chlorine, bromine and iodine, all behave in the same way. This, however, gave way to a fatal flaw: he also assumed that the group 5 element nitrogen (which in most cases forms 3 bonds) has 3 outer-shell electrons; we know better now that it has 5 electrons in its outer shell. This gives rise to a very strange yet logical table of electron configurations.

A photo taken from A Tale of 7 Elements by Eric Scerri, depicting Bohr's original electronic configuration

Later on, Arnold Sommerfeld in Germany suggested that the nucleus might lie at one of the foci of an ellipse rather than at the centre of a circular atom. Whereas Bohr's model used only one quantum number to specify a shell or orbit, Sommerfeld's model required a second quantum number to describe the elliptical path of the electron, which meant Bohr had to introduce subshells within his shells. This led to Bohr releasing a more detailed set of electron configurations in 1923.

Bohr's set of electron configurations in 1923



Finally, Edmund Stoner found a third quantum number, and in 1924 Wolfgang Pauli discovered the need for a fourth. The fourth number comes hand in hand with the theory that an electron will take on one of two angular momentum states, now known as quantum spin. All four quantum numbers are related to each other by a set of relationships, and together they explain why the modern electron configuration tells us that successive shells contain 2, 8, 18 and 32 electrons respectively.

Combination of four quantum numbers to explain the total number of electrons in each shell
This is the relationship: the first quantum number, n, can take any integer value starting from 1 (n corresponds to the shell it represents). The second quantum number, labelled ℓ, can take any of the following values related to n:

ℓ = 0, 1, 2, ..., n - 1

So for n = 3, ℓ can take any value of 2, 1 or 0. The third number, let's call it m, can adopt values of ℓ in these ways:

m = -ℓ, -(ℓ-1), ..., 0, ..., (ℓ-1), ℓ

So if ℓ = 2, m can be -2, -1, 0, 1 or 2. And last but not least, as we all know, the spin state of an electron can only be +1/2 or -1/2.

This forms the modern electron configuration we know, and how each shell is separated into s, p, d and f sub-shells. This is why the modern Periodic Table looks the way it does, with s, p, d and f blocks of elements.
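As a quick sanity check of these rules, here is a minimal Python sketch (my own illustration, not part of the original article) that enumerates the allowed quantum numbers and counts the electrons each shell can hold:

# Count the electrons each shell can hold by enumerating
# the allowed quantum numbers (n, l, m, spin).
for n in range(1, 5):               # shells n = 1 to 4
    electrons = 0
    for l in range(0, n):           # l = 0, 1, ..., n-1
        for m in range(-l, l + 1):  # m = -l, ..., 0, ..., +l
            electrons += 2          # two spin states: +1/2 and -1/2
    print(f"shell n={n}: {electrons} electrons")

Run as written, this prints 2, 8, 18 and 32, matching the shell capacities quoted above.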

Quang Tu (Year 12)


_______________________________________________________

The Truth about Confirmation Bias

Introduction

The human habit of trying to gather patterns from the environment to support preconceived, personal hypotheses can have an interesting effect: it can hamper the scientific method and possibly degrade the reputation of a scientist. Whether intentionally or not, studies have shown that human beings tend to fall prey to the urge to confirm their own beliefs and so draw false conclusions, even though the correlation of two events does not prove that one causes the other. As a result of false perceptions shaped by personal beliefs, even the most reputable academics may (unintentionally) introduce a bias into their research and presentation of collected data. This particular bias is known as the confirmation bias.

What is Confirmation Bias?


Confirmation bias is also known as selective collection of evidence. It is considered an effect of information processing whereby a person's thought process drifts in the direction of making their expectations come true. People tend to favour information that confirms their preconceptions or hypotheses independently of that information's truth or falsity. In the context of the scientific method, this could lead to a scientist collecting ambiguous data and, through false interpretation, confirming their existing position and drawing biased conclusions.
From the standpoint of the scientific method, one consequence of the confirmation bias is a phenomenon commonly known as illusory correlation, where one perceives, and conducts one's testing to confirm, a relationship between variables when no such relationship exists. A common everyday example is the use of stereotypes, which lead people to assume that a certain group of people share a certain trait, resulting in an overestimation of the strength of the association between these two variables.
If one were to conduct an experiment to explore a stereotype, every instance in which the preconceived stereotype is shown to be true would fuel the confirmation bias, and the person would grow surer that a true relation must exist between the two variables of the stereotype. Equally, every instance in which the preconceived stereotype is shown to be false should knock down the stereotype and show that no true relationship exists between its variables. However, the tendency of human nature is to present a false interpretation of the data collected and arrive at a biased conclusion that conveniently confirms the assumed hypothesis, disregarding any falsity of the illusory correlation.

Why do confirmation biases occur in the scientific community?

Psychologists Jennifer Lerner and Philip Tetlock propose two distinguishable thought processes that may explain the cognitive action of the brain as confirmation bias leaks into conclusions. Exploratory thought considers all points of view and anticipates any objections to a position, while confirmatory thought seeks to justify one's own specific point of view. These two thought processes have been borne out by numerous studies showing that most people tend to engage instinctively in confirmatory thought towards a hypothesised idea in which they have some background knowledge.

Confirmation bias is often described as a result of automatic, unintentional strategies rather than deliberate deception. However, an alternative explanation provided by Robert MacCoun suggests that biased evidence processing is due to a combination of cognitive and motivated mechanisms.

From the cognitive approach, the confirmation bias stems from people's limited capacity to handle complex tasks, which leads them to take shortcuts to reach conclusions. These shortcuts are known as heuristics: simple, efficient rules used by people to form judgements and make decisions. A heuristic is a mental shortcut that focuses on only one aspect of a complex problem rather than considering all aspects to create a balanced and holistic conclusion. Errors and deviations, such as cognitive biases, creep into heuristic-based conclusions and can lead to a researcher's work being dismissed as unreliable. From the motivational approach, the confirmation bias can result from the desire to believe, sometimes called wishful thinking. It is known that people prefer pleasant thoughts over unpleasant ones in a number of ways: this is called the Pollyanna principle. When this principle is applied to the interpretation of sources of evidence, it could explain why desired conclusions are more likely to be believed true.

How can we remove confirmation biases?

With the term 'confirmation bias' coined by the English psychologist Peter Wason, studies were conducted in the 1960s to conceptualise and explain the phenomenon. The results were interpreted to show that people generally have a tendency to test ideas and hypotheses in a one-sided way, keeping their focus on the expected result rather than using neutral, scientific methods that involve the consideration of alternative explanations to established preconceptions.

The Black Swan, a book published in 2007 by Nassim Nicholas Taleb, introduced the metaphor of the black swan, which describes any event that comes as a complete surprise. Long ago, people held the collective belief that all swans were white, and every sighting of a white swan reinforced this train of thought; however, when explorers reached Australia, they found black swans. This makes for an interesting analogy, taken up by the scientific community, about the fragility of a system of thought: a set of conclusions may come undone once its fundamental postulates are disproved. In this case, the observation of a single black swan undoes the logic of the system of thought, as well as any reasoning that followed from it.

The underlying point of this metaphor is that a hypothesis may never be proven true; in fact, the scientific community sets out to disprove systems of thought. Only if the postulates cannot be disproved may we treat a system of thought as approaching the truth.

Chandan Dodeja (Year 12)

_______________________________________________________

How 300mg of
Aspirin can turn your
day around

Introduction
At some point in our lives, we will all take some form of drug, be
it for pain relief, clearing up infections or various other
reasons. But how do the symptoms of a headache disappear
after swallowing an aspirin? What processes occur inside
the body as a drug works?


Most synthetic drugs are classified as xenobiotics, from the Greek xenos ('stranger') and biotic ('relating to living beings'). A xenobiotic is therefore a substance, in this case a synthetic drug, present in an organism that cannot synthesise the substance naturally. Once
administered, drugs work by entering transport mediums
in the body, such as blood and other bodily fluids. Here,
they are able to travel to the site in which they are
intended to work. Within the site of action, they carry out
their function by binding to receptors usually located on
the outside of cell membranes, or on enzymes located inside
the cell.

Receptors
Receptors are protein molecules that respond to signals
such as chemicals, and cause a physiological change in the
area of the body that they correspond to. For example,
narcotic pain relievers such as morphine work by binding
to receptors in the brain responsible for sensing pain, and
they decrease the activity of these receptors. Non-narcotic pain relievers such as aspirin, by contrast, work in the localised
area of pain, e.g. the back, by binding to an enzyme in the
cells of the back responsible for producing prostaglandins,
which are bioactive molecules that cause pain and an
inflammatory reaction.

Pharmacokinetics
Pharmacokinetics is a branch of pharmacology that looks
at the movement (kinetics) of a drug as it enters, acts and
exits the body. To study the pharmacokinetics of a drug,
volunteer patients take the drug, then blood and urine
specimens are collected before they undergo quantitative
analysis. There are four stages in the action pathway of a
drug: absorption, distribution, metabolism and excretion.
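To give a flavour of the quantitative analysis involved, here is a minimal sketch of the simplest one-compartment model, in which plasma concentration decays exponentially after an intravenous dose. The numbers (dose, volume of distribution, half-life) are illustrative placeholders, not data for any real drug:

import math

def concentration(dose_mg, vd_litres, half_life_h, t_hours):
    """Plasma concentration (mg/L) t hours after an IV bolus,
    assuming a one-compartment model with first-order elimination."""
    c0 = dose_mg / vd_litres          # initial concentration
    k = math.log(2) / half_life_h     # elimination rate constant
    return c0 * math.exp(-k * t_hours)

# Illustrative values only: 300 mg dose, 10 L volume of
# distribution, 3 h half-life, sampled every 3 hours.
for t in range(0, 13, 3):
    print(t, "h:", round(concentration(300, 10, 3, t), 1), "mg/L")

Each half-life halves the concentration (30, 15, 7.5 mg/L and so on), which is exactly the pattern a pharmacokinetic study looks for in the blood specimens.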

Absorption
Absorption is the process by which a drug is made available
to the transport mediums of the body, such as blood,
lymph, plasma, serum, aqueous humor etc. There are eight
ways in which a drug can be absorbed into the body:

Orally administered
Intravenous (IV)
Intra-nasal
Smoking (inhalation)
Sublingual (under the tongue)
Intra-muscular
Subcutaneous (under the skin)
Percutaneous (through the skin)

The rate of absorption of orally administered drugs and the subsequent appearance in the blood stream is dependent
on the following factors:

Rate of the disintegration or dissolution of the pill or capsule in the stomach or gastrointestinal tract
The solubility of the drug in the stomach or intestinal
fluids (the more soluble, the quicker the rate of action)
The molecular charge on the drug molecule in relation
to the cell membrane
Aqueous solubility versus lipid solubility (aqueous-soluble drugs dissolve readily; however, they do not pass through the phospholipid bilayer easily)
The presence or absence of food in the stomach
The presence of any medications that influence
gastrointestinal motility.

Distribution
Following on from absorption through the stomach or the
gastrointestinal tract, the drug enters the circulatory
system, where it is distributed to most areas of the body
where there is blood flow. Organs with high blood flow, for
example the brain, heart and liver are the first to
accumulate the drug. Meanwhile, connective tissue and
organs with less blood flow are the last to accumulate the
drug. Once the drug molecules arrive at the intended site
of action, they then bind to the corresponding receptors and
carry out their function.

Metabolism
For a drug to be excreted it must first be inactivated: its chemical make-up is altered to make it available for excretion. This process is called metabolism, detoxification or biotransformation.
For example, the metabolism of ethanol is as follows:
1. The alcohol molecule is metabolised in the liver by an enzyme called alcohol dehydrogenase
2. The enzyme converts the alcohol to acetaldehyde, which causes dilation of the blood vessels and, after accumulation, is responsible for the subsequent hangover
3. The acetaldehyde is then converted by a second enzyme, aldehyde dehydrogenase, to acetate, which is similar to acetic acid, or vinegar as it is better known.

Excretion
Excretion is the process by which a drug is eliminated from
the body. Drugs can be excreted by various organs, such as
the kidney or the lungs. They can be found after excretion
in biological matrices such as bile, sweat, hair, breast milk or tears.

Conclusion
In conclusion, research into how best to alter or enhance
the pharmacokinetics of a drug is ongoing, and is the key to
being able to deliver a drug for its intended use in medicine
in the most efficient and effective way. Professionals such
as medicinal chemists, pharmacologists and pharmacists
are at the forefront of this research, alongside research
scientists and others.

Matipa Chieza (Year 12)


_______________________________________________________

The Story of the Atomic Structure


Where it all began


Once upon a time, around 400 BC, there lived a Greek philosopher called Democritus. He was the founder of the idea of the atom. He reasoned that matter was made of countless tiny things, believing that you could only split matter a certain number of times before ending up with a particle that couldn't be split: the atomos ('indivisible' in Greek), too small to be seen. This created the indivisible solid sphere model.
However, a different philosopher, also in the fifth century BC, called Aristotle immediately dismissed Democritus's theory: he thought the atom did not exist and that everything was made of four elements: fire, water, earth and air.

Between 500 BC and 1720, a new group of scientists emerged: the alchemists. They gave rise to new theories and actually experimented (although without controls) to prove them. They used a mixture of science and mysticism; they even wanted to live forever, and tried, unsuccessfully, to create a potion to that end.

2000 years later

In 1777, Antoine Lavoisier, sometimes known as the 'Father of Modern Chemistry', named oxygen and hydrogen and drew up the first table of 33 elements, a significant step towards our understanding of atomic structure.

In 1803, John Dalton, building on the ideas of Democritus, developed the atomic theory. His predictions were:
1: Elements are composed of small indivisible particles (atoms)
2: Atoms of the same element are identical. Atoms of different elements are different.
3: Atoms of different elements combine together in simple proportions to create compounds.
4: In a chemical reaction, atoms are rearranged, but not changed.
Dalton created his own atomic table with atomic masses. This has become a piece of history, as his predictions still hold and can be applied to modern-day Chemistry. Dalton's atomic model was called the billiard ball model: the atom was thought to be a uniform solid sphere.

In 1896, Henri Becquerel discovered radioactivity, whose three types we now distinguish as alpha (α) - positive, beta (β) - negative and gamma (γ) - neutral.

In 1897, the hunt for the correct atomic model began; it started off with Joseph John (JJ) Thomson. Through experiment after experiment using cathode rays, Thomson was finally able to conclude that atoms were made of smaller particles, which were named electrons. This conclusion came from the finding that cathode rays were streams of negatively charged particles (electrons) that were deflected by magnets and electric fields and had an extremely small mass. With this information, Thomson created the plum-pudding atom: a sea of positive charge with negative electrons moving around inside.
In 1908, Hans Geiger's work on the Geiger-Marsden experiment began the chain of discoveries that led to the atomic nucleus. In 1911, Ernest Rutherford brought about the downfall of the plum-pudding atom. He directed alpha particles towards a gold
foil and measured any deflections. According to the plum-pudding model, the foil should hardly deflect alpha particles at all. Most indeed weren't deflected, but a few were deflected at large angles and a few even came straight back towards the source. This meant the positive charge was concentrated, leading to the creation of the nuclear atom. (In 1918, Rutherford discovered the proton.)
In 1913, Niels Bohr slightly changed Rutherford's model, proposing the Bohr model, in which electrons orbit the nucleus in fixed paths.
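Those fixed paths come with fixed energies. For hydrogen, the Bohr model gives the well-known result

$$E_n = -\frac{13.6\ \text{eV}}{n^{2}}$$

where $n = 1, 2, 3, \dots$ labels the orbit, so an electron jumping between orbits emits or absorbs a photon of a definite energy.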
In 1922, the Planetary/ Solar System Model was created.
This made electrons orbit a central nuclear sun in shells.
This was a combined effort of Rutherford and Bohr.
In 1923, Louis de Broglie suggested a particle could behave as both a wave and a particle.
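De Broglie's relation ties the two natures together: a particle with momentum $p$ has an associated wavelength

$$\lambda = \frac{h}{p}$$

where $h$ is Planck's constant.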
In 1926, the Electron Cloud Model was created by Erwin Schrödinger. He suggested that the electron had wave properties, introducing the idea of orbitals (regions around the nucleus). The electrons move so fast, and so unpredictably, that they appear to form a cloud.
In 1932, James Chadwick finally discovered the neutron by shooting alpha particles at light elements. This emitted a new radiation, which he discovered was made of uncharged particles, now known as neutrons. Rutherford's nuclear model was kept, with the addition of neutrons.
As the future unfolds, more and more new theories will be
proposed for the atomic structure, science will become a
whole different subject, and no one can tell what will happen, so we are going to have to wait and see.

Danielle Hasoon (Year 12)


_______________________________________________________

Real World Applications of Sci-Fi technology

Any fans of the Sci-Fi genre who have spent hours
daydreaming about playing with a real light sabre or firing
a blaster should cheer up with the knowledge that
physicists around the world share this thought, but unlike
most of us, have the expertise and the technology to try
and make these dreams a reality. The concept of using Sci-Fi as the basis for developing
cutting edge technology may
seem ridiculous at first but is in
actual fact not that unusual.
One well known example is the
mobile phone, largely based
around the communicator first
seen in 1966 in the original Star
Trek series. This technology seemed an impossible feat at the time, but 7 years later the first mobile phone was released, and today our technology far outmatches the communicator. Similar ideas are now being
suggested with other examples of technology and one which
has recently caught the imagination is the light sabre, from
the Star Wars universe. Described by Obi-Wan Kenobi as 'an elegant weapon from a more civilised age', the light sabre recently grabbed headlines after an experiment at MIT accidentally caused photons to bind together in a way reminiscent of the light sabre's blade. The experiment involves cooling rubidium atoms to near absolute zero and firing single photons at the cloud of atoms. The intention was to cause the photons to slow down, in an effect similar to the refraction of light we see every day in glass, but what amazed the scientists was that pairs of photons exited the cloud together, as if they were molecules. At present this experiment is not particularly useful, as only 2 photons were combined, but if it could be scaled up it could theoretically be used to create large structures out of pure light. The professor running the experiment, Mikhail Lukin, also described how the photons behaved just like molecules, appearing to have mass and being able to block an object, such as another 'molecule' made of photons. This technology is in its very early stages, limited by problems such as the difficulty of producing large molecules and the energy requirements of making the lethal weapon we see Jedi knights battle with. But who knows: in a few years' time, we could see a company taking this technology and making it happen.
Another Sci-Fi technology I will explore is one essential to the space-age worlds of Star Trek and Star Wars: a warp, or hyperspace, drive. The first key problem with such a device is that it involves travelling faster than the speed of light, which would defy the laws of physics first proposed by Albert Einstein.
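The obstacle can be stated precisely with the standard result from special relativity: the energy of a mass $m$ moving at speed $v$ is

$$E = \gamma m c^{2}, \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}$$

and the factor $\gamma$ grows without limit as $v$ approaches $c$, so no finite amount of fuel can accelerate a massive ship up to, let alone past, the speed of light.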

Simply accelerating matter with engines could, you might think, eventually push it to the speed of light; but as this limit is approached, the energy supplied goes into mass rather than speed, in line with E = mc².
Therefore, to suggest anything could go faster than this
would surely be wrong? In actual fact, it may be possible to
travel faster than light whilst also obeying the laws of
physics thanks to a little known thing called dark energy.
Very little is known about what dark energy is; however, we can observe its effects, which are responsible
for speeding up the expansion of the universe. It is believed
that dark energy does this by bending space itself.
Therefore, if we could find a way to harness the power of
dark energy, it would be possible to develop a form of dark-energy bubble to surround a spacecraft. This could be used
to stretch and compress space making space the thing that
moves as well as the spacecraft. This would allow for
speeds many times the speed of light without defying the
laws of physics as the speed of the craft within the bubble
would not be faster than light. Again, this concept has the
issue of power requirements to run the warp drive but with
technology such as fusion reactors, this could be overcome.
However, it is unlikely we will see this very soon: so little is known about dark energy that scientists would have no idea how even to approach the idea of using it.


Saad Khan (Year 12)


_______________________________________________________

GFP: The Shining Light of Biomedical Research


Fluorescence has been well known since the 16th century, and its applications have been developed ever since. Uses of it are everywhere: from the brazen neon lighting in an off-licence's window, to a mark of authenticity in a £50 note, to the ever-amusing glow sticks at my nightly raves. In physical terms, fluorescence is the emission of a photon of slightly longer wavelength (and thus of lower energy) than the exciting light. This may seem like a run-of-the-mill principle of physics. In biological terms, however, fluorescence may hold the answer to seeing research in the life sciences in a whole different light.
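The wavelength-energy link is the standard photon relation $E = hc/\lambda$. Taking blue excitation light at roughly 470 nm (an illustrative figure) against GFP's 509 nm green emission, the absorbed photon carries about $4.2 \times 10^{-19}$ J and the emitted one about $3.9 \times 10^{-19}$ J; the small difference is dissipated inside the molecule, which is why the emitted light is always redder than the exciting light.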

Green Fluorescent Protein

Green Fluorescent Protein (GFP) is a beautiful molecule (Fig. 1) which is unlocking the problem of seeing into living cells. In fact, the 2008 Nobel Prize for Chemistry was awarded to Martin Chalfie, Osamu Shimomura and Roger Y. Tsien for the discovery and development of the green fluorescent protein. It was originally discovered and purified by Shimomura in 1962 at Friday Harbor Labs, Washington, from the jellyfish A. victoria (Fig. 2). In the outer rim of the jellyfish there are actually two fluorescent proteins: GFP and aequorin. Aequorin emits blue light due to a reaction with Ca2+ ions, and this blue light is absorbed by the GFP to produce a bright green light at 509 nm. This explains why A. victoria has a much greener tinge to it.

Figure 1 - 3D representations of GFP. The image on the right highlights the chromophore in the beta barrel

Figure 2 - Aequorea victoria. Natural bioluminescence: the original wtGFP was extracted from this species.

Comprised of 238 amino acid residues, the original GFP (now known as wild-type GFP, or wtGFP) has an intricate yet practical structure. On the outside there is a β-barrel structure composed of 11 β-pleated sheets. On the inside lies a hidden treasure: located in the centre of the barrel is the fluorophore (the part which fluoresces). The amino acid residues Ser65-Tyr66-Gly67 undergo redox cyclisation reactions to create the fluorophore. No cofactors or enzymes are needed for these reactions; they are even self-catalysed by the surrounding molecule. Only molecular oxygen (O2) is needed.
However, it is also important to point out that the inward-facing R-groups of the β-barrel influence the chromophore/fluorophore. Characteristics such as colour, intensity and photostability are affected. This has led researchers to numerous mutations and derivatives of
wtGFP. For instance, enhanced GFP (or eGFP) can fold up
more efficiently. A vast range of colours have also been
developed by random or directed mutagenesis. An example
includes the very-inventively-named Yellow Fluorescent
Protein (YFP), which is quite surprisingly yellow. In this
mutant, the alanine at position 206 was replaced with a lysine, and this makes the light emitted by the
fluorophore a different wavelength. Other examples
include BFP (blue), CFP (cyan) and the related mRFP1
(red).

GFP as a Reporter of Gene Expression

The position of where a protein is in a cell may imply vital information about the function of that protein. For
example, a protein exclusively found in the nucleus may
tell us that it is used for transcription, DNA replication or
chromatin condensation. This can be done using GFP
where it acts as a tag for the expressed proteins.
The process is easy-peasy (Fig. 3). Just plonk the DNA sequence of GFP between the gene promoter region and the DNA of the gene you want tagged, so that when that whole section of DNA is transcribed and translated, you create a fusion protein including the fluorescent GFP. This may seem too simple to be true, because it is. There were some extra difficulties to address to make this possible.

Firstly, the DNA coding sequence for GFP had to be discovered; this was done by Douglas Prasher in 1992. Second, the process of genetic engineering has to be dealt with. For this, specific enzymes called restriction endonucleases and DNA ligases are used to cut up and rejoin the DNA strand respectively. However, the specific endonuclease or DNA ligase to use depends on the DNA base sequence of the section concerned. Finally, the major problem to be overcome is the fact that the wtGFP gene (like all other genes) ends in a STOP triplet code, so as to release the protein from translation. This has to be removed from the original sequence to make one continuous fusion protein with GFP tagged onto it.

This recombinant DNA technology can be used to report the expression of the GLR1 gene. The GLR1 protein is a glutamate receptor in nervous tissue and can be seen (Fig. 4) as a GLR1-GFP fusion protein in C. elegans.
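As a toy illustration of the construct just described, the assembly amounts to string surgery on DNA. Everything below (the sequences, the names) is an invented placeholder for illustration, not real GFP or GLR1 data:

# Toy sketch of a reporter construct: promoter + GFP (stop codon
# removed) + gene of interest, so one continuous fusion protein is
# translated. All sequences are invented placeholders.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def strip_stop(seq):
    """Drop a trailing stop codon so translation reads through."""
    return seq[:-3] if seq[-3:] in STOP_CODONS else seq

promoter = "TATAAT"            # placeholder promoter
gfp      = "ATGAGTAAAGGATAA"   # placeholder GFP gene, ends in TAA
gene     = "GCTACCGGTTAA"      # placeholder gene to be tagged

construct = promoter + strip_stop(gfp) + gene
print(construct)  # TATAATATGAGTAAAGGAGCTACCGGTTAA

The real difficulty, as the article notes, is doing this with restriction endonucleases and ligases on actual molecules rather than on strings.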

Figure 4 - Photograph of a nematode worm (C. elegans) with the fluorescent GLR1-GFP fusion protein in its nervous system

This technology can also be used to test and prove hypotheses. In 2012, cardiologist Rina J. Kara noticed that pregnant women were the least affected by heart attacks. She hypothesised that when a pregnant woman suffers a heart attack, stem cells in the foetal tissue of the embryo travel through the placenta to the woman's heart in order to make new heart (cardiac) cells and improve the state of the heart. She tested her idea using GFP: she mated a normal mouse with a green fluorescent mouse (a mouse with all of its cells containing GFP-tagged proteins), resulting in a pregnant normal mouse carrying a GFP embryo. She then induced an ischaemic heart attack in this mouse. On examination, Kara's research team observed GFP-positive cells in the pregnant mouse's heart. To prove that this wasn't coincidence, a control mouse was used: this mouse was also pregnant with a GFP embryo, but was not induced into having a heart attack. No GFP-positive cells were found in the control mouse's heart.

Surgery by numbers?

GFP derivatives have also been extensively used in mammalian, including human, cells. This is relatively harmless, as GFP and its derivatives have low levels of phototoxicity. In the past, GFP has been used to label and track cancer cells; this is vital in research into metastatic cancer, where pieces of a more developed cancer may travel into the lymph nodes and develop somewhere else. A GFP derivative called pHluorin has been tagged to the protein synaptobrevin, and this has been used to visualise synaptic activity across neurones. In a similar vein, fluorescent proteins have been used to help us see the connections and wiring in the brain. This project used many different colourful GFP derivatives, and hence was called BrainBow (Fig. 5).


Figure 5 - A Cross-section of the 'BrainBow'


More recently, twin sisters Aneeta and Ameeta Kumar,
with the help of the Nuffield Foundation, have developed a
way of finding tumours. This technique exploits the fact
that the outside of a cancer cell is quite acidic (low pH and
a high concentration of H+ ions) and it also used the
molecule pHLIP, which burrows into a cell membrane under
acidic conditions. Fluorescent dyes, such as GFP, were
used to tag onto the pHLIP so that the tumour could be
seen under a fluorescent microscope. The preliminary pilot
results seem promising.
However, the most revolutionary aspect to GFP in human
cells is in the field of surgery. Here, GFP has made it
possible to see where cancerous tissue is at a molecular
level. This is mainly thanks to Roger Y. Tsien and his
development of a smart molecule (Fig. 6). This molecule
includes a polycationic strand (blue) attached to fluorescent
GFP (green). The polycationic strand would stick to
everything in the body and hence everything would be
green. Therefore, a polyanionic strand (red) is also present
to neutralise the polycation. When neutral, the molecule
cannot stick to tissue. The charged sections are joined
together by a cleavable section (yellow). This cleavable
section can only be cut by the correct molecular scissors
which is only present in tumours. This in turn means that
only cancer cells can make the molecule sticky and thus
can be used as an indicator.

Figure 6 - Molecular model of the smart molecule from Roger Y. Tsien

The beauty of this fluorescence is that, when blue light is shone on it, the green glow can radiate through translucent tissue to flag up the cancer. Furthermore, the vast variety of GFP derivatives means that this molecule has great potential. For example, it can be used to differentiate between nervous and cancer tissue. This can be done using different dye colours and different cleavable areas, which require different molecular scissors. This means that surgeons with this technology can reduce the probability of paralysis in cancer surgery.

Transgenic Fish

GFP also has a helping power for the environment, specifically regarding pollutants in lakes and rivers. Zebrafish are a species of fish which are naturally quite translucent. There is now the ability to genetically modify these animals so that they can indicate to us the levels of pollutants in water in nature.

The recombinant DNA technology discussed earlier is used. However, this method exploits the fact that more fusion proteins are created thanks to a promoter which can up-regulate transcription. This up-regulation of the promoter is due to the presence of heavy metal cations (e.g. cadmium, mercury, zinc) or polycyclic hydrocarbons. These are all pollutants. As a result, the zebrafish become green in areas with these pollutants.
A novelty area of transgenic animals for pets has sprung
out of this. GloFish, an American company, now sells
zebrafish of many fluorescent colours (Fig. 7).

Figure 7 - set of GloFish

Conclusion
In conclusion, there are many and varied applications of GFP and its derivatives. I think it will also help in diverse areas
such as medicine and conservation. Overall, I think it is
fair to say that GFP is lighting up the field of biomedical
research.

Eamon Hassan (Year 12)

_______________________________________________________

Higher Projects
The Higher Project allows GCSE students to plan, manage,
realise and then review a topic in depth. This year,
students have researched a variety of subjects, and have
fully embraced the opportunity to produce an extended
essay on anything that interested them most. A selection of
some of the best pieces of work has been published in this
journal.
_______________________________________________________

How did ammonite faunas change during the British Albian?


Introduction
Project Overview
In this project I shall address the question 'How did ammonite faunas change during the British Albian?', focusing on Tethyan species, the faunal change of the cristatum Zone and a brief summary of the derived fauna of the Cambridge Greensand.
Why I chose this project title
I chose this title because I have a passionate interest in
this field combined with practical experience collecting in
Albian deposits. I also feel that I have accumulated a
significant amount of knowledge on the subject over
several years I have built a comprehensive collection and
won two competitions, both with projects involving this
time period. The project may also be relevant to my future
career, as it would help me practice writing in a format
that is suitable for publication in scientific journals.
Questions to be addressed
I will investigate several areas in this project. The main
focus will be on how ammonites evolved during the British
Albian, including analysis of different transitions. I will
also investigate migrant ammonites which occasionally
turn up in British sediments of this age, and the nature of the Cambridge Greensand. My main methodology will be to read many papers and books, as well as websites, along with primary research such as comparison between specimens in my collection and museum/private collections. Much of this would involve examining small features in the shell across a series of specimens ranging from the bottom of the formation to the top, providing a description of how certain features changed over time. Perhaps the two most important features to describe are the venter and the ribbing. The venter is the area around the edge of the ammonite shell. It can show a keel, a depression or a groove, the keel being a ridge in the middle of the venter extending around the whole shell. Venters of certain species changed over time; for example, the venter of Euhoplites gradually changes from a depression to a groove. The ribbing is important as some species develop or lose lautiform ribbing over time. Lautiform ribbing is where the ribs form platy protrusions (clavi) at the venter. It is a key diagnostic feature of many Albian ammonites. It can therefore be seen that both these features play a vital role in describing the change occurring in ammonites during the Albian.

Research Overview

Overview

Most of my research has been conducted not only over the summer, but for the past year or two. My sources include
primary sources (my own collecting, visits to museums and
talking to people) and secondary sources (books,
monographs, papers and websites).
Primary sources
The primary sources of research I used in my investigation
were mostly gained from my experience working with these
ammonites. During this summer I did not get an
opportunity to return to Folkestone to collect more
material, as I was mainly collecting ammonites from the
Glauconitic Chalk of the Isle of Wight on holiday. However, I
looked over all the material I have collected in the past,
examining and photographing the roughly 50 species of
Albian ammonite I have in my collection. I drew on visits I
have undertaken to the Albian ammonite collections at the
Natural History Museum and the Sedgwick Museum, as I
was able to photograph lots of the specimens in the
collection. Visits to museum collections to see ammonites
are completely unbiased representations of the fauna, but
may be unsorted and have wrong labelling. However, this
was not a major problem as most times I could tell what a
specimen was even if an old name was used on the label. I
also looked back on various conversations I have had with
workers in the field. These include professionals I have
talked to at museums such as Dr Hugh Owen.
Conversations with professionals were obviously very
useful, but may have been biased depending on their point
of view on particular aspects of the fauna. This is
especially true as there have been many disagreements
between rival workers in the field in the past. I also used
information I have gained from fellow collectors of
Folkestone ammonites who I know and have either spoken
to in person or been in correspondence with through email.
These include Philip Hadland, Ian Clark, Fred Clouter and
Steve Friedrich. Conversations with collectors have
generally been useful, especially in seeing the specimens that they have, but it is worth noting that most do not keep up
with recent revisions (due to the unfortunate circumstance
that papers and monographs are often out of the price
range for individuals), and therefore their knowledge may
be out of date. I would say that, since my primary sources consisted mainly of my own experience and collecting, they are fairly reliable.
Secondary sources
The secondary sources I used consisted of papers,
monographs, guides and websites. A full list of these is
given in the bibliography. A guide that was of particular
use was Fossils of the Gault Clay (2010) which is a
publication with chapters by different specialists in the
field. It was especially useful in giving a good description of
most of the zonal ammonites, but did not take into account
the recent nomenclatural changes as it was published
before these took place. Another important reference was A
monograph on the Ammonoidea of the Lower Greensand,
published by R. Casey over a number of years. Again, this does not take into account the recent changes, but gives a very
in-depth description of all the relevant Lower Albian
species. The four recent papers revising the Albian Ammonoidea by Owen and Cooper were probably the most useful and up-to-date sources I used, but I would criticise their inaccessibility to the general public. The websites I used were mostly those written by collectors showing off their collections. These were useful in that they enabled me to compare and examine pictures of other specimens, but identifications would often be wrong. The most useful
website I used was probably www.gaultammonite.com, a
site detailing the collection of Jim Craig, which is now in
the Natural History Museum. This contains many pictures
of most of the common ammonites and quite a few rare
ones, and also has guides to the ammonites and
stratigraphy. It uses the old names, but this is not a
problem.
Conclusion
I feel my research has been successful, covering all the topics I was investigating and providing a balanced view of the subject.

Discussion
The Albian age sediments of Britain preserve one of the
most diverse and beautiful ammonite faunas available to
geologists and palaeontologists to study. In particular,
ammonites are used by biostratigraphers to name intervals
of rock. Subtle changes over time enable most Albian rock
horizons to be identified solely from the ammonites
preserved within them.
What are ammonites and what is the Albian?
Ammonites were coiled cephalopods that went extinct at the K-T event - the same disaster which killed the dinosaurs. They first evolved during the Devonian, but were not especially abundant until the beginning of the Jurassic period. After this, they became very common and are
used extensively by scientists studying both their
morphology and the rocks containing them.
The ammonite lifestyle has been a subject of much debate.
Some were evidently good swimmers, as their sleek shells
and razor-sharp keels show. Others were thick and spiny - it is hard to imagine them slipping around in the water column hunting. Instead, it is probable that they lived on
or just above the seabed, drifting slowly and grabbing prey
with their tentacles.
The Albian was a stage of the Cretaceous period that lasted
around 11 million years, roughly 100 million years ago.
The best preserved sequence of the Albian in Britain is the
Gault Clay, which outcrops in the south of England
especially at Folkestone, Kent, although numerous pits and
quarries also provided temporary exposures of this
formation. The Gault is divided into both bed numbers and
ammonite Zones and Subzones, and both are referred to
throughout the text.
An important thing to note is the preservation of Gault
ammonites in particular. Fossil specimens usually only represent the middle of the ammonite. A typical Gault ammonite may consist of a chambered portion only a couple of inches across, whereas the actual ammonite may have been several inches in diameter when alive. This is because usually only the centre of the ammonite was mineralised and replaced by either phosphate or pyrite. The outer whorls were left unfilled and were crushed in the clay. It is therefore difficult to study the ornamentation of the outer whorls or even to compare sizes. As a result, much of the work on Gault ammonites is based on what is the centre portion of the shell.
A brief history of Albian research
The most significant Albian deposit in Britain, the Gault
Clay at Folkestone, has long been known to both
professional and amateur geologists, for both the beauty
and diversity of the preserved fossils it contains. In the
past, before the construction of sea defences, massive landslides along The Warren caused the clay to be pushed up on the beach and exposed - many of the historically important specimens in museums were collected from these 'reefs'. Early attempts to describe the stratigraphy of the Gault were made in the 19th century in various papers by De Rance, Price and Jukes-Browne. It is to Jukes-Browne and Hill (1900) that we owe the present division of the Gault into 13 beds. This has been further refined by
various workers in the field, notably R. Casey and H.G.
Owen.
The ammonites of the Albian, on which this project is
focused, were made the subject of a monograph by L.F.
Spath (1923-43), who also refined the stratigraphy. This is currently the most comprehensive work on these ammonites; however, it must be treated with caution. Spath's coverage was incomplete and many names have been changed since the monograph's publication. Recent nomenclatural changes to Albian ammonites have been made in a set of four papers by Cooper and Owen (2011-13).

How did they change?


Many different ammonite lineages existed during the
British Albian and all showed significant change over the
11 million years represented. What is interesting is that
until recently38 it was not realised that the former Hoplitidae actually consisted of several very different ammonite lineages - the Schloenbachiidae, the Placenticeratidae and the true Hoplitidae. Such was the extent of convergent evolution and similarity of shell features that these were all once considered to be closely related - even species that are now placed in separate families were once included in the same genus, e.g. the former Anahoplites.
However, several clear progressions can now be traced that
show exactly how these ammonites evolved39.
Hoplitidae
The origin of the Hoplitidae as a whole is thought to lie in
Uhligella,40 a form that seems to link the smoother
Desmoceratidae with the more ornamented species. This
genus is only rarely found in the English Albian (known from isolated examples from the mammillatum nodule bed and from Bed II of the Gault) but it is more common in the Tethyan realm of the Mediterranean41. Uhligella itself gives rise to members of the Sonneratiidae, which evolve into the earliest Hoplitids such as Pseudosonneratia and Destombesites.
Isohoplites is the ancestor to Hoplites and Amedroites, two
typical forms of the dentatus Subzone. Transitions towards
Proeuhoplites are seen in Lautihoplites, which is
descended from Hoplites. Proeuhoplites has a sulcate
venter which later deepens and forms a channel as it
evolves into Euhoplites of the lautus Subzone. A large
variety of species are present, from the relatively smoothly
ribbed E. lautus to the strongly ornamented E. opalinus
and E. proboscideus. Forms of the cristatum Zone include
the very strongly ribbed E. armatus and the smoother E.
ochetonotus.

In the latest Upper Albian, Euhoplites gives rise to a number of different genera, some of which persist into the Cenomanian, such as Hyphoplites. Throughout the entire progression of species, there is a general trend towards lautiform ribbing.

38 Cooper and Owen 2011
39 Cooper and Owen 2011, 2013
40 Casey 1949, Cooper and Owen 2013
41 Casey 1949

Fig 1: Evolutionary progressions of Albian Hoplitidae. Taken from Cooper and Owen (2011).
Placenticeratidae
Albian Placenticeratidae are primitive forms and consist of
mainly derivatives of Anahoplites. This genus, formerly
included in Hoplitidae, gives rise first to Neanahoplites of the daviesi Subzone and then, in the Upper Albian, to genera such as Semenoviceras (known in Britain from only a single specimen)42. There is a general trend towards greater ornamentation in this progression. Anahoplites also evolves into Euhoplitoides43 (formerly included in Euhoplites) through a different progression. This is a common ammonite in the orbignyi Subzone, with weak ornamentation and a sulcate venter. Another offshoot is the large Hengestites, a virtually smooth, involute ammonite with a tabulate venter in the adult.

42 Cooper and Owen 2011
43 Cooper and Owen 2011

Fig 2: Evolutionary progressions of Albian Placenticeratidae. Taken from Cooper and Owen (2011).

Fig 3: Evolutionary progressions of Albian Schloenbachiidae. Taken from Cooper and Owen (2011).
Schloenbachiidae
The earliest Albian Schloenbachiid is Paranahoplites, a
large, compressed and strongly ribbed ammonite which
gives its name to the intermedius Subzone. A progression through Pseudhoplites, Gazdaganites and Epihoplites can be traced, leading to the inflated, lautiform-ribbed Procallihoplites.
ammonite genera leading into the Cenomanian, including
Schloenbachia. Pseudhoplites is also the ancestor to
Dimorphoplites, a strongly ribbed form with prominent
ventrolateral clavi, and Metaclavites. Lautiform ribbing is present on some specimens of Pseudhoplites, showing the transition to the lautiform ribbing of Dimorphoplites. Throughout this progression there are
changes in strength and form of ribbing, as well as whorl
thickness.

Acanthocerataceae
The Upper Albian cristatum Zone shows an influx of other
ammonite genera that are not typical of the Middle Albian.
These are members of the superfamily Acanthocerataceae,
Tethyan forms which differ from the indigenous genera by
having a sharp keel running down their venter. These
forms are very rare in the Middle Albian except in the lyelli and subdelaruei Subzones44, where a more general incursion of these forms is present. However, these were only short-term migrations of a few species. In the cristatum Zone, the main genera are Dipoloceras and early Hysteroceras. Hysteroceras gradually develops stronger, more evenly spaced ribbing and a much weaker keel into the orbignyi Subzone, where it is very common and gives its
name to the Subzone. Strongly ribbed forms such as
Hysteroceras varicosum and Hysteroceras bucklandi are
present in the binum Subzone. Transitions to Mortoniceras
are shown through the development of mid-flank tubercles
on the ribs, a feature typical of this genus45. Such an
introduction of foreign ammonite forms is thought to be
due to changing sea conditions at the time, allowing deep
water species from the Tethys to enter the shallower
Hoplitid realm.46 The Upper Gault sea was much deeper than the Lower Gault, shown by rapid sedimentation with most fossils crushed flat47.

44 Young et al. 2010
45 Young et al. 2010
46 Owen 1971

Exotic forms

In addition to the typical Albian fauna described above, the British Albian was host to a variety of migrants that are far more common in other parts of the world. These short-term penetrations into the Gault sea include members of the Leiostraca - smooth, relatively unornamented ammonites - such as Beudanticeras, which is extremely
common in the Lower Albian mammillatum nodule bed and
less so in the Upper Gault. Falciferella milbournei, a tiny
ammonite with falcate ribbing, is present in huge numbers
in the intermedius Subzone, and evidently represents a
short term mass migration of this species48. Other
ammonites are known only from isolated individuals, and
probably were drifted shells or simply rare occurrences for
the Hoplitid realm. These include Pictetia and
Oxytropidoceras in the dentatus Zone, Uhligella,
Anapuzosia, Desmoceras, Eubrancoeras and Tetragonites
in the intermedius Subzone, Engonoceras in the niobe
Subzone, Gastroplites and Hypophylloceras in the
cristatum Zone49, and Rhytidoceras in the inflatum Zone50.
These species did not affect ammonite evolution in the
British Albian as they were only isolated individuals. Most
are immigrants from the Tethys, but Gastroplites is unique in coming from the Boreal or Arctic faunal province.
There are several other specimens of exotic ammonites
held in private collections that await description, such as
potential Arcthoplites,51 a strange Engonoceratid (possibly Metengonoceras or Parengonoceras)52 and a very unusual
ammonite resembling an Eopachydiscus or Lewesiceras
from the cristatum Subzone53.

Heteromorphs
Throughout the Albian, heteromorph ammonites coexisted
with those described above. These were irregularly coiled
forms, with strange shell shapes such as 'U's and open spirals, as opposed to the tightly coiled ammonites most are familiar with. Heteromorphs show little change throughout the Albian and were of very widespread occurrence, probably drifting along in the ocean currents across the seas54. In the Lower Albian,
Protanisoceras is most common. This persists into the
Lower Gault but becomes extinct in the intermedius
Subzone, having been replaced by the larger and more
common Hamites. Hamites, species of which are very
common, is present throughout the whole of the Albian but
becomes much less common above Bed X of the Gault. In
the highest part of the Albian, large tuberculated genera such as Idiohamites and Anisoceras are the dominant species.

47 Young et al. 2010
48 Casey 1954, Young et al. 2010
49 Casey 1966
50 Recent discovery by the author, confirmed by H. G. Owen, unpublished
51 S. Friedrich pers. comm.
52 S. Friedrich pers. comm.
53 P. Green pers. comm.
54 Young et al. 2010

Why did they change?
To understand why ammonites changed during the Albian it is essential to examine their mode of life. Those that lived different modes of life would have changed in different ways.

Throughout the Albian, there were many changes in the shell structure of ammonites, as described previously. The
development of coarser ribbing seen in several lineages I
would interpret as a strengthening feature. We know that
some at least must have lived near the sea floor as they
bear marks of attack by crustaceans - crustaceans living high above the sea floor during the Albian would probably not have had strong enough pincers to inflict such damage. This would mean these ammonites would be at
risk from benthic fauna such as crabs, lobsters and indeed
other cephalopods.55 As ammonite shells are rather thin,
developing ridges and folds would have a huge
strengthening effect. This could also be true of lautiform ribbing - joining several ribs at the top of the shell would surely have added considerable strength to the ventral
area.
As well as strengthening for defence against predators, the
shell could have developed stronger ribbing in response to
differing conditions, such as deeper water for example. The
complex sutures of ammonites are also thought to be an adaptation to deeper water, enabling stronger bonds between the chambers of the phragmocone.56 An interesting
comparison can be made with the bivalve shell
Actinoceramus. In the Middle Albian, members of this
genus are fairly smooth, with only faint concentric growth
lines (A. concentricus). During the cristatum Subzone, a
period of much disturbance and turbulence in these seas at
the junction of the Middle and Upper Albian, the genus
develops much stronger longitudinal ridges (A. sulcatus).
From the orbignyi Subzone onwards, the genus loses the ridges in the once again calmer conditions.57 With ammonites, the process is far more complex, but I would hypothesise that ribbing was a response to similar situations. Perhaps more solid conclusions could be drawn following more in-depth research and comparison with the lifestyles of modern molluscs.
Many ammonites, particularly coarse forms of Euhoplites
(e.g. E. armatus) evolved large spines. These would have
had the effect of deterring or even fending off predators.
Their defensive purpose is evidenced by the fact that they
are sealed off a short way up the spine, meaning that if a
spine broke off there would not be a puncture in the shell
that would greatly affect the buoyancy of the animal
(especially if a spine on the phragmocone broke). These are preserved on ammonite fossils as bullae.58

55 Monks 2000s
56 Monks and Palmer 2002
57 Owen 1971
58 Monks and Palmer 2002

Conclusion
To conclude, ammonites changed for a variety of reasons
ranging from defence to changing sea conditions.
Strengthening seems to be the driving force behind
evolution of the features described above, to provide these
animals with a better chance of survival.

Thomas Miller (Year 11)


_______________________________________________________

The British Economy: Why did it enter recession and how can the national debt and deficit be dealt with?

Introduction
Discussions in the House of Commons paint a very mixed and confusing picture about the nature of today's economy: the Conservative Party often blame their struggles to complete objectives on 'the mess that the last government left', a charge rebutted by the opposition benches, who believe the recession was caused not by their public spending agenda, but by the banking crisis of 2008. Moreover, resolutions to solve Britain's economic issues take the form of austerity measures from politicians on the right-wing or increased spending from those on the left - both of which spark fierce comments of disapproval from different parties. Consequently, this continuous war of rhetoric leaves many people unsure of which idea to support and often discourages the electorate from voting. I also feel that the political parties dabble too much in the past; the economy does not have time to stagnate whilst politicians argue about events from five years ago. Therefore, I have chosen to address this question in order to undertake an important, in-depth field of research and process this to form a clear and balanced personal conclusion concerning the decline of the economy and how to deal with it. Thus, I
can use the outcomes from my report to argue more confidently in economic debates and to prevent false or manipulated information from misleading the electorate into supporting a particular party.
To complete this task, I aim to use various resources (to avoid political bias) in order to gain statistics, opinions and facts on why the economy entered recession and the best method to cure its effects, to gauge the public's opinion on these matters and to analyse the political parties' responses to the question. As aforementioned, I can then make a secure judgement concerning where I stand on these matters and so highlight the political party I feel has the correct stance to tackle this issue.

Research Review
Research outline
My research involved the usage of many resources of multiple different types - over 25 in total. An argument may be made that this was excessive; I disagree: the range of sources gave me the opportunity to document a wide spectrum of opinions and ideas, therefore allowing me to produce a wholly inclusive and very detailed report that carefully considers all evidence in order to produce strong conclusions. Analysing a range of sources was crucial, as my study of the economy is heavily politics-based and so biased sources would weaken my conclusions. I conducted my research by investigating three sub-questions in turn and then creating a survey based upon my findings.
Evaluation of sources
What follows are my analyses of the main sources I used,
as well as any that could be deemed questionable:
Investing For Recovery - Charles Vintcent
In conjunction with the BBC's timeline, this book was used to explain how the subprime mortgaging process occurred, how that created the US/UK housing bubble and the resulting effects the crash in the housing market had on the global economy. As its author has over 25 years of experience as a private client stockbroker, I found his opinion to be useful and his statistics reliable, being sourced from the Office for National Statistics (ONS).
The BBC
The BBC was the most accessed source for my project, covering a range of queries. It provided a timeline of the financial crisis and how it developed into a recession - each critical stage with a link to a BBC report at the time; I also found statistics (such as the current unemployment rate), statements of UKIP's policies, statements of the budget 2013 and a clear explanation of the difference between debt and deficit. I had no concerns over bias (despite the fact that the BBC has been criticised for siding with the left, predominately by right-wing organisations such as the BNP), mainly because it is required to be impartial, but more importantly because I only used statistics/statements from the website, which cannot be swayed. There is also no concern over reliability, as the BBC works with professional bodies to produce its reports, e.g. the International Monetary Fund (IMF).
Wikipedia
I used this website for the following: recession statistics for the UK and the definitions of 'subprime lending' and 'revenue'. The reason this site is seldom preferred for usage is the fact that anyone can edit the information. During my research, I bore this in mind and it
is reflected in the fact that I made reference to it only three
times, in comparison, for example, to the BBC website (see
below). I believe the recession information is valid as it was
sourced from the ONS and the definitions were correct as
their meanings stated on Wikipedia matched the context
when found in my research.
Online dictionaries
The frequent use of economic jargon prompted me to define key terms and, as such, I researched them online. I primarily used The Economist's (TE) economic glossary - a fantastic resource: it provided clear explanations and links to jargon that it used in definitions. I also used other online dictionaries/sources if a term was not listed on TE's index, each proving to be suitably reliable as they were often economics-focussed.
The Times/The Sunday Times (TT/TST)
This centrist newspaper was an excellent source, given its record of being a popular and credible publication. I found statistics from it (such as Germany's growth rates) and expert opinions on party policies by Jill Sherman (the Whitehall editor) and David Smith (economics editor for TST for over 20 years). I gained good information from the source, but questioned its bias given that TT supported TCP in the 2010 general election. I was, however, surprised to discover that articles did not usually paint the right-wing of British politics in a wholly positive light, suggesting less bias than I had expected.
The Guardian (TG)
TG was accessed in order to gain left-wing political opinion concerning the budget 2013 and also to discover the response of Ed Balls to his acceptance of TCP's planned 2015-16 spending agenda. Again, as it is a well-trusted source and the articles I read were by economists (for example, Nils Pratley), I found no concern with the validity of its statistics. Contrary to my concern about TT's political alignment affecting my research, I purposefully intended to use a left-wing source, as it would give me a socialist view of events.

The Daily Mail (TDM)
Used to provide a right-wing opinion on George Osborne's budget, the article ('George Osborne's Budget Day 2013: On the right road, but he needs a lot of luck') was a good source, showing a positive response to it, reflecting conservative views, as TDM is itself so politically aligned. However, I found a highly unlikely statistic: for each public sector job cut, six are made in the private sector. I could not find this replicated elsewhere, and TCP's own policy list states that two new jobs are created in the private sector for each lost in the public sector. I did not find this inconsistency damaging, as the other statistics from the site were directly from the budget 2013 and so proven correct. This issue, nonetheless, provokes the thought that perhaps the right-wing may be resorting to fabricated statistics in order to sway people to believe in austerity.
The Conservative and Unionist Party (TCP)
Researching TCP's position on the economic situation was not difficult: news of austerity features regularly, as do comments that Labour 'got it wrong'. Furthermore, their most up-to-date measures were present in George Osborne's budget. Therefore, the website provided me with little useful information - this is not a concern, as I have other resources which can compensate for this - and I do not question its validity, given it is derived directly from TCP's website.
The Liberal Democrats (TLDs)
Learning the TLDs' policies was not as simple: their minority presence in the government means that their attitudes and measures are not often publicised or made legislation. I consequently found a list of their policies on the TLDs' website; it did not include great detail (it stated their aims, yet typically no explanation of how these would be achieved), but was a reliable indicator of their stance (which was a vague mixture of spending and austerity), as the information was sourced from their own website.
The Labour Party (TLP)
When trying to research TLP's policies, I found that they had no current policy list published. As a result, my sources surrounding the general idea of their economic plan were the Touch Stone website and TG - both left-wing sources. I believe they were reliable, given that they would not lie about a party they broadly support (as TLP is now considered centre-left and verging upon the right, contrary to the more socialist-aligned party of previous years, having returned under the banner of New Labour with Tony Blair in 1997) and provided direct quotation of statements from Ed Balls (the Shadow Chancellor).

UK Independence Party (UKIP)
I also referenced policies from the website of UKIP - who favour deeper cuts and exiting the EU - in order to check that the BBC was correct in its analysis and to find out if they offered explanations. In their 2013 Local Elections manifesto, there was a list entitled 'How we will save your money', which I subsequently printed, ready to be offered as reasoning for their measures in my report. I found that the BBC was correct and had not fabricated anything, and the information was trustworthy - given it was produced directly by UKIP. Despite a disclaimer on their site stating that their old manifestos may be outdated, I was confident in the use of the 2013 edition, given that the Local Elections occurred only a few months ago.
The Green Party (TGP)
My research concerning TGP covered their 2010 general election manifesto, their 2013 local elections political broadcast from BBC iPlayer and a YouTube video clip of Caroline Lucas (MP for Brighton Pavilion) giving a speech to the People's Assembly Against Austerity. TGP's stance towards the deficit is to increase spending, as is typical of arguments given on the left-wing. Nonetheless, I discovered that their proposals of how to increase public spending (such as investing in the green economy's jobs) were followed by exact methods of how this would be financed, compounded with case studies and statistics to validate their position. Contrary to multiple right-wing sources that stigmatise increased spending as meaning more borrowing and more debt, my research discovered this is not true: TGP would instead finance its measures through, as an example, the full tackling of tax avoidance, with little borrowing to cover the rest of the costs, given the astronomically low interest rates currently. I was sceptical about some figures Ms. Lucas revealed in her speech - such as 'by 2020, if the government meets its energy targets, the UK renewables industry will be worth a cumulative £60bn to the economy' - however, cross-referencing revealed them to be true. Furthermore, I used the 2010 manifesto for some information: this is because TGP's stance on the economic crisis has not changed and so many of the policies have remained the same.
Survey
After conducting secondary research, I used the information I discovered to produce a survey. I conducted the survey on 03/09/2013 (from 11am to 12:15pm) on Eltham High Street, giving it to those aged 18+. Each part was deliberate: the date and time would be a peak opportunity to interview people (with a high street full of people), Eltham was the closest constituency to me that had the smallest gap between Labour and Conservative support, and those over 18 were eligible to vote, increasing the likelihood they would have formed opinions on my questions. The results were interesting, particularly the comments regarding why the economy fell into recession and the best ways to tackle the debt and deficit.
Conclusion
In summary, I am extremely pleased with the quality and depth of my research: I found it interesting and enlightening to fully immerse myself in the subject of the economy. It is documented in detail in my RRSs and SFB and will enable me to write an in-depth and very balanced report.

Discussion
Section 1: Why did the economy go into recession?
A recession is defined as two consecutive quarters of GDP decrease59; the UK entered recession in quarter 2 of 2008, exiting it in quarter 4 of 200960, yet profound effects still remain: high unemployment (7.7%)61, falling living standards and degrading public services. There are two groups regularly blamed for the recession's instigation: the (then) Labour government and the banking sector - both have their roles explained here.
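
To make the two-quarter definition concrete, here is a minimal sketch in Python (the quarterly growth figures are invented for illustration, not taken from the ONS):

    # Flag a recession: two (or more) consecutive quarters of GDP decrease.
    def in_recession(quarterly_growth):
        # quarterly_growth: quarter-on-quarter GDP changes, in %
        return any(a < 0 and b < 0
                   for a, b in zip(quarterly_growth, quarterly_growth[1:]))

    print(in_recession([0.4, -0.1, -0.9, 0.2]))  # True - two negative quarters
    print(in_recession([0.4, -0.1, 0.3, 0.2]))   # False - only one in a row
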
The Banks
The banks engaged in subprime (SP) mortgage lending (loaning to people who would have difficulty in repaying loans), and made mortgages easily obtainable and in large quantities. Thus, fraudulent claims and the large loans given out caused house prices to sky-rocket to unaffordably high levels; as people could not repay their loans, the banks began to post losses62 and plead to be bailed out (further discouraging people from investing their money in them, and encouraging them to remove money from their accounts). The first sign of this was on 08/02/2007, when HSBC revealed its large losses and its Hong Kong shares
fell by 2.4%63. Similar circumstances occurred with other
banks globally in a lead-up to the stock market crash of
2008:

02/04/2007: New Century Financial filed for bankruptcy after being forced to repurchase billions of dollars worth of bad loans - this was just the tip of the iceberg in the SP area.64

09/08/2007: BNP Paribas (a French bank) suspended £1.35bn of investment funds, claiming the market was too volatile, causing credit markets to freeze. In response (to calm fears of a credit crunch - when banks stop lending), the European Central Bank pumped £63bn into the Eurozone market.65

14/09/2007: Many customers withdrew money from Northern Rock, which struggled due to tightening money markets. The bank's share price fell by 32% after approaching the Bank of England (BoE) for help; it was nationalised on 22/02/2008.66

17/03/2008: American bank JP Morgan Chase acquired Bear Stearns after it became a victim of the SP mortgage debts. The US Federal Reserve (Fed) lent $30bn to the bank at a lowered interest rate of 3.25%, with its share price decreasing to $2. This event highlighted that the financial crisis was worsening67 (the FTSE 100 - a listing of the 100 largest companies on the London Stock Exchange, traditionally used as an indicator of the performance of major UK companies68 - dropped by 100 points).

07/09/2008: The Fed spent $25bn bailing out Freddie Mac and Fannie Mae; had they not been bailed out, shockwaves would have rattled global markets due to many foreign governments investing in their bonds (assets that bear interest).69

15/09/2008: Lehman Brothers went bankrupt due to the SP crisis70, causing the Dow Jones Industrial Average (a listing on the New York Stock Exchange of 30 publicly owned American companies71) to plummet by 504 points72.

17/09/2008: Lloyds TSB acquired HBOS for £12bn after its share price fell by 19%.73

03/10/2008: Despite strong governmental reluctance to bail out the banks whilst citizens struggled in the recession, the US passed a £394bn plan to bail out the banks.74

13/10/2008: The UK government bailed out RBS, Lloyds TSB and HBOS with £37bn of taxpayers' money, giving it 60% and 40% ownership of the banks respectively.75

22/04/2009: Seeking to remedy the chaotic situation through a spending regime, Alistair Darling (the then Chancellor of the Exchequer) revealed the UK's largest historical budget deficit, of £175bn.76

59 http://economics.about.com/od/economicsglossary/g/recession.htm
60 http://en.wikipedia.org/wiki/Economy_of_the_United_Kingdom
61 http://www.bbc.co.uk/news/10604117
62 Investing for Recovery - Charles Vintcent
63 http://news.bbc.co.uk/1/hi/business/6341205.stm
64 http://news.bbc.co.uk/1/hi/business/6519051.stm
65 http://news.bbc.co.uk/1/hi/business/6938425.stm

The banks' SP lending eventuated in them becoming, one by one, bankrupt and requiring governmental bailing-out, resulting in the recession by causing a credit crunch, heavily indebting countries (costs of bank bailouts for the UK: RBS, £33bn; Lloyds TSB, £5.5bn; HBOS, £11.5bn; Northern Rock, £26bn; Bradford and Bingley, £18bn)77 and causing the FTSE 100 to drop to 3530.70 points at its trough during the recession (02/03/2009), compared to
6959.80 points at its peak before the crash (01/10/2007)78.
Labour
It is also argued that Labour caused the crash and left Britain's economy in 'a mess' after leaving office. The 2009-10 budget deficit was 7% of GDP, yet it was approximately 4.5% in 200779. Rachel Reeves, Labour's Shadow Work and Pensions Secretary, notes that the public still blame Labour for the recession80, and scathing attacks are made consistently by the Conservatives, calling it 'the party that brought [Britain] to its knees'81; these statements would seem to imply that Labour caused the recession. It had an indirect effect, whereby it did not regulate the banks sufficiently to prevent SP lending, and so it may not be considered the direct cause of the recession - rather, an ineffective body that could have halted the banks' actions.
Conclusion
I conclude that the cause of the recession was the banking sector, as it engaged in unscrupulous lending to people it knew could not repay loans and, as explained, became bankrupt, so that the taxpayer - already burdened by increasing unemployment and poverty - had then to bail out the banks whilst excessive bonuses were being paid to senior staff members, highlighting an injustice82. Moreover, it was events concerning the banks that caused turmoil in the credit markets and stock markets; however, it can be argued that the former Labour government should have acted and regulated the banks more effectively in order to prevent SP lending from continuing. Nonetheless, it is not wholly Labour's fault, as this was a global recession, meaning that the economies of Greece, America, Spain, Germany etc. did not enter recession due to Labour's policies, but due to the banks' SP lending that was rampant across the world. I would also argue that Labour's initiatives to remedy the crisis were ineffective and that money should have been spent more wisely to ensure growth in the economy could be achieved; this has likely contributed to the interpretation that they 'ruined the economy' and to the fact that some people blame them for causing the recession.

66 http://news.bbc.co.uk/1/hi/business/6996136.stm
67 http://news.bbc.co.uk/1/hi/business/7299938.stm
68 https://www.share.com/shareholder/Q207/ftse100.pdf
69 http://news.bbc.co.uk/1/hi/business/7602992.stm
70 http://news.bbc.co.uk/1/hi/business/7615712.stm
71 http://en.wikipedia.org/wiki/Dow_Jones_Industrial_Average
72 http://money.cnn.com/2008/09/15/markets/markets_newyork2/
73 http://news.bbc.co.uk/1/hi/business/7622180.stm
74 http://news.bbc.co.uk/1/hi/business/7651060.stm
75 http://news.bbc.co.uk/1/hi/business/7666570.stm
76 http://news.bbc.co.uk/1/hi/uk_politics/8011321.stm
77 Investing for Recovery - Charles Vintcent
78 http://uk.finance.yahoo.com/q?s=^ftse
79 http://www.debtbombshell.com/britains-budget-deficit.htm
80 http://www.telegraph.co.uk/women/womens-politics/9872219/Rachel-Reeves-Of-course-the-public-still-blame-Labour-for-recession.html
81 Anna Soubry, BBC Question Time 24/01/2013: http://www.bbc.co.uk/iplayer/episode/b01q9rxv/Question_Time_24_01_2013/
82 http://marchthefury.wordpress.com/2011/02/01/the-people-speak-the-banks-caused-the-recession/

Section 2: How can the national debt and deficit be handled?
Having established the causes of the recession and
appreciating the fact that the UK's economy is now
growing83, there is still the issue of addressing
unemployment, the debt and the deficit. Firstly, these
terms need clear defining, as they are often confused:

The debt is the sum of money the government owes to lenders (from the UK and overseas) that it has borrowed, plus interest. In March 2013, the debt was £1.1628trn - £40,000 per household.
In order to clear this debt, the deficit must first be eliminated; this is the difference between the money the government receives in taxes and the amount it spends on public services and loan interest per year. It is currently £120bn, meaning that for every £81 earned by the government, it spends £100.84
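
As a rough sanity check, the quoted ratio and the £120bn deficit can be reconciled as follows (a sketch: the total spending figure is an assumption inferred from the ratio, not a number given in my sources):

    # Check that a £120bn deficit is consistent with "for every £81
    # earned, the government spends £100".
    spending = 632e9            # assumed annual public spending, £
    revenue = spending - 120e9  # revenue implied by a £120bn deficit
    print(f"Earned per £100 spent: £{100 * revenue / spending:.0f}")  # ~£81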

There are conflicting ideas as to how the deficit can be eliminated: austerity (cutting public expenditure to close the deficit) and increasing public spending (thus encouraging economic growth). Quantitative easing (QE) is another initiative that is used to stimulate growth by encouraging banks to lend - these concepts are explained here.
Quantitative Easing
Employing QE, with the permission of the Treasury, the Bank of England (BoE) creates more money (by printing it). The BoE spends this money on buying banks', investment firms' and pension funds' government bonds in order to increase spending. This is achieved by making the bonds more expensive (as they are bought by the BoE) and therefore less attractive for banks to buy: as a result, the money earned by the companies can be invested or loaned. As they become more enthusiastic about lending, interest rates should fall, causing economic growth. Once growth has been achieved, the bonds are sold and the money destroyed - meaning there is no extra money in the long term.85 This is a simple measure in theory and was employed by the UK and US in response to the credit crunch, yet it has not been successful, as the money from QE has been funnelled into saving the financial sector but has not caused economic growth.
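
The inverse relationship between bond prices and yields that drives this mechanism can be sketched numerically (the coupon and prices below are illustrative assumptions, not figures from my sources):

    # When the BoE buys bonds, demand pushes their price up, and the
    # effective yield (interest earned relative to price paid) falls.
    coupon = 5.0  # assumed annual payout on a £100-face-value bond
    for price in (100, 110, 120):  # rising demand pushes the price up
        print(f"Price £{price}: yield {100 * coupon / price:.1f}%")
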
However, a different approach to QE has been suggested: rather than using the money from QE for the financial sector, 'QE for the people' (QEP) should be implemented; the BoE has committed £375bn to QE, meaning each person in Britain would receive £6,000. In turn, this would stimulate economic growth, as people would be able to pay off outstanding debts, have more spending ability and support themselves adequately in times of falling living standards. The Treasury is in agreement with this concept; rather than transferring money from QE to banks (where it is not used), it could instead be used by the people.86
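
The £6,000 figure follows directly from dividing the QE commitment by the population (a quick check; the population figure is an assumption, roughly its 2013 value):

    # QE per person under QEP.
    qe_total = 375e9      # £375bn committed to QE
    population = 62.5e6   # assumed UK population, ~62-63 million in 2013
    print(f"QE per person: £{qe_total / population:,.0f}")  # £6,000
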
Austerity
The deficit-reducing technique employed by the current Conservative-Liberal Democrat coalition is austerity, typically adopted by the right-wing: cutting public expenditure in order to narrow the fiscal gap. In the short term, this creates unemployment until the private sector begins to grow, employing more people and generating more wealth - eventually closing the deficit. Supporters of austerity argue that they need to curb expenditure in areas (such as the NHS and welfare), stating that Labour's policies in these areas were disastrous and that borrowing must cease and fall, as public sector debt is rising by £43bn per year due to interest payments87. However, those against austerity argue that it has wrung countries (such as Greece) into stagnation and that the government should have increased public spending whilst interest rates were at 0% following the crash. Moreover, as with the current government's cuts, it has led to increased poverty levels due to people having welfare payments stopped or reduced and becoming unemployed88.
Spending
The left-wing usually implements Keynesian economics, arguing to increase public spending in order to stimulate economic growth and encourage spending. The Public and Commercial Services Union (PCS) states that austerity has created a negative cycle: the recession caused a rise in unemployment, this meant more out-of-work benefits claimants, thus higher welfare spending - and so cutting more public sector jobs to reduce overall public expenditure has furthered this effect. As such, PCS advocates that more jobs should be created (reducing out-of-work benefits and increasing tax revenues - these have been shown by Richard Murphy of Tax Research to recoup 92% of money spent on public sector jobs) through the green economy and house-building; that the £850bn worth of assets of the bailed-out banks should be used to yield a large income for financing spending (as they believe the banks caused the recession); and that there should be a clampdown on tax avoidance (doing everything you lawfully can in order to reduce your tax), tax evasion (paying less tax than you are legally obliged to) and uncollected tax - all together costing £120bn each year.
PCS oppose privatisation, noting that it degrades public services, results in poorer working conditions and can increase costs to the public. They would also see cuts in some areas, such as the abolition of Trident (saving £1.5bn/annum) and ceasing war efforts in Afghanistan (saving £2.6bn/annum).89

83 'Return of growth can't cover over deficit hole' - David Smith, The Sunday Times, 25/08/2013
84 http://www.bbc.co.uk/news/business-21846044
85 http://www.bbc.co.uk/news/business-15198789
86 http://www.newstatesman.com/2013/10/we-could-fix-our-economy-giving-every-man-woman-and-child-6000-cash
87 http://blogs.telegraph.co.uk/news/robertcolvile/100213475/austerity-isnt-discredited-the-truth-is-we-need-it-more-than-ever/
88 http://www.nybooks.com/articles/archives/2013/jul/11/how-austerity-has-failed/
89 http://www.pcs.org.uk/en/campaigns/campaign-resources/there-is-an-alternative-the-case-against-cuts-in-public-spending.cfm
90 http://www.newdeal75.org/whatwasit.html
Conclusion
From analysing the concepts and evidence, I feel that the
best way to handle the deficit is by having an economic
policy that encompasses a mixture of QEP, austerity and
spending. This may sound contradictory; however, I
support an economic plan primarily based on increased
spending in the sectors discussed above, yet cutting more
bureaucratic positions (such as the number of people in
governance) and taking sensible measures to increase the
amount of money people have, thus allowing them to spend
more and make the economy grow. Through doing this, the
fiscal gap can steadily close. I understand that austerity
would close this gap fastest, but it would have profoundly
negative effects on the people and economy if it was
rushed. Indeed, it is best to cut excessive pay and
unneeded positions, and invest in more jobs with this
money (and money gathered from low-interest borrowing,
ensuring tax is properly collected and using assets of
nationalised corporations to fund jobs).
Nonetheless, others may retort that spending more will increase the deficit; this is correct (if borrowing), yet incorrect (if exercising higher taxation or taking a longer-term outlook), as the sums spent on such initiatives would be more than recouped through fewer welfare dependencies and a more vibrant private sector (as increased spending will mean businesses can grow and employ more people). This is exemplified by the 'New Deal' approach offered by President Roosevelt in the 1930s90, which resulted in a fall in unemployment; in the UK's context, lower unemployment would mean less dependency on the state and greater tax contributions as a result. The initial fiscal deficit would increase; however, over time it would decrease as the private sector grows. Overall, the three ideas in this section have not had all their specific policies explained - rather, facts and figures that develop the ideas. In order to determine the best set of policies, section three looks at five UK political parties' stances on the economy, as these can contextualise them.
Section 3: Which political party has the best
economic policy to solve the issue?
It appears that after successive Conservative and Labour
governments, flawed economic policies have prevailed that
have created an economy which goes through boom and
bust periods, rather than sustainable growth. It is for this
reason that I am analysing the policies of the TCP, TLDs,
TLP, UKIP and TGP to appreciate a wider spectrum.
TCP

TCP are currently employing austerity, combined with tax cuts for lower earners, steady budget cuts to departments (of 1% per year for 2013-15, excluding schools and the NHS91) and investing in infrastructure. They have achieved growth in the UK economy and created 1.4m jobs in the private sector, as well as cutting the deficit by 1/3.92 One could say that they have achieved what was necessary; however, I do not think their policies have been effective. Despite growth, the fiscal deficit is still not decreasing as promised - it will stand at ~£120bn next year (meaning that public sector debt is still rising)93 - and their policies are having a positive effect on only a small percentage of the population: since the crash, the richest 1,000 citizens have increased their wealth by £190bn94, whilst the average working family has become £1,500 worse-off95. How, then, are these policies working if the population is not feeling the growth and prosperity, but more financial pressure?
TLDs
TLDs offer a more Keynesian approach to the economy, stating that they would create 1m green jobs, fairer taxes (a £600 tax cut for 24m people), a crackdown on tax avoidance, a tax on homes worth more than £2m and a benefits cap at £26,00096. These policies are good ideologically; however, there is insufficient explanation of how they would be funded and no specific detail given about what types of jobs will be created. I therefore believe TLDs do not have the fully correct approach, as they do not consider other areas, such as QE, QEP and utility nationalisation etc.
TLP
TLP have not yet released full details of their economic policy, but they have stated several things they will do: if they win the next general election, they will keep the Conservatives' 2015-16 spending agenda, cap the welfare budget97, implement a mansion tax (funnelling money from the highest 20% of earners to the poorest 20%98), emphasise capital spending (cash payments over a period of one year in order to acquire/improve the existing life of assets99) and cut VAT100, enabling the consumer to spend more money and instigate some economic growth. Once more, I find that the policies are similar to those of TCP and TLDs, and TLP's policies are less credible because they do not have a fully established set of policies, making it difficult to fully understand whether they support austerity, spending and/or QE/QEP.

91 http://www.bbc.co.uk/news/uk-politics-21851965
92 http://www.conservatives.com/Policy/Where_we_stand/Economy.aspx
93 'Return of growth can't cover over deficit hole' - David Smith, The Sunday Times, 25/08/2013
94 http://www.youtube.com/watch?v=QuggTKDAhE0 - Caroline Lucas speaking at the People's Assembly Against Austerity
95 http://www.labour.org.uk/cameron-worst-pm-for-living-standards-on-record,2013-08-06
96 http://www.libdems.org.uk/what_we_stand_for.aspx
97 http://blogs.channel4.com/gary-gibbon-on-politics/not-playing-balls-labour-will-stick-to-coalition-spending-plans/23098
98 http://www.theguardian.com/politics/2013/jun/26/ed-balls-tories-accepts-cuts-labour
99 http://en.wikipedia.org/wiki/Capital_expenditure
100 http://touchstoneblog.org.uk/2013/06/labours-emerging-economic-policy

UKIP
UKIP provides a clear set of policies; this is an improvement on what the main three parties offer. Nonetheless, I strongly disagree with many of them, such as a further £77bn of public cuts101: they have not specified where these would be targeted; however, I feel that current austerity has been damaging and that a further amount on that scale would create more social tensions. I also feel that their policy for a free market is irresponsible, given the actions of the banks that caused the recession; I do not think they can be trusted. Their economic stance contains many populist policies (for example, a 40% increase in defence spending, returning student grants, a flat tax rate of about 25%, banning immigration for 5 years and leaving the EU to save £53m per day in membership fees)102. Factually, these are flawed: the UK is withdrawing troops from war in the Middle East - increasing spending would be wasteful and unnecessary; immigrants contribute 37% more in taxes than they use in public services (in comparison to British-born people, who use 20% more in public services than they contribute in taxes103); and leaving the EU now would present a strong uncertainty to markets - Britain would lose trade tariff exemptions (thus increasing the prices of imports/exports, causing further job uncertainty and price increases) and the Confederation of British Industry has calculated that the EU is worth £62-78bn per year to the UK's economy104. Moreover, a senior Whitehall editor at TT has highlighted that UKIP's policies are financially unviable, presenting £120bn in uncosted pledges due to tax decreases and expenditure increases105.
TGP
TGP follows Keynesian economics: they seek to build a sustainable economy that is not focussed on growth, but capable of being supported by the confines of our single planet and able to meet the basic needs of all. Very impressively, the Greens provided detailed policies, explaining how they would be funded and justifying them with statistics and case studies. In short, they seek to narrow the rich/poor divide through fairer taxation (relieving lower earners of tax, and introducing several taxes such as the 'Robin Hood' tax, a 0.05% transaction tax), ending the profiteering culture in the banking sector and public utilities by renationalising them and setting up not-for-profit green investment banks, introducing a living wage (causing fewer people to require income support benefit), and strong investment in the green energy sector through the provision of construction, research and home-insulation jobs, as well as increasing state pensions and the welfare cap to ensure that families can live with dignity and good support106. These measures promote economic activity, as people will have more spending power; TGP would finance them through a variety of means, including more taxation, saving the £120bn lost to tax avoidance and evasion, using the assets of nationalised banks and QE money to invest in green jobs, and scrapping Trident, saving £100bn over the next 30 years etc.107 I would argue that TGP must include some more cuts, such as lowering the costs of the NHS and welfare; although, TGP's approach is to treat problems at their core, and so the costs of these budgets will steadily decrease over time as a result of their policies.

101 http://www.bbc.co.uk/news/uk-politics-22396690
102 http://www.ukip.org/issues/2013-01-25-10-55-7/local-2013
103 http://www.keithtaylormep.org.uk/wp-content/uploads/MythandFactflyer4.pdf
104 http://www.cbi.org.uk/media-centre/press-releases/2013/11/in-with-reform-our-out-with-no-influence-cbi-chief-makes-case-for-eu-membership/
105 http://www.thetimes.co.uk/tto/news/politics/article3751480.ece
106 http://greenparty.org.uk/assets/files/resources/Manifesto_web_file.pdf
107 http://www.youtube.com/watch?v=QuggTKDAhE0 - Caroline Lucas speaking at the People's Assembly Against Austerity
108 http://greenparty.org.uk/assets/files/resources/Manifesto_web_file.pdf
Conclusion
I believe TGP - unexpectedly - have the best economic policy to ensure the deficit will decrease, but also at a rate such that the people do not feel extreme economic pressures, public services are able to cope adequately and there is a fair system whereby society as a whole benefits from successes. There is also a focus on very specific policies and honesty about their popularity, lending them authenticity, and their explanations are accurate. People would, of course, disagree with me; for example, they may oppose TGP's policy to tax transactions, yet a 0.05% tax would not discourage corporations from trading (as it is such a minor amount) and could raise up to £12bn for investment into jobs108.
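
As a quick sense-check of that last claim (a sketch: the implied trading volume is derived from the quoted rate and revenue, not stated in my sources):

    # A 0.05% transaction tax raising £12bn implies this much
    # taxable trading volume per year.
    rate = 0.0005    # 0.05% Robin Hood tax
    revenue = 12e9   # £12bn potential proceeds
    print(f"Implied taxable volume: £{revenue / rate / 1e12:.0f}trn/year")  # £24trn
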
Section 4: What are the public's views concerning these questions?
On 03/09/2013, I conducted a survey on Eltham High Street from 11:00 to 12:15, asking the three questions (posed in this dissertation) to 30 members of the public, who were all over 18. The graphs of results and analyses are below:

[Bar chart: 'Q1) What/who do you think caused the recession?' - Cause vs. Number of people. The banks received the most responses (19); the remainder were split between Labour, Other, Rather not say and Don't know.]

From the data, it is clear that most people attribute the recession to the banks; however, some also blame the Labour government, frequently noting that 'it must have been their fault as they were in power at the time'. Only 3 people were unsure and 4 answered with other causes. Notably, one person said it happened 'naturally' - this was a surprising comment, yet I do not agree, as there has been a clear line of evidence to show the banks' role in troubling financial markets. Another claimed it was due to 'corrupt politicians' - I cannot judge whether this is true; however, it reflects the dissatisfaction that the electorate felt with the establishment during such a calamitous period.

[Bar chart: 'Q2) Which method(s) do you think should be used to tackle the UK's debt and deficit?' - Method vs. Number of people. Cutting public spending narrowly beat increasing public spending; Other received 15 responses, Don't know 4 and Rather not say 1.]

Of the options, it seems austerity is favoured over spending, but only by 2 more people. I can understand the logic behind simply needing to curb public spending if there is a financing gap, but this gap can be closed through the careful balance of several approaches, with spending at its heart. 4 people did not know and 1 wished not to answer, with 15 people suggesting their own policies: effective action on immigration, leaving the EU, stopping foreign aid and cutting the pay of and number of bureaucratic positions etc. The first three are akin to UKIP's stance, likely due to their rising profile in recent months.

[Bar chart: 'Q3) Which political party/parties do you think has/have the best policies to deal with the debt and deficit?' - Political Party vs. Number of people. TCP received 13 votes, TLP 7, TLDs 1 and Other 4; the remaining responses were split between UKIP, TGP, Rather not say and Don't know.]
I found the results of this question very surprising: despite TCP's unpopular measures of austerity, 13 people believed they are correct. Just over half that number, 7, thought that TLP was right (perhaps reflecting lingering concern over whether Labour's economic policy would be sensible, after the large budget deficit left in 2010), and TLDs received a single vote, likely due to their pledges not being kept, which has turned former LD supporters away from them. Given UKIP's recent strong media presence, I expected more people to vote for them; however, it may be that the electorate are not as well-informed on their economic stance as they are on UKIP's stance towards immigration and the EU. Further intriguing was the recognition of the Green Party and the confidence people felt in their economic policies (given that they are typically side-lined in debates, often branded as a single-issue party for the environment). The 4 who answered 'other' all stated that no party had the correct policies, highlighting a section of the electorate still disillusioned by politics.
Conclusion

In conclusion, the public blame the banks for the recession
(but also Labour to a lesser degree), are almost split in
support of austerity/spending (but no mention of QE or
QEP, perhaps because they are concepts people do not
understand) and there is a mix as to which political party
is correct, with the Conservatives topping the vote. At least 10% of people answered 'don't know' for each question, showing that part of the electorate do not understand these issues, and this could contribute to high levels of disillusionment with the political system and economy.

Main Conclusion
In answer to my question, I conclude that the British economy entered recession due to the unscrupulous lending activities of global banks, and that Labour poorly managed the remedy to the situation, resulting in a spiking deficit. To fully eliminate the debt and deficit, I support the removal of unnecessary bureaucratic and aristocratic positions (such as the monarchy), introducing QEP and increasing public spending, in order to ensure that economic growth is stimulated and that its positive effects are felt by the whole population and not only the richest in society. As such, I feel that TGP has a sufficiently robust economic policy to implement this effectively, although my opinions are not fully supported by the public, who also generally blame the banks but are mixed in opinion on what action to take and on which party can successfully eliminate the debt and deficit.

Rishil Patel (Year 11)


_______________________________________________________

Will the mathematical innovations of the future come from computers?

What is the Definition of Creativity?


Before looking at the ability of computers to exhibit creativity, a working definition of it is needed. The Oxford English Dictionary defines creativity as 'the use of imagination or original ideas to create something'109. However, imagination is defined as the ability to formulate new ideas. This could reasonably be applied to mathematical calculations, in which computers have already vastly surpassed humans: the K Computer in Japan can perform 10,000,000,000,000,000 calculations per second110, whereas the human brain can perform two111!
Margaret Boden, a professor of Psychology and Philosophy at the University of Sussex, rejects this definition112. She argues that creativity involves the expansion of a field of endeavour beyond its previous boundaries. Solving an equation using a set of rules is not creative, but inventing an entirely new method, as Isaac Newton did with infinitesimal calculus, is quite another matter.
For the purpose of this dissertation, it is Boden's definition which will be used. The purpose of a definition is to give a word a specific meaning, and the former definition is far too general, since it can be applied to almost anything. Also, Margaret Boden is one of the leading academics in the study of creativity, which gives her definition more authority than that found in the Oxford Dictionary, which was, presumably, devised by a layman in the field to capture everyday usage rather than to serve as a precise term for the study of creativity.

109 http://www.oxforddictionaries.com/definition/english/creativity accessed 3rd October 2013
110 http://www.telegraph.co.uk/technology/news/8586655/Japanese-supercomputer-K-is-worlds-fastest.html accessed 6th October 2013
111 http://www.ualberta.ca/~chrisw/howfast.html


Computers vs. The Brain


Before the invention of the digital computer, British mathematician Alan Turing had invented the concept of a Turing Machine113. A Turing Machine is a machine with an infinitely long tape feeding in input values. The machine performs calculations using these input values and feeds the results out as the tape passes through it.
In the real world, computers can't have an infinite amount of storage for input (and it is digital storage rather than a tape which is used). However, the principle remains: the computer reads input values, performs functions on them and then outputs them. Turing Machines often have extremely complicated functions to perform, but it is possible to encode any set of instructions in one using something called a Universal Turing Machine, which is able to interpret all necessary instructions (computer scientists have written the code for Universal Turing Machines; indeed, a digital computer may informally be considered to be one). A set of instructions is called an algorithm, and computers are therefore described as algorithmic.
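To make the idea concrete, here is a minimal Turing machine simulator in Python. This is my own illustrative sketch rather than anything from the sources above; the states, symbols and the little bit-flipping program are invented for the example:

```python
# A minimal one-tape Turing machine simulator (illustrative sketch).
# `transitions` maps (state, symbol) -> (new_state, symbol_to_write, head_move).

def run_turing_machine(tape, transitions, state="start", halt="halt"):
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells are blank
    head = 0
    while state != halt:
        symbol = cells.get(head, "_")                 # "_" is the blank symbol
        state, write, move = transitions[(state, symbol)]
        cells[head] = write                           # write, then move the head
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Toy program: sweep right, flipping 0 <-> 1, and halt at the first blank.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine("10110", flip_bits))  # prints 01001_
```

A Universal Turing Machine is then simply a machine whose transition table interprets descriptions of other machines fed in on the tape, which is why a digital computer can informally be regarded as one.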
Some mathematicians, such as Professor Sir Roger Penrose of Oxford University, have argued that computers cannot exhibit genuine creativity because the human brain is non-algorithmic; that is, that human thought involves processes which cannot be performed by a Turing Machine. This argument is usually based on mathematician Kurt Gödel's Incompleteness Theorem, which concerns mathematical truths that can be seen by mathematicians as obvious but cannot be formally proven using an algorithm114. Proponents of such a theory argue that the human brain must be more than just a computer, and the most influential theory regarding this idea is called Orch-OR (Orchestrated Objective Reduction), which claims that quantum physics affects the brain in non-computable ways (meaning ways involving calculations which could never be fully performed, for example computing the precise value of an irrational number such as pi)115. However, the mainstream scientific community has rejected this theory on the basis that it assumes that conclusions made by humans are infallible, and that it ignores the high temperatures in the brain, which would make quantum effects irrelevant. Most supporters of this idea from the 90s have since abandoned it in the light of new evidence about processes in the brain, and Professor of Physics Lawrence Krauss wrote a refutation claiming that Orch-OR is nonsense116. For this reason, it seems likely that a computer is capable of performing any function that the brain is, and even if any non-computational physics is involved, it could be harnessed by the computers of the future.

112 http://www.psych.utoronto.ca/users/reingold/courses/ai/cache/ai_creativity.html accessed 3rd July 2013
113 The Emperor's New Mind: Concerning Computers, Minds and the Laws of Physics by Roger Penrose. Chapter 2: The Turing Test
114 http://www.miskatonic.org/godel.html accessed 16th July 2013
115 Shadows of the Mind: In Search of the Missing Science of Consciousness, Roger Penrose, Published by Oxford University Press in 1994
116 http://scienceblogs.com/cortex/2007/03/15/quantum-consciousness/ accessed 6th October 2013



Some would argue that a computer could never be more intelligent than its maker. However, there is evidence to the contrary: the computer Deep Blue was designed by computer scientists, yet later defeated the world champion at a game of chess117. Intelligence can be categorized into two areas: crystallized intelligence and fluid intelligence118. Crystallized intelligence is a person's accumulated store of knowledge and information, which grows with learning and experience. Computers can already outperform humans in this area on account of their vast memory storage, which is what gave Deep Blue its advantage119. Fluid intelligence is the capacity to reason with that stored knowledge and apply it to new problems. In terms of computing, crystallized intelligence corresponds to memory storage and raw processing power, while fluid intelligence corresponds to the set of algorithms via which the computer utilises its memory and processing power. Since computers are already vastly ahead of the human brain in terms of crystallized intelligence, the main difficulty in programming a computer which can surpass humans in creativity is finding useful algorithms. The human brain clearly does so with some success, and so the first area to study is the neuroscience of creativity.

Creative Algorithms in the Brain


One of the most popular theories in neuroscience as to the processes behind creativity comes from Dr Charles Limb of the Johns Hopkins Medical Centre120. When studying the brains of musicians improvising on stage, he found unusual activity in the prefrontal cortex: the section associated with self-monitoring became less active, while the section associated with the generation of new ideas became more active. He suggests that creativity is the result of random ideas being generated and then quickly analysed and selected.
A similar idea was advocated by Professor Carl Sagan in his Pulitzer Prize-winning book The Dragons of Eden121. He suggests that creativity stems from the right hemisphere of the brain (which is generally associated with the production of new ideas), but that it is the left hemisphere which produces useful creative output by selecting good ideas.
117 http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/ accessed 2nd November 2013
118 Ackerman, Phillip L. (1996). "A theory of adult intellectual development: Process, personality, interests, and knowledge".


Modern neuroscientists often reject this left-right distinction, but propose similar neural networks that fulfil the same roles122. The exact criteria for good ideas vary from field to field. A good mathematical idea, for instance, would likely be general (i.e. apply to lots of problems, as opposed to being an ad hoc solution) and simple. Another criterion might be aesthetic appeal, exemplified by physicist Paul Dirac, who mentioned that it was his keen sense of beauty which allowed him to achieve his greatest mathematical insights123.
Aesthetics concern the appreciation of beauty, and the sense of beauty which humans possess124. Aesthetic appeal is a sort of gut feeling which cannot really be proven; a common expression is that there is 'no right or wrong answer' when dealing with art, which is heavily based on aesthetics. Although aesthetics are not universal, there are certain things which are often found aesthetically pleasing by different people in spite of varying cultures and environments. Philosopher Denis Dutton devised his six universal aesthetics to define these things which tend to invoke aesthetic pleasure in the majority of people125. Biology is highly utilitarian, since for a trait to evolve it must generally serve some purpose. It therefore seems highly plausible that aesthetics, which do seem to have some level of universality, are an evolved mechanism to subconsciously detect good ideas126. If this is the case, a simple algorithmic selection process in the brain has accumulated evolutionary baggage to become the complex driving force behind all art, which is often considered to be something distinctly human and non-computable. This means that aesthetics, the source of the 'soul' often described in works of art, could in principle be programmed into a computer as a subroutine to perform part of the process of selecting good ideas. In practice this would be very challenging, due to the contentious nature of aesthetics and the unavoidable element of subjectivity, and so developments in the study of aesthetics may be just as important as developments in neuroscience and computer science for the future of computers exhibiting creativity.
In the brain, the selection processes have to be performed in a somewhat clumsy and inefficient way. One of the most accepted theories is the process of 'incubation', which is the sub-conscious process of sorting through ideas and combining those which are connected127. Much research has been done into the factors which improve the efficiency of this process, such as REM (Rapid Eye Movement, a sign associated with dreaming) sleep, a positive mood and regular exercise128.

119 Hsu, Feng-hsiung (2002). Behind Deep Blue: Building the Computer that Defeated the World Chess Champion. Princeton University Press
120 http://www.ted.com/talks/charles_limb_your_brain_on_improv.html accessed 29th July 2013
121 The Dragons of Eden, Carl Sagan, Published by The Random House Publishing Group in 1977
122 http://blogs.scientificamerican.com/beautiful-minds/2013/08/19/the-real-neuroscience-of-creativity/ accessed 20th August 2013
123 The Psychology of Invention in the Mathematical Field, Jacques Hadamard, Published by Dover Publications Inc. in 1945
124 http://www.oxforddictionaries.com/definition/english/aesthetic
125 Derek Allan, Art and the Human Adventure: André Malraux's Theory of Art. (Amsterdam: Rodopi. 2009)
126 www.apa.org/education/k12/psych-aesthetic.ppt accessed 20th July 2013
127 Dodds, A. Rebecca, Ward, B. Thomas, & Smith, M. Steven (2004). A Review of Experimental Research on Incubation in Problem Solving and Creativity. Texas A&M University



The precise role of dreaming is often disputed: most neuroscientists agree that the subconscious processes involved in incubation are enhanced during dreams, but some dispute that the conscious mind experiences dreams relevant to its creative pursuits (for example, the anecdote of chemist August Kekulé's dream, in which he saw the ring-shaped form of benzene, which he had been searching for)129. The reason these factors improve the crucial incubation process and allow for the output of good ideas is that they put the brain in the best state to perform the calculations necessary to combine different ideas and filter out those which are not useful130. However, the process is very hit-and-miss, with different brains performing incubation to different degrees of success under different conditions. Various intrusive techniques have been found to improve incubation in test subjects, but these have generally been crude and potentially damaging, such as electrically suppressing sections of the brain.
There is no direct scientific evidence that this 'hit and miss' theory of creativity is the algorithm used in the brain, since decoding instructions in the brain is in practice impossible (given a network of many billions of neurones), and it seems unlikely that it could be proven without vast advances in technology. However, the evidence of the incubation process seems to suggest that it is the case. Additionally, one of the key tenets of the scientific method is Occam's Razor, which states that the simplest explanation, the one which makes the fewest assumptions, is the best131. Since there are no alternative theories except for the assumption that there is something immaterial about humanity's creativity, the hit and miss theory is the one most generally accepted by neuroscientists132.
Although few would deny that the algorithms used by the human brain for producing creative output have proven effective throughout human history, the human brain is actually a very unsuitable piece of hardware. Creativity requires the production of ideas, but the human brain has limits to its speed: for instance, it can only perform two conscious calculations per second. During the incubation process, faults with human memory can result in good ideas being lost and useful links between them not being made. Human brains are also notorious for forming engrams, closed networks of neurons that become fixed and can result in closed thinking133.

128 http://www.psychologytoday.com/blog/the-athletes-way/201202/the-neuroscience-imagination accessed 18th October 2013
129 http://www2.ucsc.edu/dreams/Library/domhoff_2004b.html accessed 1st November 2013
130 http://www.psychologytoday.com/blog/zig-zag/201304/enhancing-creative-incubation accessed 30th July 2013
131 http://math.ucr.edu/home/baez/physics/General/occam.html accessed 20th June 2013
132 http://www.ualberta.ca/~chrisw/Preprint-CNC-PB&R.pdf accessed 25th July 2013



Running similar algorithms to those used in the brain on a computer could result in massively improved creative output.
Computer programs which aim to mimic human creativity generally include several similar steps (a sketch of this loop in code follows the list):
1) Generation of random ideas. These should be restricted to some degree; for example, a musical composition program should only generate sequences of musical notes, as opposed to numbers or images. However, they should not be bounded beyond this, as the selection process comes afterwards.
2) The combination of relevant ideas. For example, a mathematical program might be able to spot two equivalent formulae.
3) The selection of good ideas using pre-programmed selection criteria, such as originality (which might be checked by comparing the idea to a database of existing ideas) or aesthetics (such as a pleasant melody in a piece of music or an elegant equation in mathematics).
Steps 2 and 3 will often be looped to further combine good ideas.
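As a rough illustration of those three steps, here is a toy sketch in Python; the 'melody' domain, the splice-based combination and the smoothness-based scoring rule are all invented for the example and stand in for the far richer criteria a real system would need:

```python
import random

NOTES = range(60, 72)  # MIDI note numbers spanning one octave (assumed domain)

def generate(length=8):
    """Step 1: generate a random idea - here, a sequence of notes."""
    return [random.choice(NOTES) for _ in range(length)]

def combine(a, b):
    """Step 2: combine two ideas by splicing them at a random point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def score(melody):
    """Step 3: a crude aesthetic stand-in - prefer small steps between notes."""
    return -sum(abs(x - y) for x, y in zip(melody, melody[1:]))

ideas = [generate() for _ in range(50)]
for _ in range(200):  # loop steps 2 and 3, keeping the best ideas each time
    a, b = random.sample(ideas, 2)
    ideas.append(combine(a, b))
    ideas = sorted(ideas, key=score, reverse=True)[:50]

print(ideas[0])  # the "best" melody found under this toy criterion
```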
The algorithms are complicated to produce, because programming a computer to look for things like aesthetics and originality is extremely challenging. However, a variety of computers have already produced original creative works.

The Current State of Creative Computing
Creative computers are often considered to be a thing of the future. However, a variety of computers have already been successful in producing original works of art, mathematics and music.
David Cope is a computer scientist who wrote a program called Emily Howell in the 1990s134. Emily Howell can access a large database of a specific artist's music and is programmed to make observations about the techniques used, for example a particular key or a recurring melody. Howell then uses the presence of these techniques as the selection criteria for her own randomly generated music, and after combining sections and refining them for extended periods of time she is able to produce original pieces which, in experiments, audiences cannot reliably distinguish from actual work by the musician. A similar program has been written by psychologist Philip Johnson-Laird, which produces original pieces of jazz music without even drawing inspiration from real musicians135.
133 http://www.psych.utoronto.ca/users/reingold/courses/ai/cache/ai_creativity.html accessed 20th July 2013
134 http://news.bbc.co.uk/1/hi/programmes/click_online/9777655.stm accessed 19th July 2013



However, developments have not been confined to music. Artist and computer programmer Harold Cohen has written a program called Aaron which produces original works of art136. He programmed in an understanding of human faces, how bodies move, different colour tones and other important artistic concepts. The program then rapidly generates artistic combinations before using aesthetic selection processes to choose and combine the best of them.
However, perhaps the most incredible example of a creative computer is AM, written by computer scientist Douglas Lenat of Stanford University in the mid-1970s. AM is a mathematical computer, but it does not simply crunch numbers: it discovers new mathematics (which is in line with Boden's definition of creativity) without the input of mathematicians! AM was programmed with a few basic concepts, such as set theory, as well as around 200 rules for spotting useful mathematical insights. After having run for an hour, AM had rediscovered the existence of addition and multiplication, the rules of Boolean algebra and the existence of prime numbers, and had even suggested that every even number greater than 4 could be written as the sum of two prime numbers: the famous Goldbach Conjecture. Centuries of human mathematical endeavour had been accomplished by a computer in an hour!
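The conjecture AM stumbled upon is easy to probe empirically, even though nobody has proven it. A short Python check of my own (an illustration, not part of AM):

```python
def is_prime(n):
    """Trial-division primality test - fine for small numbers."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    """Return one pair of primes summing to the even number n, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# Verify the conjecture holds for every even number from 6 up to 100.
assert all(goldbach_pair(n) for n in range(6, 101, 2))
print(goldbach_pair(100))  # e.g. (3, 97)
```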
However, AM's results were later thrown into question by a 1984 paper by computer scientist Graeme Ritchie. He pointed out that the programming language used to write AM, Lisp, has its foundations in lambda calculus, a mathematical technique used to probe the basic foundations of mathematics, and that this may have been the root of AM's success. However, this just means that Lisp is a good programming language in which to write mathematically innovative computer programs; it does not mean such programs could not potentially discover original mathematics, undiscovered by humans, in the future. A more serious objection was that Lenat may have accidentally provided AM with more mathematical knowledge than was intended during the programming process, which would make its leaps of creativity far less impressive, although this is disputed. Lenat later programmed Eurisko, a more efficient version of AM which has a larger base of mathematical facts to draw on and can store new discoveries in its database, effectively allowing it to learn. No new mathematical discoveries have yet been made, but the success of AM illustrates the potency of creative intelligence in computers and the potential for the future.
135 http://www.psych.utoronto.ca/users/reingold/courses/ai/cache/ai_creativity.html
136 http://news.bbc.co.uk/1/hi/sci/tech/1647086.stm accessed



Alan Turing devised his famous Turing Test to determine whether a computer can be said to be intelligent: can a human judge tell whether a statement was made by a computer or by a person? I performed a survey using philosophical statements and jokes (both creative outputs) made both by computers and by real philosophers and comedians. For each type, I had one statement made by the computer program CleverBot and three made by humans. My 35 respondents achieved a 29% accuracy rate in spotting the computer's philosophical statement and a 26% accuracy rate for the joke, similar to the 25% that would be expected from pure guessing among four options. When questioned, 91% responded that they had guessed. This seems to suggest that CleverBot, a simple online application, passes the Turing Test.
If one abides by this somewhat simplistic definition of machine intelligence, computers have already reached an incredible milestone. However, programs such as AM, Eurisko, Aaron and Emily Howell have clearly yet to surpass human musicians, artists and mathematicians. In the future, though, this could potentially change.

Conclusion: The Future of Creative Computing

One of the main ways in which computers could revolutionize creative endeavour would be through improved processing power. Most creative programs rely on the random generation of ideas and the selection of the useful ones, which depends on calculations being performed. If these calculations could be performed at a faster rate, creative output could be vastly increased, at least in terms of quantity. Moore's Law is an observation in computer science which states that the processing power of computers doubles every 18 months137, so the future in this regard looks very bright.
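A rough worked example of what that rate implies, taking the quoted 18-month doubling at face value: ten years contain 10 / 1.5 ≈ 6.7 doubling periods, giving a factor of 2^6.7 ≈ 100, i.e. roughly a hundredfold increase in processing power per decade.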
However, it is not quantity which is currently lacking, but quality (Aaron, for instance, can produce a work of art in mere hours that would take a real artist far longer, but none of its pieces has received critical acclaim from the art community). Limited processing power can be overcome by giving the computer more time (since processing power is simply the rate at which a computer performs calculations), and the memory capacity of computers can be made effectively limitless through the use of cloud storage, which means computers can already possess vast amounts of crystallized intelligence, far more than a human. However, the quality of creative output is still bottlenecked by fluid intelligence, which is determined by the algorithms the computer uses.

137 http://computer.howstuffworks.com/moores-law.htm accessed 30th October 2013



The algorithms will, no doubt, progress over time through human ingenuity. However, an interesting field is self-modifying computer code: programs which can modify and write themselves. If creative algorithms were applied to the field of computer programming, it is foreseeable that an upward spiral could be set off, with better algorithms in turn writing even better algorithms. Human creativity might be cut out of the picture altogether, relegated to the spark that set it all off.
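A toy Python sketch of the idea, with everything in it invented for illustration: this is 'code that writes code' in only the weakest sense, but it shows the loop described above, a program emitting and then executing the source of its successor:

```python
def successor_source(generation):
    """Return the source of the next program generation (a stand-in for a
    genuinely improved algorithm - here it merely counts generations)."""
    return (
        f"GENERATION = {generation}\n"
        f"def report():\n"
        f"    return 'running generation {generation}'\n"
    )

namespace = {}
for gen in range(3):                        # each pass "rewrites" the program
    exec(successor_source(gen), namespace)  # compile and run the new source
    print(namespace["report"]())
```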
In artistic endeavours, many would argue that using creative computers is pointless: the pleasure of art is its distinct humanity, and even if aesthetics could be mimicked by a computer, it is the rough-edged subjectivity of aesthetics which makes art so interesting. However, it is hard to deny the benefits of using creatively intelligent computers in the sciences. For example, in theoretical physics it is often necessary to visualise in more than three dimensions, especially when using phase space, a mathematical tool. The human brain is incapable of doing so and is therefore distinctly restricted, but a computer could, and so could produce more sophisticated insights. Furthermore, in the field of mathematics, deep insights are often found through trial and error, and with a computer's processing power this could be performed far faster than by human mathematicians. Since computers are built upon solid mathematical logic, they may also be able to provide insights more easily, as shown by AM's (albeit disputed) development of centuries' worth of mathematics in mere hours. The computers of tomorrow may be able to duplicate this feat with future mathematics, propelling humanity's understanding of the world forwards by centuries.
Contrary to common perceptions, computers are already capable of creative thought: just look at Aaron, or Emily Howell, or CleverBot's ability to pass the Turing Test. As processing power increases and self-modifying computer code is developed, it seems inevitable that computers will surpass humans, perhaps reaching intellectual heights that are incomprehensible to the human brain.
Once computers can reason creatively at a level equal to or greater than a human, it is foreseeable that humans may be rendered obsolete in the workplace as computers become able to fulfil more and more roles. Even if governments do not invest in such technology, the economic benefits of producing such intelligent machines would be immense: they could make accurate predictions of the stock market, or produce patent-worthy designs, or write original computer programs. Whether or not you feel this is a future we should be building, it seems inevitable that it is coming. It is also the lesser of two evils in a sense, since the alternative route to improving human intelligence is genetic engineering (especially growing babies outside the womb, where limits to cranial capacity can be avoided), and this presents far more serious ethical issues138.
Time-frames are very hard to predict, since computer science often advances in sudden leaps rather than steady progress. However, one thing is certain: creative computers will shape the future in more ways than is comfortable to consider.

Rowan Wright (Year 11)


_______________________________________________________

Is Interstellar Travel achievable within 100 years?

Introduction
The question I hope to answer with my project is: Is Interstellar Travel achievable within the next 100 years?
I will break down this question into three factors:
- The methods of spaceflight (actual and theoretical)
- The economic costs and sustainability of spaceflight
- The effects of travelling in space on the human body

I have conducted an interview with Kelvin Long, a fellow of the British Interplanetary Society, to further my understanding and to enhance my project. This interview is attached as an addendum.
I chose this project for many reasons. Firstly, space is a massive interest of mine, and I rank the accomplishments of the global space agencies among the greatest of humanity. Secondly, I am taking Physics, Maths and Astronomy at GCSE and hope that the knowledge I gain from this project will aid my understanding of these subjects. I hope to study engineering at university, and it is my dream to work in aerospace, and maybe to be an astronaut myself one day.

138 The Universe in a Nutshell by Stephen Hawking. Published in 2002 by Bantam Spectra.



It has been 44 years since the first manned moon landing, and it was almost 50 years before that event that most of the theory behind rocket science was created, by such minds as Hermann Oberth and Konstantin Tsiolkovsky. The first liquid-propelled rocket was launched on 16 March 1926. The first man flew into space on 12 April 1961, and when the New Horizons probe reaches Pluto in 2015, all of what we identify as our solar system will have been visited. What then?
Some would argue that spaceflight is in a deadlock, that it has ground to an ignominious halt. I would say otherwise. The work done by the global community in building the International Space Station was necessary, if not headline-grabbing. And now a new collection of private companies is heading into orbit. There is hope for future space exploration, but can all the massive challenges be overcome?

Research Overview
Research Outline
I completed my research one book at a time over the course of six weeks, which I found to be a very efficient method. I created three documents, one per sub-question, and added to them in bullet-point form any relevant information I came across. I also maintained a search for relevant websites that could help. I became a fellow of the British Interplanetary Society early in the research process, enabling me to receive many back issues of Spaceflight magazine and the DVD How to Colonise the Stars, both produced by the BIS.
Evaluation of primary research
There were two pieces of primary research in my work, the first being my questionnaire. While this research was good for showing a broad pattern, I only questioned around 50 people in total, all of the same gender and from the same school. That limited the conclusions I could draw, but it was still enough for me to see the trend. The other piece of primary research I conducted was an interview with Dr Kelvin Long, a fellow of the British Interplanetary Society. He is at the cutting edge of the topic I am researching, and gave detailed, meaningful answers which were of great use. There was little room for bias, as most of the questions I asked were objective and fact-based.
Evaluation of books
I used four books in my research, and found that print media is still the most valuable and accessible medium there is. Packing for Mars is modern and informative, and it contained all the information I needed to write the section on human biology. The author drew her information from sources at space agencies and from astronauts, lending credence to her work. As the book is factual, it is largely free from bias.
The next book was The Starflight Handbook. The book is 24 years old, but the scientific content is still very much valid, as little research has been done on interstellar travel since. However, some aspects will be out of date, so I strived to use other research as far as possible.
The next was Centauri Dreams. This is only 9 years old, and still uses the same science found in The Starflight Handbook, only the author made it more accessible and easier to understand.
The final book, Space Probes, is a definitive history of every unmanned spaceflight beyond low-Earth orbit. It also provided a well-sourced and cited history of the origins of spaceflight and the minds who conceived of them. The book is written not by someone in the field of aerospace but by a doctor of neuroscience, which does raise some questions over its credibility. However, I do not doubt the author's scientific and historical integrity, as all of the information found within appeared to be genuine.
Evaluation of websites
I only used two websites in my research. The first was a BBC interview with astronaut Suni Williams about her experiences in space. As the BBC is required to be impartial in its coverage and the answers given were very helpful, I felt comfortable including the website.
The only other one I used was a short singularityhub.com article on an upcoming NASA mission. Singularity Hub is a respected technology website, and the story was cited back to an original NASA article.
Evaluation of Spaceflight magazine
Spaceflight magazine was very useful as it provided opinion and news about the cutting edge of the modern-day space industry. I did use an opinion piece in my research, but I intend to show the other side of the argument as well.
Conclusion
The variety of sources I used ensured that I have all the relevant information I need to go about writing my project.

Discussion
Propulsion: there and back again
The furthest craft from the Earth at the moment is Voyager 1, which on 12th September 2013 was confirmed as the first man-made object to reach interstellar space. It took the craft 36 years to reach that point, and now it will forever orbit the Milky Way. Voyager 1 represents mankind's best effort in the field of interstellar travel so far, which brings into focus how far we have to go to reach Alpha Centauri, our nearest star system.
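To put that gap into numbers (a back-of-the-envelope calculation using Voyager 1's cruise speed of roughly 17 km/s, a figure not given in the essay): Alpha Centauri lies about 4.37 light years, or roughly 4.1 × 10^13 km, away; at 17 km/s the crossing would take about 2.4 × 10^12 seconds, which is well over 70,000 years.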



Voyager 1 was launched on a conventional chemical rocket, the Titan IIIE. The craft that will carry cargo to other solar systems will be much bigger, much faster and much more extraordinary.
Currently there are three bands of theoretical interstellar craft: firstly, craft that use energetic fuels, such as nuclear fusion and antimatter; secondly, craft that pick up fuel as they go along, such as ramjets, solar sails and laser-beamed propulsion; and thirdly, craft that go against the laws of physics as we know them, such as warp drives and wormhole craft.



Figure 1: 'Power', a depiction of a Saturn V launch by Paul Calle, 1963, oil on panel

Band One Propulsion
The first band is the one that we know most about, so I will start there. The idea and technology required to build a nuclear fission-powered spacecraft have been around for half a century, but none has ever been launched. The Partial Test Ban Treaty of 1963 prohibited the detonation of nuclear devices in space outright, and even if it were lifted, serious ethical issues would remain. Nuclear fusion-powered craft would go faster than traditional rockets, with less fuel, but fusion technology as a whole is still unreliable; in a hundred years, it seems likely that some of those problems will have been overcome. Antimatter rockets use the annihilation of protons and antiprotons to create energy, but the cost of producing antimatter is currently titanic, and it is probable that producing the amount required to mount an interstellar mission will still be out of reach within a hundred years. However, mere milligrams of antimatter could be used to explore the solar system.
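For a sense of why even milligrams would matter, a textbook E = mc^2 estimate (my own, not a figure from the sources): annihilating 1 mg of antimatter with 1 mg of ordinary matter converts m = 2 × 10^-6 kg entirely into energy, giving E = 2 × 10^-6 × (3 × 10^8)^2 ≈ 1.8 × 10^11 J, roughly the energy released by 40 tonnes of TNT.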
Band Two Propulsion
Band two propulsion methods drift even further into the realm of science fiction than band one, but they are still worth considering. Ram Augmented Interstellar Rockets (RAIRs) would funnel charged particles in space into a fusion reactor before accelerating them. This process is close to being realised, meaning it is likely that future astronauts will visit other solar systems using a RAIR.
Another likely band two method of propulsion is the solar sail, which relies on the momentum of photons to accelerate the craft. The sail required to lift a thousand-tonne payload to Alpha Centauri (the benchmark for interstellar propulsion systems) would need to be the size of Texas, and the spacecraft would have to come within 0.01 AU of the Sun (1 AU is the average distance between the Earth and the Sun).

Band Three Propulsion
Band three propulsion methods are unlikely to materialise within a hundred years. They include such curiosities as the Alcubierre warp drive, which sends ripples through the fabric of space, and the use of black holes as shortcuts between two points in space, thereby creating a loophole around the speed of light.

Conclusion
In conclusion, the technology required for interstellar travel could be ready by 2113, but will there actually be an international drive to go? And will this mission carry human cargo?

Human health: Learning to live in the void

Figure 2: Donn Eisele, command module pilot of Apollo 7, during that mission
All through our evolutionary history, humans have lived within the protection of Earth. Gravity, an atmosphere, fresh air: we take them all for granted. What happens to a person when you take away everything we have on Earth, and leave them weightless, exposed and trapped inside a tin can for months on end?
The longest any human has spent in space in one stint is 438 days. A speculative trip to Mars would take around 500 days. At around 10% of the speed of light (realistically the maximum attainable within 100 years), a trip to the nearest star would take around 44 years.
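The arithmetic behind that figure, using the standard 4.37-light-year distance to Alpha Centauri: t = d / v = 4.37 ly / 0.1c ≈ 44 years, since at a tenth of light speed each light year takes ten years to cross.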
Psychological and Physiological Problems
Basic artificial gravity systems do exist, and normally work by simply spinning the spacecraft. Without them, there's no way humans could survive that long: microgravity causes bone loss, eyesight deterioration and muscle atrophy, as well as reduced cardiovascular fitness.
Then we face the problem of cabin fever. 44 years is a long time to spend in a metal box, and there are two solutions: building a massive ark the size of a city, or cryogenically freezing the inhabitants.
The first solution is unattainable within a century, because the added cost would be astronomical, and the propulsion required would be a different matter entirely. Cryogenics, alas, is also fundamentally flawed. The human body cannot be flash-frozen, heated up again and expected to survive: different cells in the human body have different freeze-thaw rates, meaning that the kidneys, say, will thaw out before the heart or the brain. This would induce organ failure and pointless death.
With our current systems of propulsion, it is overwhelmingly likely that it will be robotic explorers making the first leap to another solar system, not humans. There is also the danger that an ark mission, once launched, could be overtaken ten years into its journey by a more advanced ship. Mechanical explorers are simply better suited to extreme conditions.
Economics: the cold logic behind the dream
There is no denying that space programmes are incredibly expensive beasts. They take billions to maintain and have become increasingly difficult to justify. Western recession has led to massive government spending cuts, and agencies like NASA and Roskosmos are underfunded shadows of their former selves. NASA's next manned launch is tentatively scheduled for 2021. The space shuttle is no more. The Apollo programme is a distant memory.
Who now is left to carry the torch? The answer lies in the world of business. Companies such as SpaceX and Virgin Galactic promise a golden future for the space industry, but can they deliver?
New Space: pure romanticism or profitable enterprise?

There is an unspoken plan for the development of the private space industry. It goes as follows: first, create a risk-free, cheap transportation service to low-Earth orbit for tourists, astronauts and cargo. Second, develop temporary habitats such as space hotels and laboratories using inexpensive technology such as inflatable modules. Third, use the profits from this to fund asteroid mining and solar energy technology, to solve our energy and resource problems. Finally, establish permanent self-sustaining bases in orbit and on the surfaces of other planets. This is truly a plan of dreams, but will it actually happen?
To find out, let us focus on the first step: getting to orbit. Many companies have made this first leap. Bigelow Aerospace has three unmanned habitats in orbit. SpaceX and Orbital Sciences have completed successful resupply missions to the International Space Station. There is great progress among the runners in this new space race. So far, so impressive.
However, will the private space industry ever extend beyond shipping cargo, scientists and tourists to low-Earth orbit?
Limitations
This is where problems start to arise. Interstellar craft require far more capital than private companies can access. One area being pitched as a massive source of income is platinum mining on asteroids. Besides the obvious engineering concerns (no spaceflight has ever returned from an asteroid, and no asteroid suitable for platinum mining has yet been found), there are severe, unresolved economic issues as well.

Figure 3: A NASA-envisaged manned mission to Mars

Platinum currently trades at around $2,000 an ounce, which values the platinum market at $13bn a year. However, prices change to suit demand. Just how elastic is the platinum market? Recently, an extra 250,000 ounces came onto the market, causing a 25 per cent decrease in the price of platinum. Not only is the market extremely sensitive, but space law also prevents ownership of celestial bodies. Even if platinum mining can be made profitable, there is nothing to stop rival companies from mining the same asteroids, which would damage the economics of the project.
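A rough implication of those figures (my own arithmetic from the numbers above): $13bn a year at $2,000 an ounce corresponds to about 6.5 million ounces traded annually, so the extra 250,000 ounces was only around a 4% increase in supply, yet it moved the price by 25 per cent. Flooding the market with asteroid platinum would largely destroy the very price that justified the mining.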


But will private companies ever lead an exploratory voyage to another star? I consider this highly unlikely. There is minimal commercial gain to be made, the risk would be too high and the unknowns are too many. In the past, all the great explorers were government-sponsored; Columbus, for example, was paid by the Spanish royal family to explore. This does not look set to change in the future.


Conclusion
In short, it is extremely unlikely that mankind will reach another star before 2113. The technology required is too advanced and the variables are too many. The current and future global space industry does not seem conducive to interstellar exploration: too little is being spent on space exploration worldwide, and economic doubts cloud the future of the industry.
It could also be argued that humanity is simply not mature enough to leave the Solar System. After all, if we can't take care of our own planet, why should we assume we could handle two? Maybe a civilisation that has stabilised the environment and built international bases on other bodies in the Solar System, as well as orbiting habitats above Earth, might field a mission, but that civilisation is a very long way off indeed.

Oscar Hinze (Year 11)


_______________________________________________________

How has Colonisation Impacted Sri Lanka?

Introduction
During the course of this project, I will be investigating the question 'How has colonisation impacted Sri Lanka?'. I will look at the state of Sri Lanka pre-colonisation and attempt to analyse this in comparison with its situation post-colonisation and in current times. I will be using resources such as my local libraries to gather information, and also books about Sri Lankan history and the British Empire. Internet resources will prove particularly useful, as many of them are directly related to my project. I will need to take into account any possible bias, as some may be written by Sri Lankan authors, etc.
I have chosen this project because my family comes from Sri Lanka; I have lived there, celebrated Independence Day and heard stories about colonisation. I was intrigued to explore the impacts, and how the monarchy gave way to foreign rule. As I am doing History GCSE, it will be useful for my analysis skills. I would also like to explore the connection between past and present, and, as I have an interest in finance, to analyse the economy and the arguments on both sides.
I will evaluate life during the pre-colonial era: for example, at that time Sri Lanka was a trading hub, and there was a long period of monarchy. Then, for colonial times, I will use the information I have gathered about foreign influences, and also the statistics of positive and negative effects. For recent times, I will use HDI and GDP per capita, etc., to analyse how well Sri Lanka is developing; however, I will need to take into account the civil war, which is also a sub-topic of my research, as pre-colonial and colonial times strongly influenced the tensions. I will be analysing groups of factors, namely social, cultural, economic and political.

Research Review
Research Outline
I began my research on the 6th of August, starting with the question 'What was the state of Sri Lanka in pre-colonial times?' and ending with the question 'What are the effects of colonisation today?' on the 6th of November. I generally assessed each question and gathered the relevant resources before moving on to actually investigating it, and if, later on, I found that I needed more information regarding the topic, I returned to it.
Evaluation of Primary and Secondary Sources
I have mainly used the internet for my research so far, since the books I wanted were located at distant libraries and only two of them have arrived. I conducted a questionnaire to find out the opinions of my Sri Lankan relatives, and although I am aware that this may be biased, I thought it would be useful to gather information from those who may have been affected first-hand, as it allows me to make a relative judgment. Most answers appeared to be unbiased. I also conducted a Q&A session with my uncle, who was born shortly after the end of the colonial period and experienced its after-effects, and his answers were very useful in capturing a personal perspective.



Evaluation - A History of Sri Lanka by K. M. De Silva
This is the book that I have primarily used, and much of the information in my project is taken from it. It is a very comprehensive documentation of history, which also proved to be disadvantageous, as it was very difficult to locate and extract the relevant information: much of it is explained in too much detail and had to be thoroughly examined and cut down. It details the Anuradhapura Kingdom in four stages, and goes on to explain the Polonnaruwa Kingdom and then the various conquests of the island and their impacts, as well as Sri Lanka post-colonisation. This is very useful as my project is split into stages, and on evaluating these areas I was able to locate a lot of information which I could not find on the internet. Most of it appears to be reliable, and sources are cited, which reduces the possibility of bias; other books did not explain things in as much detail. The author is of Sinhalese origin, but the language used is unbiased and the research thorough. They are also a specialist in this subject area, which makes the book more likely to be accurate and well-presented.




Evaluation of Websites Used
Internet websites are very useful as they provide an array of opinions, facts and information, much of which is linked directly to my project. Wikipedia, which is an established website, provides detailed information on the colonial and pre-colonial history of Sri Lanka, which I found very useful; it allowed me to build on the knowledge I already had from reading the book Tales of Ceylon regarding the system of monarchy used in ancient times, the various capitals of the country over the years and how it flourished. Wikipedia usually provides accurate information; however, as it can be edited by anyone, it is possible that it could be biased either way or could have been tampered with. The information appeared to be fairly reliable, however: none of it was unbelievable.
Several websites, although not well-known, provided very useful information. One website, called Beyond Intractability, even suggested that the unfair distribution of wealth during colonial times may have led to the Sri Lankan Civil War, an idea which I have heard many times in the past. The University Teachers for Human Rights website also put forward this idea, and most of the information appeared to be written in an unbiased way, although it was predominantly negative; as it came from the Jaffna area, it may have been biased, as that group of people was most significantly affected during the Civil War. Websites proved very useful; however, it was difficult to know what was biased and what was not. I also used Lonely Planet, which provided a brief history of pre-colonial times, of how Sri Lanka used to be a trading hub, and of the two ancient capitals, Anuradhapura and Polonnaruwa. This backed up some information which I had previously read, and as it is tailored towards tourists it has to be reliable, though it may also be tweaked to make Sri Lanka seem a more attractive destination.
The World Bank provided some extremely valuable statistics for evaluating the state of Sri Lanka in recent and current times: its data can be used to assess how Sri Lanka is coping now, may tell me whether the situation is improving after the civil war, and will show how the country is experiencing the after-effects of colonisation.
Conclusion of Research
Overall, I have gathered a great deal of information from a number of books and internet sources. Most of my sources were reliable; however, especially with books, it was difficult to find resources related to my project. Books formed the basis of my research, as they are generally more reliable and contain more detailed information, and they were backed up by internet sources and primary research where appropriate.

Discussion

Section 1 - What was the state of Sri Lanka in pre-colonial times?
Sri Lanka is an island off the coast of India which has been colonised repeatedly throughout history. The first settlers were Indo-Aryans from India in the fifth century B.C., and they were followed two centuries later by Tamil migrants; the Indo-Aryans eventually went on to become the Sinhalese. According to the Mahavamsa, Vijaya (543-505 B.C.), the first king of Sri Lanka, came from northern India. However, the Mahavamsa is an ancient text and may be inaccurate. This was, nonetheless, the start of the monarchy which ruled Sri Lanka until the arrival of the Portuguese.

Figure 1.0 Map of Sri Lanka



The monarchy was an important part of Sri Lankan society. Throughout history there was much political instability, with kings battling for power. South Indians (Tamils) did seize power from Sinhalese kings and ruled for long periods, and Tamil influence increased. Perhaps the most famous of these battles was between Elara and Dutthagamani, who were Tamil and Sinhalese respectively. This could show that ethnic rivalry existed even in pre-colonial times, but both kings are known to have had supporters from the other ethnic group. Elara (205-161 B.C.), a Tamil king from south India, was accepted by the Sinhalese people and had a reputation for being a fair ruler; ethnic rivalry must therefore have been very limited. There had, however, been ethnic rivalry during the time of Manavamma, who stripped Tamil courtiers of their power, and during the period of South Indian invasion the perception of Tamils may have worsened, though this is unlikely to have lasted. Until that time, Sri Lanka had been divided into kingdoms, and Dutthagamani (161-137 B.C.) was the first king to bring the country together under one ruler. There was more stability during his rule; however, it did not continue. The system depended on who was in power at the time, which meant that politics had no consistency. This was partly the reason that Sri Lanka was unable to defend itself against colonisation.
Irrigation systems are considered to be one of ancient Sri Lanka's greatest achievements; throughout the political instability, irrigation remained the one constant, and it just kept expanding. In 1859, James Emerson Tennent remarked that 'no people in any age or country had so great practice and experience in the construction of works for irrigation'. This shows the extent of the work which took place, beginning as early as the time of Panduka Abhaya (437-367 B.C.). The system allowed rice and other crops to be grown in the dry regions, which would otherwise have lacked an agricultural source of income. The fact that the builders overcame many problems, including an inconsistently sloping landscape, showed an ability to be self-reliant, and important structures such as the Alahara Canal proved that Sri Lanka was one of the technological leaders of the time. There were more crops than necessary, which allowed the kings to invest in building cultural monuments such as the Jetavanaramaya. Irrigation complexes were built around the cities of Anuradhapura and Polonnaruwa (the two successive capitals). In the dry zones especially, each region often had its own ideas for conserving rainwater. The people worked together in small groups, and the country was far ahead of any of its neighbours at the time. It was, however, heavily reliant on the system.
External trade was another area in which Sri Lanka flourished: it had several ports, of which the most important was Mahatittha, and dealt with numerous countries, mostly in Asia. Its trading items were gems, ivory, elephants and cinnamon, the last of which would later become a great area of interest (see Section 3). Through India, Sri Lanka traded with the Roman Empire and became an important stop on the Silk Road. The port went through periods of decline and resurgence, but eventually fell into disuse when the Cholas became involved in the tensions between the Pandyans and the Sri Lankans.
Sri Lanka at that time had an early form of currency and a system in which grain could be traded. It lacked military power, however, and fought only occasionally. Parakramabahu I was the first king to pursue a foreign relations policy. Eventually, due to other pressures, the country split into kingdoms: Jaffna, in which the Tamil kings were determined to take power over the Sinhalese, as well as Ruhuna, Dambadeniya and Kotte, to name a few. The fifteenth century nevertheless saw an increase in trade, though a decrease in agriculture. Parakramabahu VI (1411-1466) was the last ruler to bring the country together, and he managed to resist invasions far more successfully.
Conclusion
During pre-colonial times, it is easy to see that Sri Lanka was a trading hub with great irrigation systems and a thriving culture. Its progress was hindered by political instability, and there were problems which resulted in the decline of the hydraulic civilisation and of trade, although the latter did re-emerge. This political instability would later prove to be the reason for the country's colonisation, and although life during pre-colonial times was relatively calm, it changed with the arrival of the Portuguese.
Section 2 - How and why was Sri Lanka colonised?

Portuguese
The Portuguese were the first to arrive, in 1505, but they only began their quest for power in 1517, by building a fort in Colombo and expanding from there. They wanted to use the island for trade, having seen its potential, and were most interested in cinnamon. They had a great deal of naval power and better technology than the Sri Lankans, and used both to gain influence. They were viewed as a protecting force by the king of Kotte, and thus Kotte was the place where the Portuguese began to have real power, the kingdom becoming increasingly reliant on them. Originally they were prevented from taking power in Jaffna, but Sri Lanka's resistance eventually crumbled and the Portuguese took over in 1591. If there had been better resistance and firmer rulers, perhaps colonisation would have been resisted at this point.
Dutch
Portuguese control was ended by the Dutch, who made contact with Rajasimha II; he wanted the Portuguese out of Sri Lanka and hoped the Dutch would help. The Dutch, like the Portuguese, wanted to take over the cinnamon trade. They began their conquest in 1640, and by 1658 they had taken over from the Portuguese.


However, the Dutch eventually developed sour relations with the Kandyan Kingdom, and the Kandyan ruler wanted another European country to drive out the Dutch, who had managed to take over a number of ports.
British
The British took an interest in Sri Lanka in 1762, and finally took over in 1796. They too were interested in external trade, particularly cinnamon; at first they did not have much control, but eventually they made their presence clear.
Conclusion
If Sri Lanka had managed to resist the invasion by the Portuguese, it may never have been colonised, as the failure would have discouraged other countries, and the Sri Lankans would not have ended up in a vicious cycle of being unhappy with each foreign power and seeking a new one, which in turn proved just as unwelcome. A stronger military and political stability would have been required to resist invasion, as it was the political battles which gave the king of Kotte an incentive to ally with the Portuguese.
Section 3 - What was the quality of life during colonial times?
Portuguese Period
When the Portuguese took over, things took a turn for the worse. They killed Sri Lankan kings and then exerted dominance over the areas those kings had previously ruled. The Portuguese tried to take over Jaffna and gained control of the pearl fishery and the trading of elephants. However, they were unable to extend their control to Kandy, showing how the people disliked colonisation and wanted the Portuguese kept separate from the country's affairs, even as the Portuguese attempted to spread Roman Catholicism across the country. This is what led to the arrival of the Dutch in Sri Lanka: everyone was desperate for the Portuguese to leave, showing that the quality of life at that time was poor in terms of freedom, and that the people felt oppressed in religion as well as in trade. The Portuguese also used the caste system to their advantage, which would have undermined certain groups in society.

Figure 2.0: Portuguese (later Dutch) fort
At this time the Portuguese introduced thombos to the
country: registers recording how much each landholder
owed the state. This was a very useful system, later taken
up by the Dutch. The Portuguese also introduced the
concept of paying quit rent in lieu of services. They took
over a number of villages and took money from the Sri
Lankan state, thus exploiting its land. They changed the
rules on buying and selling land so that the supply of
trading goods would not be jeopardised by producers
selling off their land - this benefited both the Portuguese
and the state, though the Portuguese gained more, as they
had a good deal of control over trade (especially in
cinnamon). They tightened this control by making
Colombo, which they held, the only place where cinnamon
could be traded, so as to increase their profits. This was
total exploitation of the state and its trade - villages were
also forced to sell areca palm to the state at very low prices.
The people were unhappy and deprived of their earnings.
However, the Portuguese did help Sri Lanka to become a
trading presence.
Dutch Period
During the Dutch period matters quickly worsened, as they
too wanted to exploit the country's resources: in 1658 they
presented a bill to the king in which they had heavily
undervalued the goods received, in order to extract more
money from the state for their protection services. From
the start, however, the king was aware of and disliked this
exploitation. The Dutch took over ports in order to gain
more control over trade, as well as parts of the kingdom of
Kandy. They remained on poor terms with the king, who
repeatedly tried to resist them. The Dutch then took
control of Jaffna and gained a complete hold on its trade as
well. They raised tax rates even though the people could
not afford to pay them, which led to a poor quality of life.
The people were thus subjugated by a foreign government
in their own country, while the Dutch multiplied trading
regulations and completely took over the trade of all goods
except rice.
When the Dutch seized the ports there was an economic
slump, and it was the Dutch who made the profits,
exploiting producers by paying them low prices just as the
Portuguese had done, but to a worse extent. In all matters
except trade and territory the Dutch were happy to help
the Sri Lankans, and assisted them in spreading Buddhism
to other Asian countries. Throughout their rule the Dutch
also introduced a legal system which particularly affected
property, persons and rules of succession. Overall, Dutch
rule left a country functioning not for the people's benefit
but for that of the Dutch. The only positives were the
continued trading presence and the improvements to the
legal system. They also introduced registration of births,
marriages and deaths, and they created the first printing
press, in that way introducing Sri Lanka to the wider
world.

British Period
During the British period many things changed. The
turning point in British rule over Sri Lanka came when
they decided to try to improve the country as a whole; they
were the first of the colonisers to take an interest in
helping the people, even if they had the ulterior motive
that this would result in greater production of goods. They
already had control of trade - at the time, cinnamon was
the only major export, and Sri Lanka became a leading
cinnamon exporter, which is thought to have been partially
due to the British handling of trade. Trade continued to
boom as the years went by, and investments were made in
coffee towards the end of the 1820s, which improved the
economy. The British also brought in rubber, which was in
demand, and this would have made Sri Lanka a trading
hub of the time.
They built several roads, and in 1858 they also began to
build a railway, which is considered one of the most
important contributions to Sri Lanka, as it enabled the
transportation of goods and people and increased
inter-territorial trade. The British had taken complete
control, which meant that Sri Lanka's former leaders had
no power and no political influence. Eventually, the British
created a bureaucracy made up of various groups from
different parts of Sri Lanka, who were encouraged to
abandon the mentality that they were separate peoples
and to unite as Sri Lankans. This made the Sri Lankans
more confident in British rule and won the British their
support, and it runs contrary to claims that colonisation
was one of the causes of the civil war (see Section 4).
With their construction of hospitals and provision of
healthcare, the British also improved social conditions.
Electricity, gas and communications developed, and these
flourished in the cities. Dr Mahen Tampoe describes the
Ceylon Civil Service, introduced by the British, as a
genuine gift to Sri Lanka, and it marked a key change from
the political instability which had rocked the country
during pre-colonial times. The British also restored old
monuments and thereby influenced the cultural life of Sri
Lanka. The education system improved, and the
University of Ceylon was opened. The 72.8% literacy rate
in Colombo at the end of British rule is a testament to this
improvement, being substantially higher than in most
other Asian countries at the time.
However, cinnamon workers had a reputation for being
overworked, leading to deaths, desertions and a poor
quality of life. Tea and coffee labourers had the same
problem, with living conditions similar to those of the
cinnamon workers. Young boys were sometimes sexually
abused by officials. Sri Lanka participated in WWI, which
had little impact on the island; however, it was involved in
WWII as a defence base, and it could be argued that it was
unfair to involve Sri Lanka in a conflict which had nothing
to do with it, only with Britain. Although certain groups in
Sri Lanka were wealthy, there was large economic
inequality, and most people were still living in poverty;
almost nothing was done to change this, leading to a
rich-poor divide. In this, Sri Lanka was following in the
footsteps of Britain, which was experiencing similar
problems at home. Economic and social reforms were
issued, but were immediately met with dissatisfaction as
tax levels increased, and matters looked to be no better
than they had been before. In 1815 the British took control
of Kandy; a rebellion followed in 1817-1818, but the British
subdued it. In 1818 the British took control of the whole
country, much to the chagrin of the people.
Out of all colonisations, the British one was the most
successful for the people, and although they were still
largely powerless and prevented from trading, as well as
sometimes overworked, the numerous improvements in
infrastructure, politics and economy made by Britain were
invaluable, especially to a country which had fallen into
disrepair after the other colonisation attempts.
Conclusion
There were positive and negative impacts at the time from
all of the colonisations, with the Portuguese being the least
successful, due to the scarcity of positive effects, and the
British being the most. There were common aspects to all
the colonisations - for example, both the Portuguese and
the Dutch used terror to capture land, burning down royal
palaces. However, all three colonisations resulted in a
flourishing education system, which reached its peak
during the British period. The Portuguese and Dutch
founded schools which taught vernaculars and arithmetic.
Generally, though, the quality of life in Dutch and
Portuguese times was very poor, and in British times it
varied depending on one's social group. All three powers
showed a general lack of concern for the welfare of the
people, who went largely unnoticed (especially the poor);
this shows that the colonisers were out for their own gain,
and that the quality of life cannot have been very good, as
it was never one of their main objectives.
Section 4 - What was Sri Lanka like post-colonisation and
how was it affected?
When Sri Lanka gained independence from Britain in
1948, it was taken over by a Sri Lankan government
composed mainly of Sinhalese. However, this did not
appear much different from colonisation, as the
government ruled in a very autocratic fashion rather than
democratically. It could be argued that Sri Lankans did not
know how to run a democracy, since all previous rulers had
been all but autocratic, though it is likely that they had
some idea. It was affluent families who entered politics, a
direct consequence of the economic inequality formed
during British rule, although the traditional caste system
had partially broken down. The United National Party
(UNP) took power and gave the impression that it wanted
equality for Sri Lankan citizens. At that time, the Indian
Tamils who had come to Sri Lanka to work on tea
plantations were anti-UNP, and there were worries that
they would vote for left-wing parties. This situation was
itself a consequence of British rule. In 1948 and 1949,
citizenship acts were passed which made it almost
impossible for them to become citizens. This was followed
by an Act of Parliament which stated that workers
originating from India were not permitted to vote. This
increased the influence of the UNP, as the group which
would have voted against it had been stopped.


The country continued to produce and export rubber, tea
and coconut, all of which had been introduced and built up
during British rule - a positive long-term effect. In the
years following its independence, Sri Lanka showed great
promise, with a flourishing trade system, three acres of
jungle land at its disposal, and a higher standard of living
than its Asian neighbours. This was a result of social
welfare schemes implemented by the British during the
1930s, which continued to have an effect. The mistake
made was not putting enough of the government's budget
into industrial works - which, according to A
Comprehensive History of Sri Lanka, would have kept the
economy functioning - and instead spending money on
hydroelectric power, which turned out to be a
disappointment. The Gal Oya project, commenced in 1952,
cost 10% of the annual revenue from tea. The new items of
export played their part in helping the government afford
to commence such new works.


It has been claimed that there was rivalry between Tamils
and Sinhalese during British rule, but that the British did
not intervene and sometimes encouraged the enmity. The
separation of Indian Tamils from the Sri Lankan people
could also have encouraged the mentality that the groups
should remain separate and that one was inferior to the
other (see Section 3). It is also alleged that the British
deliberately treated the Tamils better in order to anger the
Sinhalese and split the two groups as part of a
divide-and-conquer strategy, simply wanting power. If so,
this may have been one of the causes of the civil war which
raged across Sri Lanka from 1983 to 2009 between
Sinhalese and Tamils. This would be a very severe effect,
but there is no strong evidence that the British ever did
this, or that it was one of the causes (see appendix). Sri
Lanka did, however, continue to experience growth long
after its independence: for example, GDP figures rose from
$190 in 1978 to $330 in 1984. However, this cannot
necessarily be attributed entirely to British rule, as it came
long after independence, with sufficient time for Sri Lanka
to have developed on its own.
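As a rough check on what these figures imply (my own
back-of-envelope arithmetic, not taken from any cited
source, and assuming simple compound growth over the
six years):

\[ \left(\tfrac{330}{190}\right)^{1/6} - 1 \approx 0.096, \]

that is, roughly 9.6% nominal growth per year, before any
adjustment for inflation.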
Conclusion
Sri Lanka was still experiencing the after-effects of British
rule and colonisation long after its independence, which
shows the extent of the impact it had. Portuguese and
Dutch rule had very few long-term effects; only the legal
system from the Dutch rule remained. Some of the
long-term effects, visibly, were positive, such as the
exports, which were highly successful and continued to
provide a source of income. However, the way in which the
country was ruled might have been different had the
British left the Sri Lankans with some idea of how to
govern and of what made a successful country. The civil
war is also one of the potentially highly negative
after-effects.
In today's Sri Lanka, where the civil war ended four years
ago, colonisation continues to have an impact, although
much less than it did immediately post-colonisation. The
infrastructure created by the British continues to be used,
and the building systems they introduced are now widely
in use in Sri Lanka. The education system has simply
continued to expand, and the country is now home to a
number of universities. Lessons are often taught in English
at school, which shows that it is British rule which
continues to have an impact. Sri Lanka remains one of the
world's largest exporters of cinnamon, and the native tea is
famous as Ceylon tea. Coconuts, rubber and areca palm
also continue to be popular exports dating from the times
of the British and Portuguese.

Section 5 - What are the effects of colonisation today?

The globalisation of Sri Lanka and its introduction to the
rest of the world is credited to the empires: mainly that of
Britain, for its successful marketing and management
strategies, but also the Portuguese, for being the first to
realise the importance of the cinnamon trade and to export
it widely. Since it is these exports which sustain Sri
Lanka's economy today, they count as positive effects of
colonisation. The legal system implemented by the Dutch
also has elements which continue to be used in Sri Lanka
today. The people are no longer oppressed, but are still
marked by colonisation - even the genetic make-up of the
country has changed, with a community called the
Burghers, descended from the European colonists, showing
the long-term impact. Six per cent of Sri Lankans are
Catholics, showing the effect of the empires, although the
methods of conversion were forceful, and this is perhaps
why the impact became long-term.
The standard of living continues to be high, especially in
comparison with that of its Asian neighbours, and it is the
only country in South Asia with an HDI rating of high, at
0.715. It also has one of the highest real GDP growth rates,
placing in the top forty in the world with a figure of 6.8%.
Had the civil war not happened, Sri Lanka would probably
have experienced this rate of growth much earlier. The
civil war is one of the negative impacts still felt today - it
cannot be put down to colonisation alone, but it has had
many detrimental effects, such as loss of resources, loss of
lives and strain on the state. GDP (PPP) per capita is
$6,247, which is relatively low in comparison with the rest
of the world but high in comparison with its neighbours,
showing the country's position in the global economy.
However, after colonisation the country lost some of its
direction, and the disruption that followed caused political
rivalry and instability of the kind experienced in
pre-colonial times. The exploitation of natural resources,
including gems, has meant that Sri Lanka's initial wealth
has been very much depleted, leaving it with crops as its
main trade.
Q&A Session with my grand-uncle, who directly
experienced the effects of colonisation

In your opinion, has Sri Lanka benefited at all from
colonisation (e.g. having an increasing global presence)?
Yes, somewhat.

Do you think the Sri Lankan civil war was caused by
colonisation or other factors?
Both colonisation and many other factors caused it (namely
communal jealousy, lack of vision by leaders, lack of
tolerance, and lack of constitutional rights for the
minorities under a united Sri Lanka).

Would Sri Lanka today be better if colonisation had not
happened, and why?
Yes; given the events of the post-colonial era, it is hard to
argue otherwise.

What is your opinion of life during colonial times?
Submissive.

Was colonising fundamentally wrong on the part of the
Dutch, British and Portuguese?
Any form of colonisation is fundamentally wrong.

Overall, has colonisation impacted Sri Lanka negatively or
positively (including all different time periods)?
Both positively and negatively; perhaps more negatively
than positively.
Evaluation of Answers
As my grand-uncle was born in Sri Lanka just after
colonisation ended, he directly experienced its effects, and I
therefore thought him the most appropriate person to ask.
He has a relatively unbiased point of view, though one has
to take into account that, because he is Sri Lankan, there
may be unintentional bias. He answered from first-hand
experience, and from his answers it is clear that
colonisation did have positive effects, but that these were
outweighed by the negative ones, especially at the time; it
might have been better had it not happened at all.

Conclusion
Overall, colonisation has had many effects on Sri Lanka,
but only a fraction of them are long-term and still
experienced today. We know that it continues, and will
continue, to have an impact, as the periods of influence
have altered the country in lasting ways, such as the
widespread usage of English. Most of the effects
experienced today are positive, except for the lack of
certain trading goods and the potential effect of the civil
war.

Main Conclusion
Overall, it is clear that colonisation has had both positive
and negative impacts on Sri Lanka, with some
colonisations being far more beneficial to the country than
others. It has very much shaped the country's history and
still affects the present situation. There have been effects
at opposite extremes of the scale: British contributions
such as the education system and the civil service cannot
be forgotten and are still in place today, whereas the
Dutch's ruthless control over trade and their subjugation of
the people in their own country were deeply upsetting to
them. Each empire had a very different approach, but
looking at the broader picture it is possible to evaluate
whether, overall, their impact was positive or negative.
Portuguese rule had a negative impact overall, as
contributions such as agricultural registers were
outweighed by the killings and forced conversions which
also took place. The Portuguese set the other empires an
example of monopolisation and took away the people's
freedom for the first time, trapping Sri Lanka in a cycle
and disrupting the trading and cultural developments
which had been progressing well before. Dutch rule also
had an overall negative effect, although the legal system
which the Dutch implemented made Sri Lanka more
Westernised and gave it an advantage over its Asian
neighbours. They subjugated the people in their own
country, however, and the tax rates and trading
regulations imposed upon them had a severely negative
effect and made the people extremely unhappy.
British rule is more difficult to judge, as the British were
the first empire to take the people into account,
contributing electricity, the railway and the civil service
among many other things, which have improved Sri Lanka
over the last century and kept it up to speed with the rest
of the world. They improved stability and gave Sri Lanka
tea and coffee, which are still widely traded today.
Although this new global image benefited Sri Lanka in
many ways, the British overworked labourers and
maintained their hold on trade. Nevertheless, had the
British not arrived in Sri Lanka, the situation would be
worse today: the British made improvements and
compensated for the wrongdoings of the Dutch and
Portuguese. Without British intervention it is unlikely that
Sri Lanka would have progressed as quickly, as it did not
have all the necessary resources.

In pre-colonial times the country's focus was mainly on
irrigation systems and the like, and it would have
progressed naturally from there, but would most likely
have gained developments such as electricity much later.
It might therefore not have reached the state of
development it enjoys now - 0.715 on the HDI scale.
Overall, I believe that colonisation's negative effects
outweigh the positive ones, as the Sri Lankan people did
not want to be colonised and attempted to resist all three
empires. All the empires had ulterior motives and wanted
power over the country purely for trading purposes, and,
other than the British, were not much interested in
improving the country. They exploited Sri Lanka and
denied the people the right to what was truthfully theirs.
Whether or not colonisation was successful depends on the
happiness of the people, and at the time they were
unhappy - so the effects then were mostly negative. Today,
however, colonisation is having a more positive effect, with
good education and legal systems in place and a developing
country.
Although it cannot be said for certain whether or not the
civil war occurred because of colonisation, if it did, this
would be a very severe negative effect, one which would
outweigh all the positive contributions. Even setting that
aside, given the lack of freedom the people had to endure
and the clear sense that their happiness came second to
any possible gains, it is possible to conclude that Sri Lanka
has, on balance, been negatively impacted by colonisation,
if only by a small margin - the developments introduced
would still have come about, although at a much later
stage.

Jeevan Ravindran (Year 11)


_______________________________________________________

Extended Projects
The Extended Project allows A-level students to plan,
manage, realise and then review a topic in depth. Many of
the dissertations produced by Year 13 students approach,
if not match, the standard of undergraduate-level work,
and some of the best have been published in this journal.
_______________________________________________________

Is Parthenogenesis in Insects a Viable Alternative to
Sexual Reproduction?

Abstract
Ever since the birth of evolutionary biology with Darwin's
famous "On The Origin Of Species" and the acceptance of
Mendel's laws of genetics, there has been a long-upheld
belief that the only way forward in the race for species to
survive and multiply their numbers is to reproduce
sexually i.e. a mating event must take place between a
male and a female. The reasons given for the supposedly
compulsory nature of this event are rather traditional,
namely that sexual reproduction allows a mixing of genes
to create variation, plus there is no predicting exactly
which two individuals will mate with each other every
time, meaning that even the acquisition of the two precise
sets of genes necessary for the creation of a new individual
is itself random and by definition variable. This concept
prevailed despite accounts of parthenogenesis, an
alternative reproductive method by which a female
reproduces without any male aid. Even well into the 20th
century this remained the popular belief, re-affirmed by
biological theories such as Muller's Ratchet. However, as this
dissertation will emphasise, scientists of today are
beginning to re-evaluate the need for sex in light of the
emergence of many more parthenogenetic species,
especially insects, the class of organisms on which this
dissertation will principally focus. On top of this revelation
there exists a greater knowledge of the two broad
parthenogenetic mechanisms (automixis and apomixis), an

exploration of the frequently related case of polyploidy, a
questioning of whether parthenogenesis is solely associated
with the isolation of females from possible mates, a known
reason for the natural failure of the process in mammals
and documentation of variant reproductive methods
(gynogenesis, hybridogenesis and kleptogenesis). While
this dissertation only deals with insect parthenogenesis in
detail, since this is still by far the most prevalent form,
especially in a facultative state, it is important to
emphasise that many vertebrates have also now been
added to the ever-growing list of recognised parthenogens.
As for the insect species mentioned, despite previous
theories such as those above not supporting their existence,
at least for the species which are facultatively
parthenogenetic there appear to be few issues associated
with such an asexual breeding strategy, while in the short
term even obligately parthenogenetic species may thrive,
or even in the long term if they have a hybrid origin.

Introduction
Section 2.1: Initial Discovery
In the 18th century the Swiss naturalist Charles Bonnet
discovered a behaviour which has recently been found in
many places in the natural world and has left scientists
thunderstruck. Bonnet observed that aphids were
multiplying without any male input i.e. the females were
undergoing asexual reproduction. It was inconceivable that
an organism, even one as primitive as the aphid, could defy
the so-called "rules" of nature and not need a male to
propagate. Having been left thoroughly baffled, Bonnet
proceeded to contact leading scientists with news of this
amazing discovery. Unfortunately for him, he was not held
in high regard in the scientific community, owing to
accusations that his research rested on invalid and hence
worthless experiments; this latest example was treated as
no exception, and his findings were ignored for a long time.
However, the
discovery did gain some recognition in the 19th century,
while the 20th and 21st centuries saw a boom in research
focusing on this most puzzling of biological processes. By
then the process had long been given a name;
parthenogenesis.
Section 2.2: 19th Century Work
Parthenogenesis is a combination of the ancient Greek
words parthenos (virgin) and genesis (birth), thus literally
coming to mean "virgin birth". It had been given this name
by the time the famous 19th century British scientist
Richard Owen wrote a short book entitled "On
Parthenogenesis, Or The Successive Production Of
Procreating Individuals From A Single Ovum" in 1849.
Owen returned to the original parthenogen, the aphid, in
his work, describing an observation which he termed "the
alternation of generations", a phenomenon which is
genuine and is still accepted today. However, parthenogenesis was
still a fledgling of a scientific concept, leading to some

inaccuracies concerning the process. Most notably, any
form of asexual reproduction in animals was referred to as
parthenogenesis, a usage which no longer applies today
(section 2.3). The existence of the process in aphids was
still a mystery however, and led to more erroneous
theories. One was that the parthenogenetic reproductive
methods of aphids were akin to the reproductive methods
in the honeybee (section 3.3.2). Ultimately it was put down
to the superior being of God, still considered a real figure
even by professional scientists, since the revolutionary "On
The Origin Of Species" had yet to be published. There were
no such errors in another short work of the 19th century,
the 1857 book "On A True Parthenogenesis In Moths And
Bees; A Contribution To The History Of Reproduction In
Animals" by Carl Theodor Ernst von Siebold. This book
dealt with more specific instances of parthenogenesis,
mostly various moth species but also chronicled the long
quest to solve the exact reproductive method of the
honeybee (section 3.3.2), eventually arriving at an
empirically-backed answer which has since been
corroborated by modern scientists. It was clear that
parthenogenesis was gaining greater recognition within
the scientific community, but all the work on it revolved
around observing invertebrates which appeared to
reproduce in this way, with no known examples of any
parthenogenetic vertebrates. The main bulk of our
understanding of parthenogenesis was to come in the
following centuries, but not without some mysteries being
left unresolved.
Section 2.3: Work in the 20th & 21st Centuries
The more recent scientific research into parthenogenesis
has proved, by its sheer volume, that there is still a lot to
be understood about this biological wonder. Since all such
work from the current century and the one before has
involved trying to find more parthenogenetic species and to
increase our knowledge of the process, it is no use trying to
separate the two centuries in terms of their work and
discoveries. The point about the lack of parthenogenetic
vertebrates (section 2.2) no longer stands, as most of the
vertebrate groups now exhibit at least one parthenogen:
birds (class Aves), reptiles (class Reptilia), bony fish
(superclass Osteichthyes) and cartilaginous fish
(superclass Chondrichthyes) all fall into this category. The
jawless fish (superclass Agnatha) and amphibians (class
Amphibia) could yet theoretically yield a parthenogen of
their own, but the process is naturally impossible in
mammals. It was also discovered that bacteria in the
genera Wolbachia, Rickettsia and Cardinium can infect
insects, particularly wasps, forcing them to undergo
parthenogenesis when they wouldn't naturally do so,
resulting in female offspring that are also infected. It
serves as a method by which the bacteria can reproduce, as
the next bacterial generation can be passed onto the next
host generation via eggs but not via sperm, which is why
the production of male wasps would halt the bacterial line.
Asexual reproduction in animals has now been split into
three main types: as well as parthenogenesis, there exist

budding (when offspring develop as a growth on the
parent's body, then break off as in the genus Hydra or
remain attached to form colonies as in the class Anthozoa)
and fragmentation (the animal spontaneously breaks up
into pieces, with each piece giving rise to a new individual
as in the phylum Nemertea). There has also been research
into the phenomenon of polyploidy, which sometimes
accompanies parthenogenesis in certain species. Polyploidy
often leads to an inability to produce gametes and hence
reproduce sexually, but parthenogenesis can save
polyploids from extinction.
Section 2.4: Reasons for Choosing this Project
The previous sections merely summarise the fascinating
examples of parthenogenesis which are currently known,
the same examples which captured me and led to me
choosing the question I wish to pose and hopefully answer,
"Is Parthenogenesis In Insects A Viable Alternative To
Sexual Reproduction?". By "viable" I ask whether the
offspring produced by parthenogenesis can still develop
and grow with as much success as (or even with greater
success than) offspring produced sexually. There are
examples on either side of this debate to help me discuss
the argument and reach a conclusion e.g. individuals
produced in this way in the speckled cockroach (Nauphoeta
cinerea) appear to be less fit than offspring from sexual
methods (section 3.2.1) but the ant Mycocepurus smithii
has never revealed such a problem (section 3.2.2). I have a
passion for such debates in any subject, but especially in
biology. I chose this topic because of my aim to study
Biological Sciences at university, with the hope of entering
the course having already had a go at doing some
independent research about a topic and then gaining a
qualification from it. This should be more rewarding given
that my topic focuses on biological concepts which are
completely new to me (the A-Level Biology course only
mentions asexual reproduction in passing) and as such will
definitely increase my wider knowledge of the subject. The
entomological viewpoint should also serve this purpose
since the class Insecta is not the one with which I am the
best-acquainted. The argument aspect of this dissertation
will also act as invaluable preparation for encountering a
science at degree level and beyond, as science rarely
contains absolute fact; rather, it evolves to fit the most
recent research, only for it to then receive even more
challenges in the future.
Section 2.5: Key terms
Here follows a list of the key terms I will use throughout
my dissertation, adapted, unless otherwise specified, from
the "Merriam-Webster" dictionary:

Parthenogenesis - reproduction by development of an


unfertilised female gamete that occurs especially
among lower invertebrate animals.
Gamete - a mature male or female germ cell usually
possessing a haploid chromosome set.

Polyploidy - having more than two copies of each


chromosome.

Chromosome - the part of a cell that contains the


genes which control how an animal or plant grows.
Gene - a DNA sequence which controls the
inheritance and expression of one or more traits coded
for in that sequence.
Allele - one version of a gene which codes for a specific
characteristic.
Homozygous - having two identical alleles at one or
more loci.
Heterozygous - having two different alleles at one or
more loci.
Locus - the position in a chromosome of a particular
gene or allele.
Cytological - of or pertaining to a cell.
Automixis - a type of parthenogenesis where meiosis
still takes place at some point but measures also occur
such that development still begins in a diploid state.
Apomixis - a type of parthenogenesis where meiosis
does not take place and development essentially
begins through mitotic cell division.
Facultative - taking place under some conditions but
not under others.
Obligate - restricted to one particularly characteristic
mode of life.
Thelytoky - parthenogenesis in which only female
offspring are produced.
Arrhenotoky - parthenogenesis in which only male
offspring are produced.
Deuterotoky - parthenogenesis in which male and
female offspring are produced.
Meiosis - the type of cell division that reduces the
number of chromosomes in somatic cells by half.
Mitosis - the type of cell division that results in two
daughter cells identical to the original parent cell and
to each other.
Crossing Over - a process by which chromosomes
randomly exchange various pieces of genetic
information.
Evolution - the historical development of a biological
group by a change in the allele frequencies of a
population over time.
Natural Selection - the non-random survival and
reproductive success of individuals best adapted to
their particular environment.
Selection Pressure - a force that impacts negatively on
reproduction such that it is evolutionarily
advantageous to possess genes that oppose this force.
Eusociality - a form of co-operative living where there is
one reproductively-active female, several
reproductively-active males and many more
reproductively-inactive individuals, the last of which look
after the offspring.

Discussion
Section 3.1: Cytological Mechanisms
There are two broad types of parthenogenesis in terms of
the cell action that occurs, automixis and apomixis. Both
will now be discussed; the former with its many different
types and the latter as a type on its own. It is important to
note that the treatment reserved for these mechanisms in
this dissertation is a very brief one, but the genotypic
outcomes do each have their own implications. The
included diagrams show each of their respective processes
progressing from left to right. Where applicable, the set of
events on the right of the division show each process
without crossing over, while on the left of the division the
set of events is shown with crossing over taking place. A
starting heterozygous genotype for the mother is assumed.
It may be fair to say that crossing over is more likely to
occur than not (since in most cases it would have the
positive effect of increasing genetic variation), and as such
the right-hand side of each diagram is somewhat more
theoretical than actual.
Section 3.1.1: Gamete duplication. This is a form of
automixis where meiosis takes place at the beginning of
the offspring's development and there is later fusion of two
cells derived from the products of this type of cell division.
Here follows a diagram of the process:

Figure 1: Gamete Duplication


From the diagram it is clear that whether or not crossing
over occurs makes no difference to the genotype of the
offspring. A heterozygous female will give rise to
homozygous offspring in just one generation. This is
evidence that parthenogenesis does not always lead to
offspring that are identical clones of their mother, although
this point does apply elsewhere, for example with aphids
(section 3.2.1). Reduced fitness is a common hallmark of
homozygous individuals, so species employing this
automictic method should in theory be at the "mercy" of
natural selection. It is known in artificial form from
invertebrates infested with any of the afore-mentioned
types of bacteria (section 2.3). However, these bacteria do
not kill their hosts (except where males are eradicated),
because otherwise they would risk not allowing the host
enough time to develop and reach a level of fitness that can
withstand parthenogenesis. Earlier in evolutionary time, if
any of these bacteria had been too potent, there would have
been evolution towards the lower levels of potency seen
today, since these forms propagate more successfully
thanks to the increased lifespan of the host(13).
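To make the genotype outcome above concrete, the
following short Monte Carlo sketch (my own illustration;
the allele labels 'A' and 'a' and the trial count are
hypothetical) simulates gamete duplication for a
heterozygous mother: one meiotic product is kept and then
duplicated, so every offspring is homozygous, whatever
happens with crossing over.

import random
from collections import Counter

def gamete_duplication(trials=100_000):
    # One haploid meiotic product of an 'Aa' mother survives
    # and duplicates, so the offspring carries two copies of it.
    counts = Counter()
    for _ in range(trials):
        allele = random.choice("Aa")   # the surviving haploid product
        counts[allele * 2] += 1        # duplication restores diploidy
    return counts

print(gamete_duplication())  # roughly half 'AA', half 'aa'; 'Aa' never occurs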
Section 3.1.2: Terminal fusion. This is a form of automixis
where two of the cells produced by meiosis fuse together
while the other two are discarded, as opposed to gamete
duplication where three of the cells produced by meiosis
are discarded while the one that is not divides and then the
two daughter cells fuse (section 3.1.1.). An accompanying
diagram follows:

Figure 2: Terminal Fusion


From the diagram it is clear that this time the presence of
crossing over does govern the genotype of the offspring.
Without crossing over the offspring will be homozygous,
but probability dictates that half of the offspring will be
homozygous for one allele and the other half of the
offspring will be homozygous for the other allele, all
dependent on which pair of meiotic products is cast off and
which kept. If crossing over does occur then heterozygosity
is maintained in the offspring. Such a method of
parthenogenesis is used in termites and some solitary
Hymenoptera (insects from the eponymous order). Their
survival may be attributed to the fact that they will mostly
be spared homozygosity via crossing over. Despite
originating as exact copies of their mother, heterozygous
offspring may still hold the advantage over homozygous
offspring because, if one considers an individual from each
genotype, the heterozygote still possesses greater genetic
variation.
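The same style of sketch (again my own illustration, with a
single hypothetical locus) reproduces the outcomes just
described for terminal fusion: without a crossover the
fusing sister products are identical, while a crossover
between the locus and the centromere leaves the secondary
oocyte carrying one 'A' and one 'a' chromatid.

import random
from collections import Counter

def secondary_oocytes(crossover):
    # Chromatid pairs of an 'Aa' mother after replication, with an
    # optional single crossover between locus and centromere.
    pair1, pair2 = ["A", "A"], ["a", "a"]
    if crossover:
        pair1[1], pair2[1] = pair2[1], pair1[1]
    return pair1, pair2

def terminal_fusion(crossover, trials=100_000):
    counts = Counter()
    for _ in range(trials):
        oocyte = random.choice(secondary_oocytes(crossover))
        counts["".join(sorted(oocyte))] += 1  # fuse the two sister products
    return counts

print(terminal_fusion(False))  # ~half 'AA', ~half 'aa'
print(terminal_fusion(True))   # all 'Aa'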
Section 3.1.3: Central Fusion. This is a form of automixis
where again two of the cells produced by meiosis fuse
together, while the other two are discarded. However, there
is a crucial difference between central fusion and terminal
fusion. While terminal fusion involves the fusing of the two
daughter cells from one of the two intermediate cells in the
meiotic process (section 3.1.2), central fusion involves the
fusing of one daughter cell from one of the meiotic
intermediates with one of the daughter cells from the other
meiotic intermediate, while the other cell from each
intermediate is discarded. A diagram follows:

Figure 3: Central Fusion


Crossing over will make a difference in this instance,
though the diagram does not clarify this. With no crossing
over the offspring will be heterozygous in the same fashion
as their mother, but crossing over leads to only half of the
offspring being heterozygous in that way, while a quarter
of the offspring are homozygous for one allele and the
remaining quarter are homozygous for the other allele.
Some parasitic wasps can be used as examples of
organisms using central fusion. The frequency of crossing over means
that the genetically variable offspring (one half
heterozygous, the other half alternatively homozygous) are
more likely to arise, though natural selection should still
favour the heterozygotes.
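Extending the previous sketch (same hypothetical locus
and the same caveats) to central fusion, where one product
is kept from each secondary oocyte, gives exactly the
proportions stated above.

import random
from collections import Counter

def central_fusion(crossover, trials=100_000):
    counts = Counter()
    for _ in range(trials):
        pair1, pair2 = ["A", "A"], ["a", "a"]        # replicated homologues
        if crossover:                                 # optional crossover between
            pair1[1], pair2[1] = pair2[1], pair1[1]   # locus and centromere
        genotype = random.choice(pair1) + random.choice(pair2)
        counts["".join(sorted(genotype))] += 1        # one chromatid per oocyte
    return counts

print(central_fusion(False))  # all 'Aa'
print(central_fusion(True))   # ~1/2 'Aa', ~1/4 'AA', ~1/4 'aa'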
Section 3.1.4: Secondary Oocyte Fusion. This is a form of
automixis where the two cells at the intermediate meiotic
stage fuse together and divide to give two diploids, one of
which is cast off while the other is maintained. A diagram
of the process follows:

Figure 4: Secondary Oocyte Fusion


This time the diagram implies the outcomes correctly in
that crossing over will make no difference to the outcomes
of this method of cell division. However, a more detailed
analysis is required to obtain the probabilities of each
possible genotype. Two thirds of the offspring will be
heterozygous like their mother, one sixth will be
homozygous for one allele and the remaining sixth will be
homozygous for the other allele. As ever, natural selection
will favour the heterozygotes. An example species for this
method is the lichen case-bearer (Dahlica lichenella), a
type of bagworm moth. Its survival may be explained by

the variability of heterozygotes and the fact that their
inception is more probable than that of homozygotes.
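As a quick consistency check on these proportions (my own
arithmetic, not from the dissertation's sources): the
2/3 : 1/6 : 1/6 split preserves the mother's allele
frequencies, as any fair meiotic mechanism should. The
frequency of allele A among the offspring is

\[ f(A) = \tfrac{2}{3}\cdot\tfrac{1}{2} + \tfrac{1}{6}\cdot 1 + \tfrac{1}{6}\cdot 0 = \tfrac{1}{2}, \]

exactly its frequency in the heterozygous mother, and by
symmetry the same holds for the other allele.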
Section 3.1.5: Pre-meiotic Doubling. This is a type of
automixis which differs from the rest in that meiosis takes
place at the very end of the process, whilst in the others
meiosis begins the process. A germ cell (the cell that
normally leads to gamete formation) quadruples its
chromosome number by going through two doubling acts.
The resulting cell then undergoes meiosis to reduce the
chromosome number back to the germ-cell number, with
only one of the four meiotic products not being discarded.
This is illustrated in the following diagram:

Figure 5: Pre-meiotic Doubling


Crossing over in this process is restricted to identical sister
chromosomes, such that it makes no difference to the
genetic variation. Offspring start out as genetically
identical clones of their mother, with variation between
them and her solely due to mutations. An example of this
method can be found in the first grasshopper known to be
parthenogenetic, Warramaba virgo (section 3.2.2), whose
parthenogenesis produces identical clones and is therefore
an example of thelytoky.
Section 3.1.6: Apomixis. This type of parthenogenesis
cannot be split into many different types in a similar
fashion to automixis because of a lack of meiosis, meaning
that development begins with what is essentially a mitotic
cell division, a process which is then constantly repeated to
give rise to a new individual. Automixis could be split
because of where meiosis manifested itself in the chain of
events and which particular cells fused together each time.
However, the comparative simplicity of apomixis means
that it is an indivisible type in itself. The lack of meiosis
also means that there is no crossing over, such that
apomictic parthenogenesis always leads to offspring which
begin as genetically identical clones of their mother, an
effect which continues down the generations. It would be
expected that, barring a constant environment, populations
which create no genetic variation through reproduction are
entirely at the "mercy" of mutations, most of which are
harmful; the whole species would then feel the full force of
being disfavoured by natural selection, which would
inevitably lead to extinction. An example of species which
do nevertheless utilise this method is the parthenogenetic
weevils, beetles belonging to the family Curculionidae. It is
not known how they manage to survive using apomixis.
Section 3.2: Facultative or Obligate Parthenogenesis?
Section 3.2.1: Facultative Parthenogenesis. Species which
undergo this type of parthenogenesis are not entirely
dependent on it for reproduction, but use it under certain
circumstances. Most parthenogens are facultative and as
such there are many potential examples, including some
well-known ones from the insect class.
One of the most famous examples of facultative
parthenogenesis is found in the aphids (superfamily
Aphidoidea). Aphids not only undergo parthenogenesis
facultatively, but also have a clear pattern of
parthenogenesis mixed in with sexual reproduction. One
generation produces its offspring sexually, which then go
on to reproduce asexually via parthenogenesis, then sexual
reproduction takes place again and so on in a never-ending
cycle. Such a clearly defined, constant switch from
parthenogenesis to sex is sometimes known as cyclical
parthenogenesis. This method is the same as that
described by Owen in his 1849 work under the name "the
alternation of generations" (section 2.2). Eggs laid by a
fertilised female hatch in the spring. The female offspring
from these eggs wait for the bounty of plants that is
brought by summer, then they undergo thelytokous
parthenogenesis, producing a genetically identical
daughter as frequently as every 20 minutes. Daughter
clones are born pregnant, such that the mother nurtures
her granddaughters even before the birth of her daughters.
The purpose of this rapid asexual multiplication is to reap
the benefits that summer brings in terms of a plentiful food
supply. As the daughters are clones, the maximum
number of their mothers' genes is passed on down the
family line, very useful since those genes are likely to be
advantageous if they have allowed the mother to survive
long enough to reproduce (remarkable since aphids have so
many natural predators). The colony can easily expand
because half of the clones have wings. Males are born in
the autumn for the next bout of sexual reproduction. The
spread of each colony means that mating will most likely
take place between individuals with no near relatedness,
thus introducing the maximum amount of genetic variation
into the future offspring, which must survive the hostility
of winter as embryos. All the embryos are different so that
the changing environment will (probably) not claim all of
their lives before they have really begun. This inclusion of
sex is a ploy by the females to give their offspring and
hence their genes (50% in each offspring) the maximum
chance of survival. Natural selection appears not to trouble
the aphids where variation is concerned due to some use of

sex, while the clones, which are genetically no different
from their mothers, appear not to be affected too much
either, perhaps because many are taken by predators
before they can be taken by natural selection(4).
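The power of this clonal phase is easy to illustrate with a
deliberately simplified upper-bound calculation (my own
sketch; it ignores maturation time, mortality and
predation, all of which keep real numbers far lower):

def clonal_upper_bound(hours, step_minutes=20):
    # Upper bound: every female adds one daughter each step, so the
    # clone doubles per step; maturation, death and predation ignored.
    steps = hours * 60 // step_minutes
    return 2 ** steps

print(f"{clonal_upper_bound(8):,}")  # 16,777,216 descendants after 8 hours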
In contrast, another insect is not quite so fortunate when it
undergoes parthenogenesis instead of sexual reproduction.
The speckled cockroach is another insect that is a
facultative parthenogen, but in this species there is no
clear alternation between parthenogenesis and sex. The
cyclical nature of parthenogenesis in aphids has probably
been established for thousands of years and its
continuation can only be put down to the fact that its
inclusion in the reproductive strategy of that insect does
not pose any significant problems. The rare nature of
parthenogenesis in the speckled cockroach, however, might
point to this species not benefiting from the exclusion of
sex. Once it evolved sexual reproduction it may have been
difficult for parthenogenesis to arise, as sex became more
and more integral to survival of the offspring. There are
therefore far more prerequisites for parthenogenesis in this
species, unlike the aphid, where only a plentiful food
supply is needed (very likely in the summer). The speckled
cockroach females that can reproduce parthenogenetically
must be heterozygous, since the parthenogenesis is
apomictic and therefore offspring begin as genetically
identical to their mothers (any homozygous females
reproducing by parthenogenesis would expose their less
favourable genotype to the negative effects of natural
selection in more than one individual, such that their genes
would be less likely to last over time). Offspring produced
by parthenogenesis show decreased fertility and rate of
development; in other words, they are less fit than their
counterparts which were produced sexually. The products
of parthenogenesis are less likely to survive long enough to
reproduce, and as such it is more favourable for a female to
reproduce sexually. Despite parthenogenesis meaning that
more of her genes get passed on more quickly, the
likelihood of no further progress with this strategy means
it is more worth a female's while to take longer to produce
fewer offspring, each containing only half of her genes, by
first mating with a male, as the increased genetic variation
means that these offspring are more likely to survive,
reproduce and pass on those same genes. This is most
likely the reason for parthenogenesis only materialising
when a female has no male access, her strategy signifying
an act of desperation. As it happens, the colonies of these
species contain more males than females which were
produced sexually, so that parthenogenesis is even less
likely to happen. A male-biased sex ratio could therefore be
an evolutionary adaptation to try and rid the species of
parthenogenesis, since here it appears to be of no major
benefit on any level.
Section 3.2.2: Obligate Parthenogenesis. This is a form of
parthenogenesis where species can only reproduce by this
method, without any choice. Reasons may include difficulty
in forming gametes due to a certain degree of polyploidy, or
having otherwise lost the ability to reproduce sexually
after ending up in an environment whose conditions
favoured parthenogenesis, which in turn favours faster
reproduction and more copies of genes passed down the
generations. However, many believe that such species,
including the examples which follow, are too reliant on a
constant environment for their continued existence.
One of these obligate parthenogens is the afore-mentioned
ant species M. smithii. This is an insect which has a very
wide distribution across most of Latin America, and it is
very easy to see why. The ants have a colonial structure
similar to other ant species (a fertile queen, sterile workers
and offspring) but with one key difference - there are no
drones; in fact, no males exist at all in this species. It is one
of seven ant species to be thelytokous, but it is the only one
known so far with no males. It is therefore clear that in
this species parthenogenesis only ever produces females.
This rapid form of breeding has allowed the species to
spread as far as it has today, exploiting new and plentiful
resources in the process, much like aphid clones (section
3.2.1), but incessantly rather than temporarily. Due to the
unique nature of this species much study has taken place
to confirm the presence of its proposed features. This
included a continuous three-year stint of collecting
specimens and analysing them both in the wild and in
captivity. Throughout this period no males were ever found
as adults or as pupae. M. smithii was also compared with
sympatric sexual ant species to maintain the validity of the
experiments. It was found by dissection that
both M. smithii queens and queens of related sexual
species had developed ovaries and evidence for previous
egg-laying, but one key difference really confirmed the
obligate parthenogenesis of M. smithii: each of its queens
had an empty spermatheca (sperm receptacle and depot),
whereas the queens of the related sexual species had full
spermathecae. On top of this, virgin queens of the related
sexual species also had empty spermathecae but with
undeveloped ovaries and no evidence for having previously
laid any eggs. The comparisons continued where behaviour
was concerned. The colonies of the related sexual species
widened their entrances before the occurrence of a nuptial
flight, a key feature of the eusocial Hymenoptera, whereby
a swarm containing a future queen and the drones emerges
and some of the drones will fertilise the future queen which
will then proceed to found a new colony. The obligate
parthenogenesis and male absence in M. smithii of course
meant that no such entrance modification took place. Not
widening the entrance to the colony has its advantages,
such as decreasing the probability of predation or parasitic
infection. However, there are also disadvantages associated
with this obligate parthenogenesis. With offspring starting
development as identical to their mothers, the only
variation incorporated into the gene pool comes from
random mutations. Most mutations are bad, and a lack of
sex means that it is practically impossible to create enough
variation to offset their effects. This is the basis for a
biological concept known as Muller's Ratchet, which states
that individuals undergoing asexual reproduction succumb
to the effects of the deleterious mutations which they have
accumulated by not having sufficient genetic variation.
This effect may be delayed by a recent origin and a
constant environment. M. smithii has both of these - it
lives in environments that are fairly constant and only
arose recently (a clue to the latter fact is in the species'
very low genetic diversity). However, as time progresses,
more and more deleterious mutations will appear, plus
climate change may drastically alter the species'
environment, meaning that extinction will probably be
inevitable.
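A toy simulation of Muller's Ratchet (my own sketch; the
population size, mutation probability and selection
coefficient are invented purely for illustration) shows the
irreversible loss of the least-loaded mutation class in an
asexual population:

import random

def mullers_ratchet(pop_size=200, generations=301, mu=0.3, s=0.02, seed=1):
    # Each individual is just its count of deleterious mutations.
    rng = random.Random(seed)
    pop = [0] * pop_size
    for gen in range(generations):
        if gen % 50 == 0:
            print(f"gen {gen:3d}: best {min(pop):2d}, mean {sum(pop)/pop_size:.2f}")
        weights = [(1 - s) ** k for k in pop]             # fitness-weighted parents
        parents = rng.choices(pop, weights=weights, k=pop_size)
        pop = [k + (rng.random() < mu) for k in parents]  # new mutation w.p. mu

mullers_ratchet()  # the 'best' (least-loaded) class only ever ratchets upward

Because there is no recombination, once drift removes the
least-loaded class it can never be reconstituted, so the
minimum mutation count climbs step by step.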
Another insect which displays obligate parthenogenesis is
the grasshopper species Warramaba virgo, first discovered
in Australia. It too is thelytokous and so the obligate
nature of this parthenogenesis means that this is also a
species which is devoid of males. This was observed very
quickly upon its initial discovery, with no males found after
extensive searches, so that its status as an all-female
species is now almost universally accepted. As
opposed to M. smithii, this species displays surprisingly
high genetic diversity given its asexual reproductive
strategy. It is probably true to say that one reason for this
species being much more variable than M. smithii is
because of its age. It is most likely that W. virgo is much
older than M. smithii, for the simple reason that
grasshoppers as an insect group are much older than ants
as an insect group, with the difference being in the tens of
millions of years. While this reasoning is by no means
conclusive, it was established that M. smithii probably
arose only very recently, making it most likely younger
than many other ant species, let alone a species of
grasshopper which may have arisen even before ants
themselves did. This begs the
question as to why W. virgo has not succumbed to the
effects of deleterious mutations as predicted by Muller's
Ratchet. The answer may lie in the origin of this species,
which has been postulated to have been hybrid in manner.
It is a well-known fact that hybridisation is not always
successful in producing offspring, but if it is successful then
the offspring will obviously be more variable than if mating
had been intraspecific. Such a giant increase in genetic
variation may lead to heterosis, where the hybrid offspring
show fitness levels much higher than those of their parent
species. However, hybrid offspring may still be barren and
as such obligate parthenogenesis would be the only method
of reproducing and passing on copies of genes. If the hybrid
origin is correct then it may also be that the species arose
more than once in the west of Australia, so that more
variation would be thus introduced into the population.
With the high probability of greater species age in W. virgo
than in M. smithii it can be put forward that some
variability may also result from age differences between
individual W. virgo specimens and the rate of accumulation
of mutations in each individual. Older individuals are
likely to possess more mutations, while the rate of
mutation accumulation will be different for each individual
since mutations are random and therefore do not

discriminate between individuals. It follows that the older
an individual is and the faster it accumulates mutations
then the more variable it will be. This, however, raises
another question, one also asked with reference to M.
smithii, as to how the effects of Müller's Ratchet have not
been witnessed, especially in a much older species than
previously considered. The answer again probably lies in
the constant outback environment in Australia (though
the variable genes responsible for heterosis must go some
way towards "soaking up" the effects of deleterious
mutations). With this in place, the species has made full
use of its parthenogenesis and colonised much of the
country, exploiting new resources along the way. As ever, if
climate change kicks in then the species may be in deep
trouble; even with a hybrid origin and high variability the
lack of genetic recombination would still be the species'
Achilles heel in the face of environmental alteration.
Section 3.3: Thelytoky, Arrhenotoky or Deuterotoky?

Section 3.3.1: Thelytoky. This is the type of
parthenogenesis which produces exclusively female
offspring. Many insects, such as those already mentioned
(section 3.2), employ this method of parthenogenesis since
producing females is more favourable from their
"perspective" (all of their genes will be passed on to
offspring, with little change due to mutation). Thelytoky
passes on more of a mother's genes to daughters which can
carry on the process for many generations, so that few of
the genes from the mother will ever be lost.
A very organised mixture of sexual and parthenogenetic
reproduction can be found in the termite Reticulitermes
speratus. Termites are eusocial insects from the order
Blattodea but live much like their ant, bee and wasp
counterparts in the order Hymenoptera. Eusociality across
two different insect orders can be used as an example of
convergent evolution; two groups of organisms not closely
related sharing similar features due to their similar
lifestyle requirements. In this termite species a colony
begins in much the same fashion as in other termite
species - a king and queen found a colony and between
them produce all the rest of the colony members such as
workers and soldiers. However, there is a key difference in
this species which sets it apart from the rest. In all eusocial
insects a queen cannot be distinguished from a worker at
the immature stage except by the food that they are fed,
since both are produced in exactly the same sexual
manner. In R. speratus the workers are still produced
sexually but future queens arise via thelytokous
parthenogenesis and start out as genetically identical to
the current queen. The purpose of this strategy is very
clear. The workers possess the usual genetic variation
which is handy when considering that they do most of the
interacting with the external environment and as such the
species can evolve so that the workers are adapted to the
outside world. Keeping the workers at maximum fitness in
this way makes it more likely that they will continue to



provide food and thus sustain the colony as a whole (since
it is possible to model a eusocial insect colony as one big
super-organism). As for the queen, she benefits because
even more of her genes are passed on to the next
generation through cloning herself. When the queen dies
she will be replaced by one of her daughter clones who will
in all likelihood continue to maintain the colony's survival
since all the genes of her mother, including those which led
to the establishment and maintenance of a colony, will be
found in her as well. In all likelihood the original king will
still be alive (perhaps the queen dies first due to the
exertion of reproducing, especially at the high
frequency characteristic of parthenogenesis) and as such he
can mate with the new queen because genetically she is the
original queen rather than his daughter, so more
favourable genes will be passed on to the workers.
Therefore the king benefits as well since he couples his
genes with those same favourable queen genes to produce
workers which will continue to provide for him as well as
the whole colony. When the original king later dies a male
raised as a future king will take over and mate with a clone
of the original queen, increasing the genetic variation in
the workers even further, bringing even more advantages
to him and the whole colony. The fact that cloned queens
never enter the outside environment means that they are
less vulnerable to the effects of deleterious mutations,
bringing an extra benefit to them and the rest of the
colony.
Another known thelytokous insect species is the citrine
forktail (Ischnura hastata), a type of damselfly and the
only example of a parthenogen from the order Odonata.
This too is a facultative parthenogen, as is most obvious
from the fact that the bulk of the population, found in the
Galapagos Islands, the Caribbean and North America,
reproduces sexually, whilst the parthenogens come from an
isolated all-female
population in the Azores. These isolated females only
reproduce via parthenogenesis and have hence made the
most of the advantages associated with the process, such as
rapid colonisation and the maximum number of their genes
surviving down the generations. Their isolated home
provides support for the idea that parthenogenesis arises
when the alleles allowing parthenogenesis to occur
suddenly become advantageous upon female isolation from
males, where asexual reproduction becomes the only
possible breeding strategy. Scientists were of course
intrigued by this unique instance of parthenogenesis in the
Odonata and as such the citrine forktail has received much
attention in the scientific community. As with M. smithii
and W. virgo it was decided to study this population over
many years to see if any males could be found. Over five
years of study 3,000 females were reared from offspring to
adults but not a single male, removing any doubt that
thelytoky operated exclusively in this population. Advantages
were also found that went beyond rapid colonisation. In an
all-else-being-equal situation this species, as with other
thelytokous species, could produce up to twice the number
of female offspring as sexual species, so that quadruple the



number of genes get transferred down the parthenogenetic
lineage (since each offspring contains all rather than half of
its mother's genes). The parthenogens also showed greater
fertility than their sexual counterparts, suggesting that sex
is not actually very favourable in this species as it must be
associated with some very high costs. The Azores
population may also not have been adversely affected by
mutations due to another instance of a constant
environment. A constant environment means that
individuals that survive long enough to reproduce are
well adapted to their surroundings, such that throwing
away half of their genes through sexual reproduction may
be disadvantageous; thus any mutant males which ever
arose would quickly die without ever having mated. In this
species however, the benefits of parthenogenesis may
extend to the other populations outside the Azores. Sexuals
of this species were found to be reluctant to mate.
Combining this with the cost of sex means that there is a
sizeable selection pressure acting on the sexuals to become
parthenogenetic: parthenogenesis may begin with
mutations in the genes governing mating behaviour, and
these may already be manifesting themselves, so the
evolutionary leap to asexuality throughout this species
may already have begun.
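The arithmetic behind "twice the daughters, quadruple the genes" quoted above can be made explicit with a short worked example. This is a back-of-envelope sketch under simplified assumptions (equal clutch sizes and survival for both strategies, and only daughters counted as continuing the lineage); the clutch size B is a hypothetical value chosen purely for illustration:

```python
# Hypothetical clutch size, assumed equal for both strategies.
B = 10

# Sexual female: on average half the offspring are daughters, and each
# offspring carries only half of her genome.
sexual_daughter_genomes = (B / 2) * 0.5   # 2.5 maternal genome-equivalents

# Thelytokous female: every offspring is a daughter carrying her whole
# genome.
asexual_daughter_genomes = B * 1.0        # 10 maternal genome-equivalents

# Twice as many daughters, each carrying twice the maternal share:
print(asexual_daughter_genomes / sexual_daughter_genomes)  # 4.0
```

The factor of two in daughter numbers and the factor of two in genome share multiply, which is where the fourfold transmission advantage down an all-female lineage comes from.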
Section 3.3.2: Arrhenotoky. This is the type of
parthenogenesis which produces male offspring.
Arrhenotoky is rarer than thelytoky because most species
employ parthenogenesis simply to pass on the maximum
number of genes to the next generation as quickly as
possible, so producing males would be
disadvantageous. However, most eusocial Hymenoptera
employ arrhenotoky instead of thelytoky (M. smithii and
the other six similar ants are therefore exceptions rather
than rules) because it increases the positive effects of a
process known as kin selection.
By far the most famous of the eusocial Hymenoptera is the
western honeybee (Apis mellifera). Contrary to R. speratus
and other termites, the eusocial Hymenoptera do not have
a king in their colony but they do have a queen. While in
termites the king and queen breed multiple times to add to
the colony's population, the queen of a colony of eusocial
Hymenoptera fertilises eggs using sperm stored from a
single mating event with drones during the nuptial flight
from the colony in which she hatched, as described with
reference to the sexual relatives of M. smithii (section
3.2.2). This breeding strategy was first worked out in the
western honeybee in the 19th century after much
deliberation, finally putting the previously wrong theories
(including how queens and drones were given the wrong
gender labels) to bed. During this same time it was also
worked out that female eggs, be they of workers or future
queens, were fertilised, but drone eggs were not fertilised,
and so it was clear that a queen could control the hive's sex
ratio. It was later found that there was no mechanism in a
drone egg to restore a diploid number of chromosomes, so
drones hatch and live out their whole lives as haploids.
Drones are therefore very vulnerable to the effects of



deleterious mutations. Many drones are produced by a
queen so that a future queen will inject plenty of genetic
diversity into her worker offspring (since each future queen
mates with many drones from many different colonies).
Since a hive queen fertilises worker eggs with drone sperm
an unusual situation can become apparent. All drone
sperm cells are identical since they are haploid like the
drone itself, so each worker contains exactly the same set
of paternal DNA. The workers also contain half of the
queen's DNA in the normal fashion. This makes workers
100% related through their paternal DNA and 50% related
through their maternal DNA so that overall they are 75%
related if they have the same father, which is not always
the case due to the multiple mating events by the queen
(even though she ejects most of the received sperm, making
it less likely that a mixture of sperm persists); workers
with different fathers are nevertheless still 25% related
through their shared maternal DNA.
While multiple mating events can increase the genetic
diversity of offspring so that more copies of the queen's
genes are more likely to survive, the 75% relatedness
between some workers strengthens the effects of kin
selection, where an individual (or many in this and other
cases) strives to preserve the transmission of copies of its
genes down the generations through a close relative, even
if it means sacrificing its own reproductive potential.
Workers cannot breed since they are born sterile but do
look after the larvae, including future queens which are
also female and will often be 75% related to them, just as
fellow workers are, so each worker can still make a genetic
contribution without actually breeding herself.
Arrhenotoky can therefore increase co-operation, a concept
that is central to the functioning of any colony of eusocial
Hymenoptera.
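The relatedness figures above follow mechanically from haplodiploid inheritance, and can be checked with a short worked calculation. This is an illustrative sketch of the standard accounting only, not code from any of the sources used in this dissertation:

```python
# Half of a worker's diploid genome is paternal, half maternal.
PATERNAL_HALF = 0.5
MATERNAL_HALF = 0.5

# A haploid drone has a single genome copy, so all of his sperm are
# identical: full sisters share any paternal allele with probability 1.
# The diploid queen passes each daughter one of her two alleles, so
# sisters share any maternal allele with probability 0.5.
r_full_sisters = PATERNAL_HALF * 1.0 + MATERNAL_HALF * 0.5
print(r_full_sisters)   # 0.75 - workers with the same drone father

# Workers fathered by different, unrelated drones share only the
# maternal contribution.
r_half_sisters = MATERNAL_HALF * 0.5
print(r_half_sisters)   # 0.25 - workers with different fathers
```

This is why 75% relatedness between same-father workers is distinguished above from 25% between half-sisters, and why arrhenotoky strengthens kin selection in these colonies.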
Section 3.3.3: Deuterotoky. This type of parthenogenesis
allows individuals to produce mixed broods of offspring
asexually. Instances of deuterotoky are not as well
documented as thelytoky or arrhenotoky but as the next
example will illustrate, it does have advantages which are
not possible to achieve in thelytokous or arrhenotokous
species.
The example of deuterotoky used in this dissertation is
that of the wasp Biorhiza pallida. It is one of many species
of gall wasp; that is, offspring develop inside a growth on a
plant (which is sometimes the fruit) into which their
mother laid her eggs. This follows a cyclical pattern of
parthenogenesis much like the aphid, alternating between
sexual and asexual generations. Females which will
reproduce asexually are known by one of three names
depending on the sex of the offspring to which they will
give rise. Androphores only give rise to male offspring
(arrhenotoky), gynophores only give rise to female offspring
(thelytoky) and gynandrophores give rise to both male and
female offspring (deuterotoky). The life cycle begins when
the offspring hatch in their gall. First to emerge are the
haploid males, which later mate with the diploid females
upon their emergence. The offspring from this coupling are
females which then proceed to reproduce via



parthenogenesis, laying their unfertilised eggs on tree
roots. After perhaps three years of development these eggs
will hatch into wingless parthenogenetic females which
climb up the tree and lay unfertilised eggs in such
a way as to lead to the formation of a gall when the larvae
begin development. The larvae which hatch then restart
the cycle. A common feature of galls is multiple founding,
where a gall contains offspring of both genders. Such a
situation arises when one or more gynandrophores lay eggs
in a gall or when at least one gynophore and one
androphore lay eggs in the same gall. The reason for
multiple founding is a mystery. It may hint at resources
being limited, if more than one female has had to lay eggs
inside the same gall. It may reflect females protecting their
offspring to maximise the transmission of their genes: if
more offspring develop inside a gall then the gall will grow
larger, making it less likely that parasites will be able to
lay their eggs in it. Or it may simply be that females lay
across many different sites, to reduce the chances of losing
all their offspring in an unforeseen catastrophe, and by
chance sometimes lay in an already-occupied gall.
contains eggs laid by a gynandrophore then the offspring
will be siblings and so mating between these males and
females risks the negative effects of inbreeding. However,
gynandrophores are thought to be the rarest of the three
types of female and so most males will hatch either in a
gall with unrelated females or in a gall by themselves so
that they must fly to another gall to mate (which will most
likely contain unrelated females). Deuterotoky always
results in some genetic variation because of the mixture of
sexes, so it is more advantageous than thelytoky, and
always provides some females for producing the next
generation, so it is more advantageous than arrhenotoky.
This raises the question as to why deuterotoky is not more
common. In this species at least it may be to lessen the risk
of inbreeding. In other species it would still hold the same
advantages, but may be rare because the complexity of
producing mixed sexes asexually is much higher than only
producing one sex asexually and so is just very unlikely to
arise in any one species.
Section 3.4: Conclusion
Section 3.4.1: Pros Of Parthenogenesis. Asexual
reproduction of any type obviously means that organisms
do not have to exert themselves to find a suitable partner
with which to mate and by which to bear offspring, so of
course in these instances there is no such thing as the cost
of sex. Only 0.1% of animal species exhibit asexual
reproduction of any type, so even fewer are specifically
parthenogenetic insects. The citrine forktail (section 3.3.1)
serves as a useful example of such an insect, even if only
through one isolated population. However, the fact that
females from other populations were found to be reluctant
to mate and those from the Azores were more fertile means
that females of this species have paid or still pay the
profound cost of sex, such that natural selection would
favour this species becoming entirely parthenogenetic due
to the strong selection pressure imposed by that cost.



The obligately parthenogenetic grasshopper W. virgo may
be used to illustrate the advantages of parthenogenesis
where rapid colonisation is concerned (section 3.2.2). The
species has no concept of sex whatsoever because at the
moment it appears that it has managed just as well if not
better without it. It is very widespread over much of
Australia because each female can produce up to twice as
many daughters as related sexual species. Its permanent
sexual abstinence has led to many more copies of its genes
surviving, since its genes are passed on intact rather than
being recombined through sex, and practically every female
has been able to reproduce and push the species to widen
its distribution, during which time each individual will be
more likely to survive long enough to reproduce by
exploiting new resources. However, it cannot be forgotten
that part of
their success probably lies in the effects of heterosis which
have shielded them from the effects of deleterious
mutations.
The arrhenotokous western honeybee (section 3.3.2) may
not be able to favourably churn out more females like
thelytokous species, but what its parthenogenesis does lead
to is increased co-operation and stronger kin selection.
Without haploid drones mating with new diploid queens it
would be impossible for many workers and future queens
to be 75% related to each other, half as much again as
normal siblings. The workers' sterility means that they
themselves cannot breed to pass on their genes but it is in
their best interests to help raise some of their sisters which
will be future queens, since a full sister who becomes a
queen carries 75% of their genes - more than the 50% they
would pass on by reproducing themselves, while even
half-sisters still carry 25%. This co-operation can even be
extended to more serious situations
such as when an outside threat puts the whole hive in
danger and one or more workers rush to the hive's rescue,
sometimes sacrificing themselves to save it. Their efforts
will most likely not have been in vain, since they will
probably have saved two or more future queens, which will
then be more likely to breed and pass on more copies of
their genes to future generations - and, of course, a quarter
if not three quarters of these genes were found inside the
original workers. The workers sacrificed themselves for the
apparent benefit of the hive, but really (with far greater
subtlety) for the representation of some of their own genes
in the next generation.
Section 3.4.2: Cons Of Parthenogenesis. There has been a
long-upheld belief that parthenogenesis is a dead end in
evolutionary terms, particularly for obligate parthenogens.
It has been quite widely agreed that while such
parthenogenesis removes any costs of sex (section 3.4.1)
and allows rapid colonisation with exploitation of new
resources (section 3.4.1) it is mostly not sustainable in the
long term due to the theories behind Müller's Ratchet. An
exception to this rule may be W. virgo (section 3.4.1)
because of its hybrid origin, but as far as is known there is
no such lifeline for M. smithii. Its very wide geographic
distribution hides the fact that it is a species of very recent



origin, and as such it has very low genetic diversity, since
all genetic variation comes from random mutations (most
of which should be disadvantageous). Its constant
environment means that it will not feel the full force of
many of these mutations, but environmental change is now
sweeping across the planet such that no environment will
be sheltered from these climatic forces. With very little
positive genetic variation, it and other similar asexuals will
suffer all the way to probable extinction.
Parthenogenesis is clearly not always favourable even
where facultative parthenogens are concerned, as seen
with the speckled cockroach (section 3.2.1). Thelytoky in
this species results in daughters that are genetically
identical to their mothers. The implication is that a female
which has survived long enough to reproduce by any
method must possess at least decent fitness levels, so the
same should apply to her genetically identical daughters,
which could be fitter than if they had been produced
sexually. However, this most definitely does not appear to
be the case. Any products of parthenogenesis show marked
decreases in fitness, developing more slowly and being less
fertile. Perhaps parthenogenesis is not favoured in this
species because the state of the environmental conditions
does not allow for a lack of genetic variation. If females are
isolated from males then they can still attempt a virgin
birth since ultimately they are concerned with the survival
of their genes. In the unlikely event that a daughter
clone survives to reproduce herself, sexually or asexually,
she must be of desirable fitness, and as such her offspring
may inherit her ability to survive against the odds.
However, the remoteness of this situation ever coming
about means that the species overall still does not favour
parthenogenesis. As already discussed, a response to this
may be the production of more males than females so that
parthenogenesis hardly ever occurs.
To conclude, it appears that insect species which undergo
parthenogenesis have variable success rates across
different time spans, only to be expected when different
species are involved, since each species is by definition
unique. The effect of parthenogenesis on facultative insect
parthenogens, with the exception of the speckled
cockroach, appears not to be detrimental to survival of
genes or the species as a whole, since the environment is at
its most favourable when parthenogenesis occurs to exploit
the plentiful resources which accompany the favourable
conditions. These species could therefore still be successful
even in the long term. For obligate parthenogens it is a
different story altogether. Short term survival should be
achievable, but long term survival in today's ever-changing
climate is much more doubtful. W. virgo may be an
exception due to heterosis, but M. smithii does not have a
hybrid origin and so it and other non-hybrid obligate
parthenogens will struggle to adapt to environmental
change. It is therefore only fitting to decide that
parthenogenesis in insects is a viable alternative to sexual
reproduction in facultative parthenogens, but most likely



only in the short term for obligate parthenogens (except W.
virgo and other hybrids).

Literature Review
A book I used as a source for my dissertation was "On
Parthenogenesis" by Richard Owen. It was clear from
reading this book that some of the information was rather
dated. It was published in 1849 and as such the language
was difficult to understand. On top of this, parthenogenesis
was said to occur in organisms in the genus Hydra, which
is now known not to be true (they do use asexual
reproduction, but their type is called budding). It was
clearly stated that parthenogenesis was not known to have
occurred in any other vertebrate, which has also been
disproved now. Most strikingly, the mystery of
parthenogenesis in aphids was likened to reproduction in
honeybees (one mating fertilising eggs for many years; this
too is not the whole story) but was ultimately put down to
God and his superiority. Given that this book was
published ten years before Darwin's revolutionary "On The
Origin Of Species", it is not surprising that even scientists
were Creationists. Despite subsequent findings refuting
facts in this book, it does serve as an indicator of how our
understanding of parthenogenesis has evolved over time.
Given that Owen was a professional scientist who was in
direct correspondence with many others in the field, I am
sure the information presented in this book was reliable at
least for its time.
My other source in book form was "On A True
Parthenogenesis In Moths And Bees" by Carl Theodor
Ernst von Siebold. This too was an old book, published in
1857. This book was much more useful than Owen's work,
mainly due to being easier to understand. von Siebold gave
an approximate timeline for how scientists' understanding
of honeybee reproduction panned out, eventually
concluding that one of the proposed theories was correct -
that of the queen bee storing sperm after a single mating
period and releasing it to fertilise eggs which were to
become workers and new queens, or not releasing it to
leave eggs unfertilised to become drones. A series of
experiments later supported this hypothesis. I am certain
these experiments were carried out with validity in mind
especially because they supported an idea which is the
accepted one today. Other experiments were used to test
the theory of parthenogenesis in moths. von Siebold does
however disagree with Owen over aphid reproduction.
Essentially von Siebold says that aphid reproduction by
virgin females is not true parthenogenesis. It is not
surprising that there is disagreement between two
prominent scientists, nor for one of them to be wrong
despite producing other reliable work, as Owen's belief in
aphid parthenogenesis still holds.
I used the website "users.rcn.com" to find information
about the various methods of asexual reproduction in
plants as well as animals. These were meant to serve as



differences from parthenogenesis. They supported what I
previously knew about them, thus these examples are
probably reliable. As far as parthenogenesis was
concerned, aphids were mentioned as example species,
therefore backing up previous findings. The advantages of
sexual reproduction in terms of ability to adapt to
environmental change were also stressed, backing up my
previous knowledge of this area. There were also details of
genomic imprinting (the reason parthenogenesis does not
naturally occur in mammals) and forced parthenogenesis
in wasps through infection with bacteria in the genus
Wolbachia. Both facts were backed up with subsequent
research, assuring me that this website possessed reliable
information.
I used a short video clip on the "BBC Nature" website to
hopefully give a more visual representation of the process
of parthenogenesis in aphids. I consider this clip to be
reliable as it too backs up previous findings, namely the
parthenogenesis in summer and the sexual reproduction in
autumn to maintain genetic diversity in the next
generation. Also, it is highly unlikely that the BBC would
ever lie about such an innocent topic.
I read a description of parthenogenesis on "infoplease.com".
The facts supported previous findings e.g. the lack of
fertilisation, the occurrence of parthenogenesis in aphids,
that most species using this reproductive method are
insects, that arrhenotokous parthenogenesis takes place in
many eusocial insects and that parthenogenetic offspring
can be genetically identical to their mother depending on
the species. The established reliability of these facts makes
it more likely that the other facts related here and nowhere
else (that parthenogenesis was first discovered in the 18th
century and that parthenogenesis has been induced in the
laboratory in frogs and rabbits) are also likely to be true, so
the source can be judged as reliable.
I also used a paper from the journal "PLOS One", which looked at
thelytokous parthenogenesis in a species of ant with no
males. Many observations hinted at this being the case, but
detailed and rigorous experiments took place just to be
sure. The scientists also looked at a colony of ants of
another species (which reproduced in the normal eusocial
insect fashion) to compare it to the species under test.
There was a supporting reference to a particular
subspecies of honeybee which also undergoes thelytoky,
increasing the reliability of this source, as did support for
the instances of Wolbachia-induced parthenogenesis in
nature. The samples used were sufficiently large in my
opinion, so I deem the experiment and its conclusions to be
valid and the source to be reliable, especially due to it
appearing in a peer-reviewed journal.
In addition I used "The Journal Of Evolutionary Biology" to
read about a parthenogenetic species of cockroach. The
main argument was that parthenogenesis in this species
only occurred in extreme instances e.g. with no available
males. It was questioned as a favourable reproductive



mode by the fact that offspring produced in that way were
slower developers and had decreased fertility. There was
no direct mention of any experiments to back up these
ideas, but many other scientific papers were cited, so the
information is probably reliable. What is more, this journal
as with many others is peer-reviewed, so it is very difficult
if not impossible for a paper containing serious errors or
unreliable observations to be published.
I was fortunate enough to be granted access to a very
recent paper in the journal "Cytogenetic And Genome
Research", which at last explained the two main types of
parthenogenesis, automixis and apomixis. The former had
many different types, which were also individually
explained. This paper also mentioned a parthenogenetic
moth, the same one from von Siebold's work, increasing
this fact's reliability. Most of this paper focused on
polyploidy, not something I was immediately concerned
with but that I still considered important to reference
briefly in my introduction. The main conclusion drawn was
that polyploidy rather than parthenogenesis was most
important to an organism's success, a conclusion supported
by the positive correlation in one worm species between
geographic distribution and degree of polyploidy. While this is only one
example, the fact that this article was peer-reviewed before
publication means that the conclusion is probably still
valid.
I used a paper from the journal "Ecological Entomology" to
describe an instance of deuterotoky in the natural world,
specifically in a type of gall wasp. Having previously found
examples of female offspring from parthenogenesis in some
species and male offspring from parthenogenesis in other
species, it was very rewarding to find an instance of
parthenogenesis giving male and female offspring in the
same species and from the same clutch of eggs. The species
was also described as a cyclical parthenogen, where sexual
and asexual generations alternate. The same strategy is
present in aphids, making me more confident that that fact
at least was correct. As for the rest of the information, it
too is likely to be correct because this and other papers in
the journal would not have been published without peer
review, which filters out papers that fall short of
credibility and validity standards; the fact that this paper
has been published, both online and in hard-copy form,
suggests that it is both an accurate and reliable source for
my dissertation.
A further source I used was a paper from the journal
"Heredity" about thelytokous parthenogenesis in an
isolated population of a species of damselfly, the only
instance of parthenogenesis so far recorded in this group of
insects. There was mention of other groups of bacteria
besides Wolbachia which can induce parthenogenesis for
their own reproductive success, namely Rickettsia and
Cardinium. There was a reference to rotifers undergoing
parthenogenesis which backs up facts found in other
sources. There was also a detailed discussion of the
implications of parthenogenesis, including not having to



pay the cost of sex, something I will probably include in the
conclusion part of my dissertation. It was also the first
source where a gene-based description of parthenogenesis
suddenly arising was offered, a very valuable piece of
information. I feel that the methodology used to investigate
this parthenogenetic population was valid, especially
because a large sample size was used which still yielded no
males, thus increasing the reliability of the results. The
publication of this paper in a peer-reviewed journal is also
favourable because there should be very few if any
mistakes after checking by other scientists, so I therefore
feel that this source is reliable.
From the journal "Science" I used a paper about a new
mixture of parthenogenesis and sexual reproduction in a
eusocial insect colony, specifically that of a species of
termite. In this species the future queens are produced by
parthenogenesis, while the rest of the colony members are
produced via the usual sexual reproduction between the
queen and king. This novel system was deemed of great
advantage to both the queen and king for the spread of
their genes and the survival of their colony. This paper was
cited by the paper from the "Annual Review Of
Entomology" and as such the brief reference to this
breeding strategy in the latter reproduced exactly what
appears in the former, the original. This consistency of
facts makes it more likely that both papers are reliable.
The "Science" paper also mentioned the large samples used
to help reach conclusions, further increasing reliability, as
does the execution of the ubiquitous peer-review process.
I used another paper from the journal "PNAS" about a
parthenogenetic species of grasshopper, the first of its kind
to be discovered. The paper re-affirmed many of the
traditional beliefs about parthenogenesis, namely that it
was always thought of as a recipe for disaster in terms of
evolution because of throwing away genetic variation and
being exposed to the full, merciless force of natural
selection. This has been a common opening statement from
many of the sources I have used and as such it is probably
true. The paper comes from a peer-reviewed journal and so
is likely to be reliable, for its time anyway. This is because
it was written in 1976, when the species in question was
labelled as an apomictic parthenogen, when in fact this
year's "Cytogenetic And Genome Research" paper names
this species as an example of an automictic parthenogen. I
am more willing to believe the latter because of it being
more up-to-date. However, I still have overall confidence in
the former. The error mentioned would have been easy to
make, since the specific form of automixis now attributed
to it (pre-meiotic doubling) does the same as apomixis in
giving rise to offspring that are genetically identical to
their mothers, and the methodology used in the
experiments appears valid, so I still trust it as a reliable,
even if not entirely accurate, source.
I used a paper from the journal "Annual Review Of
Entomology" to look at thelytokous parthenogenesis in the
eusocial Hymenoptera. This paper provided me with some


examples of insects which used particular forms of
automixis in their parthenogenetic reproduction, as well as
expanding the number of bacteria genera which I knew
could force parthenogenesis onto their hosts (this backed
up those found in the "Heredity" paper). I am confident of
this paper's reliability and accuracy because of its
publication in a peer-reviewed journal, its recency and its
replication on the website of one of the world's leading
educational institutions, Harvard University, something
which surely would not have happened had there been any
accuracy or reliability issues.


For the purpose of defining as many of my key terms as
possible using one source, I used the online dictionary
provided by "Merriam-Webster". This was a very useful
source because despite me having to use much highly
specific vocabulary, in most cases this site was able to
return a definition which I later adapted. I consider this
source to be highly reliable via cross-referencing, since I
compared each of its definitions to the definitions given in
the other relevant sources I used and they were pretty
similar each time.


Alexandros Adamoulas (Year 13)


_______________________________________________________

To what extent has the definition of English been changed
since it has become a World Language?


"The limits of my language are the limits of my world."139

When Ludwig Wittgenstein wrote these words in 1922, he
probably never realised just how accurate they would be.
For many speakers of English today, the limits of their
language are the limits of the globe. Due to the
globalisation of business, politics and many other forms of
communication after the Second World War, it is extremely
complicated to define what English has become. There has
never been another language which has spread to so many
countries and has been learned at as fast a rate as English.

So to answer this question, it is necessary to ask if the
definition of English has changed from the beginning of its
time as a World Language to today, and if so how.

Through my interest in Far Eastern culture, I came to
realise how often American expressions and English words
are dropped into everyday conversation, and have become
part of Oriental languages. I also learned about the
English speaking manias sweeping the region, especially in
China, where English is learned with fanatical fervour at
mass meetings. I was surprised to discover the extent to
which this language was being taught across the globe, and
the speed with which new varieties of English were being
created.
I began to wonder what the definition of English was, now
that these New Englishes were associating themselves
with this one language. How could the English language at
once change and stay the same? This prompted me to
reflect on the nature of English as a World Language and
to consider how it has either been significantly altered
since becoming a global language, or if it has, contrary to
received wisdom, remained much the same.

"Language is the road map of a culture. It tells you where
its people come from and where they are going."140
As the English language grows in popularity and continues
to dominate global communications, it is interesting to
ponder the fate of the English language as a World
Language. Globalisation is rapidly shrinking the world we
live in, and as people and businesses all over the planet
become more connected, it is the English language that is
facilitating these links.
Thus our definition and understanding of English is
extremely important. This is the language of business,
travel and the media. An awareness of its significance and
liability to change is essential if English is to continue to be
used as a lingua franca.

As more and more people use English as a lingua franca, I
was also keen to focus on the social definition of the
language, to find out how people have shaped it for
themselves in the 70 years that it has been a World
Language. I wanted to find out how people were
manipulating English to give themselves a sense of
identity in a rapidly shrinking world, as well as the
problems with the cultural connotations of this
imperialistic language.
139 Wittgenstein, L, 1922. Tractatus Logico-Philosophicus. [e-book] Translated from German by C.K. Ogden. Project Gutenberg. Available through Project Gutenberg website: <http://www.gutenberg.org/files/5740/5740-pdf.pdf> [Accessed 19 December 2013].
140 Quoteland. Rita Mae Brown quotations. [online]. Available at: <http://www.quoteland.com/author/Rita-Mae-Brown-Quotes/1327/> [Accessed 19 December 2013].





According to one online dictionary, the definition of English
is "the Germanic language of the British Isles, widespread
and standard also in the US and most of the British
Commonwealth"141. However another source states an even
wider interpretation of this word, "the language of
England, now widely used in many varieties throughout
the world".142 Both of these sites also offer the opportunity
to search the same definition in a "World English
Dictionary"143 and a "US English Dictionary"144 respectively.
Therefore, with two different descriptions of English
discovered within a minute of searching the Internet, as
well as entirely separate dictionaries for variations of
English, how are we supposed to know which is the most
correct definition?

World English can be described as the various languages
which come from English, and which have subsequently
spread throughout the world, at first because of the
influence of the British Empire. It is the language now
used in countries on every continent as a language of
government, or as a unifying language in countries where
dialects are mutually unintelligible.

The English language today is not the English language of
ten years ago. It is a well-known fact that languages are
always changing and evolving, and therefore the definition
of any language at any point in time is hard to pin down.
This has been exceptionally true for the English language
in recent years, as it has become a World Language and
spread across the globe. Consequently it is now used in
many different countries and has an overwhelming number
of non-native speakers; according to English professor
William Machan, only about 25-30% of the world's
Anglophones have English as their first language.
Any attempt to define World English is a huge task. The
difficulty with trying to define a language in this way is
that perceptions of a language are very subjective. They
depend on a persons background and experience as well as
many other factors, and are intensely personal to each
individual. Thus I will aim to look not only at a linguists
definition of this phrase but also the social connotations of
it, in order to gain a full understanding of attitudes
towards World English across the Anglophone world and
beyond.
Renowned linguist David Crystal uses the phrase "New
Englishes" to describe the range of dialects of the English
language which are spoken around the world. There is a
tendency for those in Britain to believe that British
English is the global variety in use, but most people think
of the English language as American English. This is
because the United States is the most powerful country to
use English as its language, as well as the most influential
country in spreading its language to several other
continents through its entertainment industry, diplomatic
missions and business links. A global language can be
defined as one that has "develop[ed] a special role,
recognised in every country" and this is certainly true of
American English.
141 Dictionary.com, 2013. Definition of the noun English. [online]. Available at: <http://dictionary.reference.com/browse/english?s=t&ld=1173> [Accessed 19 December 2013].
142 Oxford Dictionaries, 2013. Definition of the noun English in English. [online]. Available at: <http://oxforddictionaries.com/definition/english/English?q=english> [Accessed 19 December 2013]
143 Dictionary.com, op.cit.
144 Oxford Dictionaries, op.cit.

For the purposes of this essay, I will state that English
became a World Language in 1945, at the end of the
Second World War. There could be much debate about this
estimation as it can be argued that becoming a World
Language is an ongoing process, developing year upon
year. It is no coincidence that I have chosen the same date
as when America became a superpower.
However, defining English like this does not take into
account the multiple difficulties in deciding what counts as
English. One example of this is with Singaporean English,
a dialect of English spoken in the culturally diverse
country of Singapore, which is used in government
communications and is one of the four official languages.
Yet some people call it a creole145 or even a "wrong" version
of English146, as it uses such sentences as "Too slow lah, I
find that printer"147, which would be unnatural to hear
from a British English speaker. Therefore, how different
does a language have to be from Standard English before
it is no longer classed as part of World English, or before it
is classed as a language in its own right? The above
definition is too vague to deal with these anomalies; it does
not set out any boundaries for English.
One way in which people have dealt with this issue in the
past is to pay attention to grammar rules. This allowed
people to decide what kind of language could be accepted as
English and what could not be. However recently there has
been a backlash from those who point out that grammar
does not take into account the natural evolution of
language, and may be simply a form of elitism and class
distinction. Thus, it does not seem useful to consider the
grammar of a language as the deciding factor in whether it
can be called English.
And so I would classify World English as the dialects of
English which are spread around the world, as well as
languages such as Tok Pisin, the English-derived dialect
145 Yoon Soon Chye, D, 2009. Standard English and Singlish: The Clash of Language Values in Contemporary Singapore. [pdf]. Available at: <http://www.als.asn.au/proceedings/als2009/yoongsoonchye.pdf> [Accessed 19 December 2013]
146 The Perspectivist, 2011. What is Singlish arh? [online] Available at:
<http://www.perspectivist.com/politics/what-is-singlish-arh> [Accessed 19
December 2013].
147 Leimgruber, J R E, 2011. Singapore English. [pdf]. Available at:
<http://jakobleimgruber.ch/papers/LLC.pdf> [Accessed 19 December 2013]



spoken in Papua New Guinea, which have developed from
English.
Now I must clarify why I have chosen English as the
World Language rather than another language such as
Arabic, Portuguese or Mandarin Chinese. The
aforementioned languages are international languages,
whereas English, being in the right place at the right
time, has become a global language, a lingua franca. With
numerous world groups and businesses having been set up
as well as the advent of the Internet, the planet is more
connected than ever. It is English which has become the
language of this international community. For instance, in
1995-6, 85% of the 12,500 international organisations used
English (the next most widely used was French with 49%).
Mandarin Chinese does have more native speakers than all
other languages, and this will remain the case for the
foreseeable future.148 However, despite the speculation that
it would become a new lingua franca, this has not yet
happened. Although it is becoming more and more popular
as a foreign language as China's economy grows, it has not
yet been widely adopted. Perhaps this is due to the
difficulty it presents to learners, since it does not use a
Roman alphabet and its tones are very difficult to
pronounce.
Spanish is the language with roughly as many native
speakers as English. In some parts of the United States,
certain towns now have predominantly Spanish-speaking
populations, and the language is prominent in Latin
America as well.149 It has growing influence in America and
therefore growing power as a language. Nevertheless, it is
yet to reach beyond these continents and so cannot yet be
considered a global language.
Arabic is another major language, spoken by 280,000,000
people150 and an official language in 26 countries. It is
growing, demographically, faster than any other
international language.151 Despite these figures being
impressive, there are many different dialects which can be
mutually unintelligible152. English has official language
status in 83 countries, and is spoken in 105 other
countries,153 so English is clearly still more widespread and
widely used at the moment.

148 Graddol, D, 2006. English Next: Why global English may mean the end of English as a Foreign Language. [pdf]. Available at: <http://www.britishcouncil.org/learning-research-english-next.pdf> [Accessed 19 December 2013]
149 ibid
150 Nations Online Project, 2011. Most widely spoken Languages in the World. [online]. Available at: <http://www.nationsonline.org/oneworld/most_spoken_languages.htm> [Accessed 19 December 2013]
151 Graddol, D, op.cit.
152 BBC, 2013. A Guide to Arabic - 10 facts about the Arabic language. [online]. Available at: <http://www.bbc.co.uk/languages/other/arabic/guide/facts.shtml> [Accessed 19 December 2013]
153 Nations Online Project, op.cit.



Since English has reached the status of a global language,
many academics have pointed out that English is now so
widely established that it can no longer be thought of as
owned by any single nation. Furthermore, it is obvious
that languages are not only a means of communication, but
also have history, customs and traditions embedded in
them. These factors have combined to create a need for
speakers of what are sometimes referred to as the New
Englishes to stamp their own identity on their variety of
the English language. Some see this as a great obstacle
needing to be overcome before a language can become
integrated into a society and widely used. The speakers of
New Englishes are "remaking it, domesticating it,
becoming more and more relaxed about the way they use
it"154. As 113 distinct territories across the Earth now use a
form of English in some capacity155, hundreds of millions of
people want to make the language their own. This
has led to huge changes in the scope and range of English
used, especially in relation to word formations,
word-meanings, vocabulary, collocations and idiomatic
phrases156. This could be the most important factor of all in
determining language change, as it recognises the
dependence of the shape of the language on the social
background and motives of the people who use it.
Trying to define a pidgin or a creole can be a problematic
issue, and even the eminent linguist Tom McArthur notes
that the uses of the term are "unhelpfully far from uniform,
not always precise, and where precise are generally
contentious". To outline these two terms briefly, a pidgin
is a simplified form of a language which draws vocabulary,
grammar, intonation and other linguistic traits from one
or more other local languages. Thus many of the early
pidgins were created by sailors from European countries
communicating with those from regions of Asia or
Africa. A pidgin is not spoken as a home language, but
used for essential communications. A creole is the name
which describes a pidgin that has evolved into a mother
tongue, and is spoken as a home language. It is the
established language of a group of people, and so new
generations grow up speaking it. The more a creole
becomes fixed and entrenched in a society the more likely
it is to develop into an official language.
These two types of language cause a huge amount of
contention when defining English, and they test the
boundaries of the language - depending on your point of
view, stretching or shrinking them. One well-known
example of this is Tok Pisin. Even a website for Tok Pisin
translation highlights the uncertain nature of this
language's status, describing it as both an official language
of Papua New Guinea as well as a form of Melanesian

154 Rushdie, S, 1991. Imaginary Homelands: Essays and Criticism 1981-1991. Random House.
155 McArthur, T, 1998: 216. The English Languages. Cambridge University Press.
156 Crystal, D, 2003: 146. English as a Global Language. 2nd Ed. Cambridge University Press.



Pidgin English157. English is listed separately as another
official language of this country (CIA Factbook, 2013).
This suggests that Tok Pisin and English are two separate
languages; anyone who can speak both of them would
effectively be bilingual. This affects the definition of
English, as it could dramatically reduce what some see as
the "English languages" (as noted before with the term
"The New Englishes") if we see these pidgins and creoles not as
part of the English language family, but as separate
entities in themselves. It also creates confusion between
what counts as English or not. If Tok Pisin is a different
language to English, how long until American English is
too? One famous quote states that a language is a dialect
with an army and a flag and a defence policy and an
airline158. Tok Pisin can be taken as proof of this, as in
Papua New Guinea it is this Pidgin which is said to be
widely used and understood whereas English is only
understood by 1-2% of the population (CIA Factbook,
2013).
Thus the boundaries of English are even more difficult to
define now, as there are so many new varieties of English,
and more are emerging each day. McArthur names no less
than 35 different creoles or pidgins, based on English,
which are in use across the world159.
On this factor alone it would have to be assumed that the
definition of the English language has changed, and to
some extent beyond all recognition, as it must now include
new pidgins and creoles. It has also introduced new
controversy over the boundaries of English which seem to
be expanding and contracting at the same time.
The rise of English in status and its spread across the
world has happened at the same time as many new states
have been founded, mostly as the result of the ending of
colonial rule. Crystal believes that "it is inevitable in a
post-colonial era, there should be a strong reaction against
continuing to use the language of the former colonial
power". This has meant that a number of recently
independent nations are now looking to establish their new
identities on the world stage at the same time as English
has reached the status of a language of knowledge, wealth
and power. These states are now able to choose a national
language with which they can express their identity, and
many are choosing English.
Several of these countries were once part of the British
Empire and so have a large percentage of English speakers
already. This, combined with the reputation of English, has
meant that many nations have continued to use English as
their main language, especially in those nations which

157 Tok-Pisin, 2013. Tok-Pisin Translation, Resources and Discussion. [online]. Available at: <http://www.tokpisin.com/> [Accessed 19 December 2013]
158 Rosen, B, 1994. Is English Really a Family of Languages? The International Herald Tribune, 15 Oct.
159 McArthur, T, 1998: 177. The English Languages. Cambridge University Press.



have many indigenous languages. One example of this is in
Nigeria, where there are over 500 local languages160, and
English is employed in order to avoid disputes between
ethnic groups over which language should be made official.
In addition, other newly independent countries are
adopting English simply because, thanks to the influence of
the United States, it is now the language of science,
business and the Internet. Even in
Algeria, a former French colony, it is English that is taught
as a second language in schools as opposed to French.161
This has only increased the popularity and varieties of the
language across the world.
However for certain states, English is linked to memories
of colonial rule and seen as a reminder of the former
slavery of the country. Kenyan author Ngugi wa Thiong'o
declared that Africa needs back "its economy, its politics,
its culture, its languages"162. Tanzania and Malaysia are
just two examples of countries which have disestablished
English as an official language, showing that maybe the
boundaries of English are shrinking.163 On the other hand,
as new generations grow up with no recollection of colonial
rule, countries are more likely to embrace English as
essential for business and economic development, despite
its former connotations. Furthermore, since people are not
forced to speak English, they are more motivated to learn
it; it becomes a choice rather than a chore.164
One cause of debate about the future of English is based on
the unprecedented number of non-native speakers of the
language as compared to native speakers, a group whose
total is only increasing. The British Council estimates that
out of all the speakers, 375 million have English as their
mother tongue and 375 million as their second language.
Having said that, the number of non-native speakers
seems to be rising at an incredible rate, with 750 million
people learning the language at any one time165, due to the
English learning mania sweeping Asia, as well as to the
easily accessible language-learning resources on the
Internet. These large figures demonstrate how far the
definition of English has changed because of its
globalisation. It has gone from the language of one island
to describing a huge range of people in a variety of
different countries.
Furthermore, the majority of conversations happening
across the world in English are more likely to be between

160 CIA Factbook, 2013. The World Factbook: Nigeria. [online]. Available at: <https://www.cia.gov/library/publications/the-world-factbook/geos/ni.html> [Accessed 19 December 2013]
161 Crystal, D, 2003: 126. English as a Global Language. 2nd Ed. Cambridge University Press.
162 Ngugi wa Thiong'o, 1986: xii. Decolonising the mind: The Politics of Language in African Literature. James Currey.
163 Crystal, D, 2003: 126. English as a Global Language. 2nd Ed. Cambridge University Press.
164 William Machan, T, 2013: 232. What is English and Why Should We Care? Oxford University Press.
165 British Council, 2013. Frequently Asked Questions: The English Language. [online] Available at: <http://www.britishcouncil.org/learning-faq-the-english-language.htm> [Accessed 22 December 2013]

94

SAINT OLAVES ACADEMIC JOURNAL


two non-native speakers, than native ones. Thus the
language may be subtly altered and modified each time
these conversations take place. This demonstrates how
hard it has become to maintain a standard of the
language, as so many countries have given English official
status since its inception as a World Language and the
issue arises of which standard should be used worldwide.
Another major concern about the growth in the popularity of English, which has come to light in recent years, is code-switching. This is when two or more individuals, who each speak two or more languages, converse in a mixture of these languages, switching seamlessly between them. So many more people are using English since it became a World Language that code-switching is now occurring between a huge number of languages and English. Some of these language fusions have even been given jokey names by their speakers, such as Franglais, Tex-Mex and Spanglish.166 This is seen by McArthur as a fundamental paradox within the language, and therefore within the definition of the English language: "it is monolithic and multiple at the same time".167
One language tree model which has been in use for some
time shows the Englishes having evolved from Old English
through Middle English to Modern English with very little
variation168. Recently however, some linguists have moved
to challenge this model, in order to place World English as
the version of the language which is currently spoken.169
As a result it can be seen that English has transformed so
considerably in itself since becoming a World Language,
that it has heralded an entirely new stage in the history of
the English language.
At the end of the Second World War, English was mainly
seen as the language of the Inner Circle countries: the
United Kingdom, the United States, Australia, New
Zealand, Ireland, anglophone Canada and South Africa,
and some of the Caribbean. However now English has such
a wealth of varieties around the world, including Indian,
Nigerian and Jamaican forms among many others, not to
mention the Englishes which are developing due to code-switching. Today, the language takes on the identity of the
people who are speaking it, and I do not think it will ever
go back to how it was at the beginning of its time as a
World Language.
Furthermore, the immense expansion of the English-speaking world, as well as the role of the Internet in our
society, will continue to alter how we view English in ways
which we cannot yet guess. Finally, the introduction of new
language models, as recently as the last decade,

166 Crystal, D, 2004: 29. The Language Revolution. Polity.
167 McArthur, T, 1998: 201. The English Languages. Cambridge University Press.
168 Millward, C M and Hayes, M, 2010. A Biography of the English Language. 3rd Ed. Heinle & Heinle Publishers Inc., US.
169 Graddol, D, 1997. The Future of English? [pdf] British Council. Available at: <http://www.britishcouncil.org/learning-elt-future.pdf> [Accessed 22 December 2013]

definitively demonstrates just how far and how
significantly the definition has altered, as the world now
acknowledges the importance of World English for
communication.
On the other hand, it is clear that there is a strong need for
regulation of the language, so that it can still be used as a
lingua franca and so that all speakers can still comprehend
one another. Moreover, it seems that although English has
changed in some regions beyond all recognition, these
incidents are often sensationalised by the media. This has created, to some extent, an illusion that the language is transforming much faster than it actually is. Thus although English has vastly changed since its inception as a World Language, fear of language change has meant that many people have false perceptions about how it is changing. Overall, despite the widespread change of the language, there is still extensive confusion, and there are many misconceptions, about the true extent of the changing definition of English.

Sinead O'Connor (Year 13)


_______________________________________________________

Is Quantum Mechanics
Philosophically Justified?

Abstract
In this project I will look to defend my support for the Many Worlds Theory (a branch of the Everett Interpretation) as the strongest theory that currently exists as a rationalisation of quantum theory. The aim of this project has been to identify how the breakthroughs of the last century concerning quantum mechanics have influenced our philosophical understanding of the universe. This will be done by reviewing the popular and lesser-known current theories to analyse which most clearly explain the evidence, and then analysing the philosophical implications of each of them. This shall involve looking at the works of physicists, philosophers and philosophers of physics. I have used my introduction and research review to explain the physical knowledge which is required to understand the themes I will explore. I will then begin to analyse each theory, focussing specifically on whether or not it: offers a solution to the measurement problem; provides a comprehensive explanation of how to interpret the quantum findings; and provides sufficient evidence or proof. Subsequently, I will go on to analyse how accurately I think each one explains our universe, in conjunction with the works of some of the great philosophers of our time.

Introduction
2.1 What Is Quantum Mechanics?
Quantum mechanics is how we predict the movements of particles on a sub-atomic level, as opposed to classical mechanics, which we use for objects on a macroscopic scale. Objects on so small a scale (of the order of 10⁻¹⁰ m and below) do not behave like objects as seen in the classical world, and we soon learnt that trying to apply classical methods yielded incorrect results.
Although it was originally invented to explain a world happening on a level which is difficult even to grasp conceptually, the techniques and applications that have been invented as a direct result of quantum mechanics have revolutionised the world as we know it, bringing us the Information Age. For example, quantum encryption has allowed us to securely store hugely important data on the internet without fear of it being intercepted, and the revolutionary quantum computers are able to complete certain algorithms at speeds exponentially greater than their classical predecessors.

2.2 Why do we need Quantum Mechanics?
The quantum concept was originally thought of by Max Planck in a paper on thermal radiation. He was the first physicist to suggest that energy was directly proportional to frequency, and he even proposed a constant for the equation, which became E = hf. His problem concerned black body radiation: when we heat up gases, they often release light of certain wavelengths, and Planck suggested that the energy that was released was quantised. He at first believed that this was an isolated case, and that the quantisation didn't apply to the Electromagnetic (EM) waves themselves, including light. However, it turned out that the theory was much more general than he had thought.
2.3 Light as a particle and the photon
According to classical physics, light behaves as an electromagnetic wave: a self-propagating transfer of energy. However, physicists discovered that when they shone light of a certain frequency at a sheet of metal it released an electron; later, that the energy of the light was directly proportional to its frequency; and that no electrons were released if the frequency of the light was below a certain value, no matter what the intensity of the light or for how long it was shone. According to classical theory, the energy of the light should be directly proportional to the intensity at which it was shone, and there should be no relationship between frequency and energy. To try and explain this, Albert Einstein created a radical new theory: that the light was delivering energy discretely, in quanta. These quanta associated with EM waves became known as photons, the mass-less, charge-less energy carriers, whose energy is directly proportional to the frequency of the EM wave by a constant, which became known as Planck's constant. This radically changed the previously held classical ideas, meaning that we could no longer apply classical mechanical methods to these phenomena. Here we have waves behaving as particles; this is referred to as wave-particle duality.
2.4 Wave-particle duality on a broader scale
Einstein's theory had many repercussions, and the field has since been dramatically developed. Following from the idea of wave-particle duality, one physicist named Louis de Broglie went on to suggest in his PhD dissertation that if waves had been shown to behave like particles, then particles should also behave like waves. He put forward the equation λ = h/p, which he had derived from equating Einstein's relativity equation E = mc² to the photoelectric equation E = hf. Although this was radical at the time, experimental evidence showed that electrons did indeed show wavelike properties, as they diffracted through the atoms in carbon, the spacing of which was approximately equal to the predicted wavelength of the electron, meeting the necessary conditions for diffraction. We know from this that electrons orbit atoms as waves. Following this logic, all objects behave like waves; however, for most macroscopic objects the wavelength λ is so small that the oscillation is negligible.
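The derivation can be sketched as follows, with illustrative numbers added to show the scales involved (they are not figures from the essay itself):

\[
E = mc^2 = hf = \frac{hc}{\lambda}
\quad\Longrightarrow\quad
\lambda = \frac{h}{mc},
\]

and replacing the speed of light c with the particle's speed v gives de Broglie's relation λ = h/(mv) = h/p. For an electron (mass 9.11 × 10⁻³¹ kg) travelling at 10⁶ m s⁻¹,

\[
\lambda = \frac{6.63\times10^{-34}}{9.11\times10^{-31}\times 10^{6}}
\approx 7\times10^{-10}\ \mathrm{m},
\]

which is comparable to atomic spacings, so electrons diffract through carbon; for a 0.1 kg ball at 10 m s⁻¹ the same formula gives λ ≈ 6.6 × 10⁻³⁴ m, far too small for wave behaviour ever to be observed.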
2.5 Wave functions and Probabilities
To further develop the theory of electrons behaving like waves, scientists repeated the famous double slit experiment usually conducted with light; however, instead of light they used electron beams and, as they expected, they achieved similar results, in that there were bands of high and low concentration due to constructive and destructive interference of the electron waves. The experiment was then repeated firing single electrons one at a time at the slits, and surprising results occurred, in that the same lines of high and low concentration were produced even though there could be no interference occurring. This led on to the idea that the wave of an electron simply represents the probability of its being in any position at one time.

2.6 The Philosophical Problem
The problem with this way of thinking about location is that mathematically the electron should be in all the stated positions at the same time. This is not the case if we are thinking about a single electron, and therefore philosophers of physics must work to try to make the current laws of physics fit with our understanding of the world; and as physics is empirical, this means adapting our own understanding. The three main ways of doing this are Dynamical Collapse, Bohmian Mechanics and the Everett Interpretation, which I will be explaining in more detail later.
To understand these three options, we first have to explore some quantum mechanics theory, which I will do in this section.


Research Review
3.1 Electron Double Slit Experiment
One of the first proofs of De Broglie's predictions came from an adaptation of the well-known double slit experiment, in which light diffracts through two narrow slits to form a diffraction pattern of bright spots and dark spots formed from constructive and destructive interference of the light waves.170 Richard Feynman famously described a version of this experiment in which, instead of lasers producing light waves, single electrons are fired at the screen (a version later realised in the laboratory). What is found is that the electrons create a pattern which gradually builds up to an exact replica of the interference pattern of light. This means the electrons must also act as waves in order to interfere to produce this pattern.
3.2 Schrödinger's Equation
Erwin Schrödinger used the single electron double slit experiment to suggest that the wave idea of the electrons only represents the probability of each electron being in that location, and that this should be known as the wave function (ψ). The probability of the electron being in each location can be found by solving Schrödinger's (time-independent) equation:

-(ħ²/2m) d²ψ/dx² + U(x)ψ(x) = Eψ(x).171

Solving this equation in the three spatial dimensions gives three of the four quantum numbers172 (one associated with each dimension), which are needed to give information about the state of the particle. However, there are a variety of solutions, so in order to get exact values for each of these numbers it is necessary to separate the equation into functions of r, θ and φ (representing the wave function in each of these dimensions); hence the wave function can be represented as a product of these: ψ(r, θ, φ) = R(r)Θ(θ)Φ(φ).

170 This is when two peaks combine to form a greater peak, which produces bright light, or a peak and a trough cancel, which produces no light. This leads to a series of bright and dark spots.
171 Where U(x) is the potential energy, E represents the system energy and x represents the location.

We extend this equation using the principle of linearity, which says that, for a differential equation, if f and g are both solutions then any linear combination af + bg is also a solution. Each of the above equations has a variety of solutions, which can be represented as a complex combination, each solution with an individual coefficient, e.g. af + bg + ch + di etc. The probability of each of the solutions is found by squaring the modulus of its coefficient, once the coefficients have been normalised so that these probabilities sum to 1.
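As a concrete illustration, with coefficients chosen purely for simplicity: for a normalised state built from two solutions f and g,

\[
\psi = af + bg, \qquad |a|^2 + |b|^2 = 1,
\]

taking a = 3/5 and b = 4/5 gives outcome probabilities |a|² = 0.36 and |b|² = 0.64, which sum to 1 as probabilities must.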
3.3 Energy Levels (n)
The radial173 part of the equation solves for n, the energy level of the electron. Following the discrete theme of quantum mechanics, electrons are arranged in energy levels around the nucleus. Electrons will first occupy the ground state, the lowest energy level (at -13.6 eV in hydrogen), as this is where they are most stable. As more electrons are added they will occupy the higher energy levels, since the number of electrons each shell can hold grows as 2n². Also, when electrons gain energy they can be excited to higher energy levels. The principal quantum number, n, represents the energy level of the electron, with n = 1 representing the ground state and n = 2 representing the next highest energy level, etc.

172 Each particle has 4 quantum numbers which define it and are unique to that particle. These are: the principal quantum number (n), which tells the energy level of the electron; the subsidiary quantum number (l), which tells about the angular momentum of the electron; the magnetic quantum number (m), which tells about the orientation of the electron cloud with respect to a magnetic field; and the spin quantum number (s), which tells about the direction of rotation of the electron about its axis.
173 That is, R(r) = Np(r)e^(-kr), where R(r) is rewritten in terms of p(r), a polynomial in r whose degree depends on n, and k is a positive constant.
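For reference, the standard hydrogen-atom values behind the -13.6 eV figure quoted above are

\[
E_n = -\frac{13.6\ \mathrm{eV}}{n^2}:
\qquad E_1 = -13.6\ \mathrm{eV},\quad
E_2 = -3.4\ \mathrm{eV},\quad
E_3 \approx -1.5\ \mathrm{eV},
\]

while the shell capacities 2n² give 2, 8 and 18 electrons for n = 1, 2 and 3 respectively.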
3.4 Orbitals, Magnetic Quantum Number & Pauli's Exclusion Principle
The solution for the function of θ gives the shape of the orbital. The orbital is the name given to the shape of the wave that the electron follows in orbiting the atom. There are four different types of orbital, each with a different shape, and in this model they are represented by different values of l.
The magnetic quantum number ml is given by solving the function of φ. This gives the orientation of the orbital, i.e. whether it is along the x, y or z axis. The fourth quantum number comes from the intrinsic angular momentum of the electron. This can be thought of as how it rotates compared to the magnetic field.
In 1925 Wolfgang Pauli proposed the idea that no two electrons (or fermions) can have 4 identical quantum numbers; this became known as the Pauli exclusion principle and explains the filling of the atomic orbitals in the order seen on the periodic table, as otherwise each electron would go to the lowest energy level.
3.5 Probabilistic Interpretation
An important point to make about quantum physics is that when working with wave functions, you can only ever obtain the probability that a particle will be in a certain location at any given time. The wave function is often of the form of a sine or cosine (as it is the solution of a differential equation), and therefore its boundaries are theoretically infinite. Therefore physicists need to normalise the wave function to give values in the form of probabilities, which are familiar to us. This is done first by taking the square of the modulus of the wave function; this is because it can often be imaginary or negative, and a real positive value is required, as a probability is only ever this form of number. It is then necessary to integrate this value over all space (between the limits -∞ and ∞) and scale the wave function so that the total probability is 1; the expectation value, the mean location at which one would find the electron, is then found by integrating ψ*xψ between the same limits.
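A standard textbook case makes this concrete: for a particle in a box of length L, with ψ(x) = A sin(nπx/L) inside the box, normalisation requires

\[
\int_0^L |A|^2 \sin^2\!\left(\frac{n\pi x}{L}\right) dx
= |A|^2\,\frac{L}{2} = 1
\quad\Longrightarrow\quad
A = \sqrt{2/L},
\]

and the expectation value

\[
\langle x \rangle = \int_0^L \psi^*\, x\, \psi\, dx = \frac{L}{2}
\]

is the centre of the box, as the symmetry of the problem suggests.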

3.6 The Heisenberg Uncertainty Principle
Another very important concept is the Uncertainty Principle. Qualitatively, this is the impossibility of exactly knowing both the momentum and the position of a particle: the more accurately we know one, the less accurately we know the other. Quantitatively it can be shown that

Δx Δp ≥ ħ/2.174

Applied to a macroscopic object this gives negligible uncertainty.
We can never know both these quantities exactly because, in order to observe a particle, we must use some form of EM wave to detect it. By doing so, we make a photon incident on it, altering its momentum, and therefore we are not detecting it accurately. To detect the position of a particle more accurately we need to use EM waves of shorter wavelength, but the photons in them then have higher energy and disturb the particle's momentum by a greater amount; EM waves of longer wavelength disturb the particle less, but give a less precise fix on its position.
The uncertainty principle can also be written as

ΔE Δt ≥ ħ/2.

This implies that particles with moderate energies can be created so long as they have a very short lifespan, and is hugely useful in physics.175
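To see why the macroscopic case is negligible, consider illustrative numbers: confining a particle to Δx = 10⁻¹⁰ m, roughly the size of an atom, gives

\[
\Delta p \ge \frac{\hbar}{2\,\Delta x} \approx 5\times10^{-25}\ \mathrm{kg\,m\,s^{-1}},
\]

which for an electron is a velocity uncertainty of order 10⁵ m s⁻¹, whereas for a 1 kg object it is a velocity uncertainty of only about 5 × 10⁻²⁵ m s⁻¹, far below anything measurable.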
3.7 Philosophical Rationalisation
Problems begin to arise in the area of quantum physics when we consider that it is difficult to define the area of physics which quantum mechanics addresses. There are also philosophical issues which arise when we consider the mathematics of the particle. Mathematically, every physical theory consists of a state space and a dynamics. The former represents the different possible states of the system at time t0, whereas the dynamics represents how the system evolves as t increases. When this is related to the wave function, a particle in state x means that it is certain that the particle will be found at x; however, a particle can also be in a state x + y, as it exists in a superposition of possible states.
Max Born suggested that the coefficient preceding each term simply represented a probability of finding the particle at that point, and that this could be calculated by squaring the modulus of the coefficient. The problems which arose from this were the problems of interference: the ideas x and y would interfere, and therefore impact on each other, thereby altering the calculated probability of their location.
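The interference problem can be made explicit in standard notation: for a superposition ψ = aψₓ + bψᵧ, the probability density is

\[
|\psi|^2 = |a|^2|\psi_x|^2 + |b|^2|\psi_y|^2
+ 2\,\mathrm{Re}\!\left(a^* b\, \psi_x^* \psi_y\right),
\]

and it is the final cross-term which represents the interference: the two components do not simply add as independent probabilities.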

174 Where Δp is the uncertainty in momentum and Δx is the uncertainty in displacement.
175 Although this is irrelevant to my project, it has the most fascinating repercussions for particle physics, as it means that particles with very high energies can exist for tiny time periods. This offers an explanation for why we don't see the many new particles discovered in the Large Hadron Collider in daily life: they are all high-energy particles which exist only for tiny fractions of a second.

3.8 Measurement Problem (or macro-objectification problem)
This problem is defined as the unresolved problem of how we can rationalise the wave function collapse; meaning, why is one definite value observable when, from a mathematical point of view, they should all be true? The wave function collapse is the evolution of the wave function into its individual eigenstates, each of which allows us to ascribe an observable value at the instant of seeing the particle. This is essentially asking how we can break up x + y into x and y, to allow us to get a definite solution for each one, giving it an exact state as we observe it.
Now we have addressed the basics of quantum mechanics theory and analysed the philosophical problem in a little more detail, we can return to an analysis of the current theories to resolve the issue of the electron existing in many positions at once.
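Written in conventional symbols, a measurement takes

\[
\psi = \alpha\,x + \beta\,y
\;\longrightarrow\;
\begin{cases}
x & \text{with probability } |\alpha|^2,\\
y & \text{with probability } |\beta|^2,
\end{cases}
\]

and the difficulty is that the Schrödinger equation alone, being linear and deterministic, never produces such a jump.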

3.9 Collapse Theories (Dynamical Reduction Program)
The idea is essentially that in a period of time, a particle which exists as a superposition of states (i.e. x + y) has a certain probability of collapsing into x or y. The theory states that this collapse becomes ever more likely as the number of particles within the system increases; therefore we don't experience superposition in macroscopic objects the way we do in smaller ones.
There are also two ways in which this collapse can occur: firstly, it may be brought on by a measurement, or secondly, it is simply an evolutionary decay of an isolated system.
3.9.1 How does the Collapse Theory accommodate quantum mechanics?
The theory accommodates quantum mechanics because it suggests that when we observe a particle it is in only one state, as opposed to a superposition of many. The predictions it makes are relatively close to the true values obtained.
John von Neumann was able to prove the possibility of a wave function collapse as described; however, it has never been proved exclusively. It must also be noted that his proof is based on experimental evidence from the 1930s, which is contradicted by much present-day research. Many theories have gone on to use this interpretation to formulate theories of quantum mechanics; however, there are also many which disregard it.
3.10 Bohmian Mechanics
Bohmian Mechanics states that each particle has a definite position, so it adds a set of n extra hidden variables to quantum mechanics, where each variable represents the exact location of a particle in a system with n variables. They evolve according to the guiding equation, in which r represents the set of n variables giving the possible positions at time t.
3.10.1 How does Bohmian Mechanics accommodate quantum mechanics?
Bohmian Mechanics accommodates quantum mechanics because it supplies a probabilistic interpretation which fits observation fairly accurately but still allows us to understand the location of the electron. The guiding equation predicts the possible trajectories of the particles, e.g. for the two slit experiment the trajectories are as in Image 1.

[Image 1: predicted particle trajectories for the two slit experiment]
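For reference, the standard de Broglie-Bohm form of the guiding equation is

\[
\frac{dQ_k}{dt} = \frac{\hbar}{m_k}\,
\mathrm{Im}\!\left(\frac{\psi^*\,\nabla_k \psi}{\psi^*\,\psi}\right)
(Q_1,\dots,Q_n,t),
\]

where Q₁, ..., Qₙ are the hidden particle positions (the r variables above) and ψ is the ordinary wave function evolving under the Schrödinger equation.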

3.11 The Everett Interpretation
The most famed philosophical interpretation of quantum mechanics is the Everett interpretation, or many worlds theory. This states that there are superpositions for all objects, including macroscopic ones, but that they represent a multiplicity as opposed to an indeterminacy. States like x + y represent two independent systems, one of which we observe and the other of which is occurring in a perpendicular universe which we can't ever observe or measure.
3.11.1 How does the Everett Interpretation accommodate quantum mechanics?
The Everett Interpretation accommodates quantum mechanics by explaining why we only observe probabilistic states in sub-atomic particles but observe definite states in macroscopic objects. It states that all possible states are simultaneously occurring, just in different dimensions on which we have no impact.
I will now go on to assess in greater detail the validity of each of these theories.
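In the notation used earlier, the branching can be sketched schematically: measurement correlates the system with the observer instead of collapsing it,

\[
(\alpha\,x + \beta\,y)\otimes O_{\mathrm{ready}}
\;\longrightarrow\;
\alpha\,(x\otimes O_{\mathrm{sees}\ x})
+ \beta\,(y\otimes O_{\mathrm{sees}\ y}),
\]

so both outcomes persist, each in its own branch, and no term of the superposition ever disappears.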

Discussion
4.1 Defining the Philosophy of Quantum Mechanics
There is a wide variety of definitions of the philosophy of quantum mechanics, each of which can lead to slightly different interpretations of how to approach it. I found the most satisfying definition to be: the method of rationalising our philosophical understanding of the universe with our understanding of quantum mechanical properties, so that they can directly be applied to a world of which we have a practical understanding.

In order to understand to what extent quantum mechanics can describe our universe, we must first understand the way in which we are able to experience the phenomena within it. Defining the structure of our universe as the mathematical model of a quantum mechanical system, events176 without an independent observer are considered to exist in a superposition of all possible outcomes. Thus, each event is indeterministic. However, in the world we experience a definite result for every physical event. This presents a paradox between that which physics says to be true and our own most basic understanding.
4.1.1 Kant, Schopenhauer and Idealism
One form of rationalisation of this is an idea proposed by the philosopher Immanuel Kant: that we don't interact with things in themselves; we only encounter the phenomena, each person then forming their own representation.177 He went on to suggest that there exist two universes, the phenomenal world (the one which we perceive) and the noumenal (the true world, independent of experience). This means that although there is a strong correlation between the two worlds, we can have only a superficial understanding of the noumenal world, as we only interact with the phenomenal. This idea is known as transcendental idealism.
This idea has been analysed and reviewed by many philosophers. Arthur Schopenhauer went on to develop it, suggesting that as humans we have created a logical world for ourselves because we are logical beings. He begins his most famous book, The World as Will and Representation, with the lines "The world is my idea: this is a truth which holds good for everything... though man alone can bring it into reflective and abstract consciousness" (Schopenhauer, 1818), in which he praises Kant's presentation of the ideal and the real as distinct; however, he also identifies what he considers to be major flaws in the Kantian logic.
It was later proposed by philosophers of science that we are able to apply mathematics to physical structures due to the rationality of human beings being reflected in the rationality of the universe. This is an interesting example of symmetry within the universe, and it is almost to say that what we experience is rational because that is the only way in which we would be able to comprehend it. The eminent physicist Paul Davies developed the idea that "life and mind are etched deep into the cosmos" (Davies, 2001) and has gone on to explore the idea that the universe is ultimately a creation of our minds.

176 Event in this context refers to a single occurrence of a process.
177 A good way of understanding this is to think of how bats interact with the universe. They are able to detect frequencies of waves that humans cannot perceive; hence they experience a completely different phenomenon to us, yet one which is no less valid.

4.1.2 Idealism and Quantum Mechanics - A Probabilistic Universe?
Idealist views such as these provide a fascinating contrast to the deterministic way in which humans interact with the world, as we saw previously in 4.1.1. They have also proved invaluable in trying to come to terms with the implications of results involving quantum particles, and there is an element of idealism in almost any theory which tackles the philosophical rationalisation of quantum mechanics.
The main objection is the idea of entering into a universe governed by laws of probability, which seems counterintuitive to humans, for whom each event must be definite. This is often seen as going against the idea of rational events, as it is assumed that, since we have definite interactions with things, the world should be definite to complement this. However, it could be argued that although our actions are definite, before carrying them out there are near infinite possibilities as to what action we could carry out; this idea is most definitely reflected in quantum mechanics.
4.2 The Copenhagen Interpretation
The Copenhagen interpretation is fundamentally a collapse theory; it interprets quantum superposition to mean that the particle simultaneously exists in all possible states before collapsing to a definite state. This collapse is triggered by the observation of the particle, which causes instantaneous collapse, thus altering the state.
4.2.1 Problems of Spontaneous Collapse
The concept of evolving wave functions is already familiar within quantum mechanics, as the famous Schrödinger equation lies at the heart of the whole theory. However, problems arise with spontaneous collapse theories, as the idea isn't one which is well formulated or explained, due to many issues concerning the nature of the system.
4.2.2 No Need to Alter Mathematical Theory
The Copenhagen Interpretation works well with the metaphysics, as it means that our current physical model is valid and doesn't need to be adapted: the calculations it makes are fundamentally correct. This makes it favourable to physicists, as it requires minimal adaptation of the current quantum mechanical model; it fundamentally states that we can use the basis of what we have, and just think about it slightly differently.
4.2.3 Bohr-Sommerfeld - Microscopic Concepts in a Macroscopic World
One mainly disregarded criticism of the Copenhagen interpretation is that it overturns the Bohr-Sommerfeld core model of atomic structure178, which was still in application during the time in which quantum mechanics was being discovered. However, the development of the new quantum mechanics saw the introduction of concepts which have no classical parallel, such as the Pauli Exclusion Principle179 and the concept of spin.
The general consensus has since changed, and the Bohr-Sommerfeld model of atomic structure is now seen to be insufficient, as there are many phenomena which it fails to address; however, it still serves to present an interesting philosophical point on the validity of concepts without macroscopic parallels.
This point is generally resolved through the use of macroscopic examples of such processes, showing that although there are no direct comparisons, the ideas exist in the macroscopic world. The most famous is the Schrödinger's cat thought experiment, which demonstrates how an unobserved object occupies a variety of states at once, as in quantum superposition.

178 This states that any microscopic quantity must have a parallel in the macroscopic world.
4.2.4 The Universal Observer
The foundation of this theory is that definite events are only possible under observation. As we live in a completely deterministic macroscopic universe, this implies that there must always be a subject. In cases which haven't currently been observed by humans, this creates the case for an "eye of providence" to allow all objects to occupy a definite state, an idea which has yet to be proven either way.
There have also been extensions of this: without the existence of an omnipresent being, before there were people in the universe events still took place without an observer, and the wave function continued to collapse; hence the observations that are currently being made, whether by human or by machine, must be influencing the events happening in the past. This way of thinking completely alters our current perception of reality, and also conflicts with Einstein's theory of special relativity, as it implies events or information can travel at a speed much faster than that of light, which is taken to be the universal constant. In addition, it implies that the dimensions of time are not linear, as we experience them, which is a difficult concept for humans to grapple with.
Many physicists and philosophers disagree with this interpretation, stating that the third party observer needs only to be thought of as having the function of "registering decisions" (Heisenberg, 1958); thus the issue returns once more to the measurement problem, considering a classical observer in a quantum world.
This has led to criticism by physicists such as Steven Weinberg, arguing that the theory itself states that the observer must too be quantum. Essentially he stated that it offers no solution to the measurement problem, deriving probabilistic laws from deterministic theories with no true solution nor explanation of how exactly they work.

179 That no two particles may have all 4 of the same quantum numbers.
4.2.5 A Probabilistic Universe (Once Again)
There are also philosophical implications of the stochastic universe presented in this model. The idea of a random universe appears to many as a philosophical quandary, in which human beings have no purpose and it is impossible to know anything with certainty. Looking at the Gaussian curve, however (on which virtually all the possible methods of collapse occur), we know that roughly 68% of all possible results occur within one standard deviation of the most likely event, and virtually all occur within three standard deviations; hence although the world may fundamentally be random, in practical terms all the possibilities occur within a very small range.
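For reference, the precise values for a normally distributed quantity X with mean μ and standard deviation σ are

\[
P(|X-\mu|\le\sigma)\approx 0.683,\qquad
P(|X-\mu|\le 2\sigma)\approx 0.954,\qquad
P(|X-\mu|\le 3\sigma)\approx 0.997.
\]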
4.2.6 A Complete Theory?
It also assumes quantum mechanics to be complete, which was heavily disputed by Einstein, Podolsky and Rosen in the EPR paradox.180 The completeness of quantum mechanics has led to theories of its own concerning the philosophy of quantum mechanics, and is not something which has as yet been proved; thus it is not a valid basis for a theory.
4.3 The Everett Interpretation
The Everett Interpretation is the one which is most widely accepted by physicists. This is because it allows for the existence of all the quantum states: according to this theory, each quantum state occurs simultaneously in a different universe, each of which is perpendicular to the others, thus showing how each state can exist.
4.3.1 A Sensible Newtonian Universe
The Everett Interpretation comprehensively explains the validity of Newtonian physics: the time taken for decoherence is inversely related to the size of the object, hence for a macroscopic object it becomes so short that no superposition can be perceived by the naked eye - thus these objects appear to remain stable. It also allows us to continue developing the theory without fear of running into philosophical obstacles, as we can be sure that the fundamental principles agree with each other.
4.3.2 What is Infinite Probability? ... And Where Are the Infinite Universes?
One of the main problems with this interpretation is that it requires each event to happen an infinite number of times in perpendicular universes. This makes it very difficult to apply probability to any outcome without employing set theory. Furthermore, as explained earlier, this is one of the key issues being explained: the idea that the square of the coefficient represents its probability of occurring. Consequently, it becomes impossible to apply probability to an outcome, and we are left in a universe which is neither deterministic nor probabilistic.
This idea can be interpreted in many ways, and the physicist Henry Stapp thought of it as such: that if the universe has been evolving since the big bang, then it would have become such a "smeared-out cloud of a continuum of different possibilities" (Stapp, 2002) that the Earth would no longer have a well-defined position, nor would any of its components.
However, it is widely believed that the theory of decoherence, as conceived and developed by Zeh and Zurek in the 1970s, shows that the same mechanism that is responsible for the suppression of interference in the quantum realm is also applicable to macroscopic objects. Decoherence shows that a natural basis will form which prevents us from experiencing branches which involve indeterminate macroscopic objects. This creates a strong case for the Everett interpretation, as it solves this problem for the many worlds theory; however, it doesn't solve the measurement problem when combined with collapse theory or hidden variable theory.

180 This can be explained as follows: due to particle pair production, if an electron and a positron are produced they become entangled, meaning that the properties of one influence the properties of the other. However, this means that at the moment a property of one becomes definite, the property of the other becomes definite according to the Copenhagen Interpretation. Therefore the information between them both seemingly travels at a speed greater than the speed of light, which violates Einstein's theory of relativity.
4.3.3 A Brave New World
Philosophically, many people find this theory difficult to deal with because it requires a complete overhaul of many of the key principles which we apply daily. Firstly, it requires a complete change in our definition of a world. The accepted definition of the world is "the totality of (macroscopic) objects in a definite classically described state", and when considering the Everett interpretation it's necessary to be more rigorous and empirical with the definition, in order to assign it meaning in a mathematical context.
Following this definition, it has since been understood that macroscopic objects are merely made up of a collection of smaller particles; hence it seems obvious that an adapted definition of a world would be as a collection of all the macroscopic objects and hence all the smaller particles. However, according to quantum theory, these smaller particles exist in a superposition of states, whereas a particle in a world must exist in a definite state.
It has also previously been asserted that the past, present and future of a world is unique and confined within that world, whereas under the Everett interpretation this cannot be the case, as each moment of the present in one world represents an infinite number of worlds in the future.

4.3.4 Philosophically Impossible, Yet Structurally Simple
However, the overriding support for the Many Worlds Theory is in its structural simplicity. It only requires the full application of quantum mechanics as it stands, and it provides a deterministic theory which both allows for our realist interactions with phenomena and uses idealism to explain why each person's experience is unique; meaning that it doesn't predict we should experience the world differently. In addition, this interpretation, although it may be non-local, doesn't require there to be action at a distance in the way Bohmian mechanics does. A key lauded feature of the Everett interpretation is that it makes it possible to consider the universe as complete, without requiring an external observer; this is often seen in quantum computing, where there is an issue of parallel processing on the same computers, which can be considered similarly to the idea of a multiverse of different outcomes.
4.3.5 Or Have We Given Up?
On the other hand, the theory has received criticism based around the claim that it "gives up trying to explain things" (Steane, 1999). Steane himself believed that Schrödinger's cat was unfeasible, as it ultimately yields the result that the cat is either dead or alive; therefore, in deterministic terms, it never actually exists in a superposition of states.
4.4 Hidden Variable Theory (HVT)
4.4.1 Deterministic After All?
The main attraction of hidden variable theory is the reduced dependency on the probabilistic interpretation of quantum mechanics, which some scientists believe to be inconclusive. Einstein was the most famous supporter of the theory, as he believed any laws which relied on an indeterminate universe were not valid, and HVT is the most widespread deterministic theory.
Another main attraction of HVT is that the metaphysics of classical physics is maintained, i.e. the founding principles governing Newtonian physics would remain unaffected; furthermore, we can use the same logic - the basic philosophical principles of quantum mechanics remain unchanged and we don't have to radically alter our understanding of science.
4.4.2 God Doesn't Play Dice
Many have shared Einstein's view that God "doesn't play dice", and therefore they believed that statistical laws only showed that the results were valid for certain particles, and didn't show that this was applicable in all cases, i.e. there was no proof offered that this was indicative of a general result.
Following the famous paper asking whether quantum mechanics gives a complete description of our universe, which gave rise to the EPR paradox, some scientists took it as shown that it does not, due to the contradictions raised between conservation laws and special relativity.
4.4.3 An Issue of Locality
Although it is asserted by many that HVT offers a solution to the EPR paradox, this raises its own issues. Firstly, following the work of John Stewart Bell and the experiments it inspired, it was conclusively shown that it is impossible to have a local181 hidden variable theory. This provided hard evidence that quantum mechanics worked in a way radically different to the assumptions that we have about the universe as we know it; Einstein had even assumed that any theory would need to be local.
This being said, following the work of David Bohm in developing De Broglie's early theories, non-local theories were developed which were able to predict the trajectory of the double-slit electron experiment to incredible accuracy; however, this is done through a major shift in our version of reality, yet a version that has become accepted by some scientists as the true representation.
4.4.4 It's Not Ideal But...
The main boast of HVT is that it is the only realist182 interpretation of quantum mechanics, through which it earns automatic sympathy from both scientists and realist-sympathetic philosophers, as it can be directly applied to our universe without the need for an additional mental rationalisation. Moreover, as each wave function of a system is complete (Passon, 2006), each measurement is definite and thus there is no need to consider the measurement problem using this method. This adds to support for this theory being the simplest of all.
4.4.5 Who Created The Laws Of Nature?
However, the main criticism is that, for a theory which argues that the laws governing the outcome of all events are inherent within the universe, all the possible interpretations which have been found have been contrived by humans, as opposed to being derived naturally or from first principles. For example, the most developed and well accepted theory in this field is the aforementioned De Broglie-Bohm theory. This was originally formulated by De Broglie, and then Bohm extrapolated these principles so as to produce results as close as possible to experimental values. It is an indication of the confidence that Bohm had in his own theory that he himself was trying to prove the possibility of the existence of such variables, as opposed to finding an ultimate theory.

181 The principle of locality states that an object is influenced directly only by its immediate surroundings.
182 The principle of realism is a philosophical one stating that there is a version of reality completely independent of mind and dependent on matter alone. It is the opposite of idealism, the idea which was explored earlier.

4.4.6 The Importance of Being Elegant
In addition to this, many feel it is too inelegant. It makes each aspect of the already mind-bendingly confusing quantum mechanics even more complicated - for example, adding the guiding equation on top of the Schrödinger equation. However, elegance after all is just a matter of taste, and there are those who feel that the natural description of all movement through definite equations is the most beautiful thing in existence.
4.4.7 A Theory In Its Own Right?
Another well circulated objection is that Bohmian mechanics is simply a reformulation of standard quantum mechanics and not a theory in its own right. Many believe that it offers a completely different account of what is going on merely by manipulating formulae. However, this view is disputed by those who affirm that Bohmian Mechanics merely adds clarity to the confusion which is quantum mechanics, by explaining it in a deterministic fashion.
4.4.8 Wait... Are We Quantum or Classical?
However, another key issue is that of the measurement problem. Bohmian mechanics requires us to consider the observer classically and the object quantum mechanically. This raises the issue of when to switch from one to the other; as Sir Roger Penrose asserted, a measure of scale is required for defining when "classical-like behaviour begins to take over from small-scale quantum activity" (Penrose, 2005), which he believed was not adequately done by Bohmian mechanics. He was also one of those who questioned the validity of classical behaviour under quantum mechanics, which both he and Leggett believed was illogical and didn't accurately explain classical behaviour; however, these fears are generally refuted.
In summary, although Bohmian Mechanics is a useful tool for helping to explore the nature of quantum mechanics, until it is able to offer a comprehensive explanation of the whole of quantum phenomena it cannot reasonably be called the philosophical justification of quantum mechanics.

Conclusion
4.4.9 Main Discoveries
Having discussed the merits and pitfalls of each theory, it becomes necessary to address directly the question of whether any theory provides a justification for quantum mechanics. In my opinion, the only viable theory is the many worlds approach seen in the Everett theory, for its ability to allow for all quantum states simultaneously, where the other theories all insist on manipulating the mathematics as opposed to taking it at face value. I believe that each of the problems it raises has a logical solution, or can be solved by re-evaluating the way in which we relate to our universe - for example, we may say that we exist in
this universe, as opposed to any of the other infinite alternatives, because it is the only universe in which it is possible for us to exist. I also think that we can overcome the proposed preferred basis problem by understanding that it is unnecessary for us to think of the events occurring as a probabilistic outcome. Although physicists have found this the easiest interpretation of the quantum world, if each event occurs an infinite number of times this becomes a meaningless concept. The theory is also strengthened by the proposed theory of decoherence, which helps to solve the measurement problem, as it means that we would be unable to experience events which didn't conform to reality as we know it.


4.4.10 How My Ideas Have Developed
At the beginning of the project, although I had a vague appreciation of what quantum mechanics meant for our universe, this was much closer to the typical view that it changes the way in which we think about our universe being deterministic, whereas this theory shows it to be probabilistic. However, throughout the course of the project I have learnt that the implications are actually much broader than that. I have also begun to understand philosophy as a pure science; one which may even be superior to physics and mathematics. This is because it must be carried out making no assumptions at all, only using logic: in philosophy we can never assume that which is proposed by another to be the truth, whereas this is a key part of the development of all other sciences.
4.4.11 Summary
To summarise, although the many worlds interpretation is still being developed, it beautifully intertwines the development of physics and philosophy into joint disciplines which together help to explain the universe to those who live within it.

Louise Selway (Year 13)


_______________________________________________________

Should we continue to screen for breast cancer in the UK?

Abstract
Screening was introduced in the UK in 1986 so that breast cancers could be diagnosed in asymptomatic women, and hence earlier than in women who presented with symptoms. Women between the ages of 50 and 70 are currently screened every three years, primarily using X-ray mammography. However, there are limitations to this technology which result in false negatives and false positives.
The subjective nature of histopathological analyses of breast tissue biopsies means that cells may meet the definition of cancer without being consequential (resulting in symptoms in the lifetime of the woman). In such instances of uncertainty, the current standard practice is to begin treatment as if the woman has invasive cancer, even though this may not be the case. However, this results in overdiagnosis, because if no invasive cancer is present then iatrogenic problems and harms occur. This makes cause-specific mortality rates less useful in determining the effectiveness of screening than all-cause mortality rates, which account for overdiagnosis.
Improvements in mortality rates can also be attributed to improvements in therapy since the implementation of screening. A decline in the quality of life of overdiagnosed patients is another negative consequence of overtreatment, but this is very difficult to measure quantitatively. Statistics about breast cancer screening trials can be misinterpreted and result in biases (length, lead-time and selection) which inflate survival statistics in favour of screening.
Thus it is clear that in hindsight screening has been far less effective than was first thought, and there is very little reliable evidence to suggest it has had a significantly positive impact once overdiagnosis is factored in. However, more evidence about overdiagnosis from randomised trials is required to say with more confidence that screening is indeed more harmful than it is helpful. Hence screening should continue until enough evidence about overdiagnosis and overtreatment is accumulated to conclusively say that the benefits of screening are not significant enough to warrant a national programme.

Introduction
Invasive Cancer, or a malignant neoplasm, is the greatest
cause of death in the developed world and the second
greatest cause of death in the developing world. Although
statistically heart disease kills more people, cancer is
more feared from a cultural perspective. It is a silent
killer, a relentless and insidious enemy, the emperor of
all maladies. Deaths from cancer in 1990 were 5.8 million
worldwide and rates have consistently increased since.
This is because cancer is largely a disease of
immunosenescence due to accumulated mutations over
time and so the greatest risk factor for cancer is age.
Demographic changes in the form of ageing populations,
particularly in the west, make cancer a highly prevalent
and significant disease.
Thus due to its increasing prevalence, the study of cancer
is central to modern medicine. Furthermore, cancer poses a
uniquely immense psychological battle for patients and
their families to endure. Their fears of death and arguably
worse, the unknown, are very real burdens and this adds
another dimension to coping with cancer for both patients
and doctors. Neither silver bullets nor blanket cures have
been found, but in its 4000-year history treatments for
cancer have improved with great efficacy and creativity.
Science has always been a passion of mine because I relish
the intellectual challenge of demystifying the world around
me in a logical fashion, and I appreciate the great
importance of the applications of science in the context of
society and the community. It is the latter of these two
factors that has inspired me to pursue Medicine at
university.
Breast cancer screening is a topic of much debate in the
medical profession and NHS. Although screening is
recognised as a highly effective method of early detection,
some argue that more harm is being caused than good. Not
only will I be educating myself in the physiology of the
most common cancer amongst women, but I will also gain
an insight into the workings of a central public health
programme. First, I hope to establish why screening was
implemented in the UK. Then I will consider why we
should continue to screen for breast cancer, or rather, are
there any reasons to stop? These are the decisive questions
which have split, and continue to split, opinion amongst medical
professionals and researchers. This polarity of opinion is
something which interests me and an extended research
report will provide a good platform from which relevant
information can be scrutinised to answer these key
questions. Next, I must research how screening can
continue to be conducted effectively. The most important
factors under consideration are age, frequency and
screening methods. In other words, who should we screen?
How frequently should we conduct screening? What

screening methods should we use and when? Much of the
discussion will try to determine whether screening does
more good than harm and if this question is resolved the
remainder of the discussion will explore specifically how
screening should be conducted.

Research Review
My research materials fall into two main categories. The
first category details the history of breast cancer diagnoses,
symptoms and treatments over time. By studying
the observations and constantly evolving theories of the
disease over time, we can understand why screening
methods eventually became employed in healthcare
systems. This can broadly be considered an attempt to
validate the question. The second category details
conflicting bodies of evidence which are responsible for the
polarity of opinion regarding breast cancer screening. This
includes prominent trials and studies on the subject and a
brief history of NHS policies.
The history of cancer and breast cancer in particular is a
fascinating tale and a wonderful example of scientific
progress through building on the work of those before. It is
important to understand that a truly multidisciplinary
and niche approach was not adopted toward breast cancer
until the 20th century when surgical and technological
advances became especially rapid. It is therefore more
appropriate to separate the history of the disease into
cancer in general and breast cancer specifically.
3.1 - History of Cancer
The most ancient known descriptions of cancer describe
eight cases of breast tumours or ulcers that were treated
with cauterisation. The original papyrus document was
written in 3000BC in Egypt183. In 400 B.C. Hippocrates,
known today as the father of medicine, proposed the
Humoral Theory of Medicine, which states that the body is
composed of four fluids, or humours184. These include the
blood, phlegm, yellow bile and black bile. Any imbalance of

183 The Emperor of All Maladies by Siddhartha Mukherjee - a biography of cancer, Fourth Estate Paperback Edition 2011, pages 193-201. Arguably the foremost biography of cancer in recent times and certainly the most critically acclaimed, The Emperor of All Maladies is invaluable in tracking cancer's progress through history. It won the Guardian's First Book Award and the Pulitzer Prize for non-fiction in 2011. The book has been subject to the most possible scrutiny from medical professionals and writers, and it has emerged an immensely well respected piece of medical journalism. Mukherjee is himself an experienced oncologist, so he writes not only with a scientific clarity of thought but an appreciation of the subtleties of clinical practice too. Having been published only two years ago, and considering the breadth of knowledge he explores, the book is an excellent resource for this dissertation.
184 News-Medical.net at <http://www.news-medical.net/health/History-of-Breast-Cancer.aspx>. On their website, they state their aim is to segment, profile and distribute medical news to the widest possible audience of potential beneficiaries worldwide and to provide a forum for ideas, debate and learning. Although its open nature allows contributors of all levels to contribute, the website is frequently checked and edited by a highly qualified team of people. These include natural scientists, doctors, toxicologists and chemists. Their academic profiles and qualifications are provided, assuring me that they are a reliable source of information.
these fluids was thought to cause disease and he attributed
cancer to an excess of black bile. Hippocrates was the first to use the word "carcinoma" to describe tumours, and hence the term "cancer" was coined. In 168 A.D. Galen, a
Roman physician who also believed in the Humoral Theory
of Medicine, thought cancer to be curable in early stages
and that advanced tumours should be operated upon either
by cutting around the affected area or by cauterisation.
Paul of Aegina, one of the most prominent Byzantine
physicians, noticed breast and uterine cancer to be the
most common in 657 A.D. He recommended the removal of
tumours in the breast as opposed to cauterisation. Moses
Maimonides then wrote in 1190 that excising and
uprooting the entire tumour and its surroundings up to the
point of healthy tissue is effective unless the tumour contains large vessels or happens to be situated in close proximity to a major organ. John Hunter developed this idea when, in 1750, he postulated that cancers could be removed if they remained localised to nearby tissues. He supported Stahl and Hofman's lymph theory of cancer: that cancer is composed of fermenting lymph of differing pH and density. Nearly 90 years later, and a
decade after Recamier first recognised the idea of
metastasis, Muller - a German pathologist - began to
establish pathological histology as an independent branch
of science in 1838. He demonstrated that cancer was
composed of cells, although he thought that cancerous cells
arose from undifferentiated cells between regular tissue. In
1889 Stephen Paget proposed his "seed and soil" theory of
cancer. He analysed over 1000 autopsy records of women
who had breast cancer and found that the patterns of
metastasis were not random. Thus, he proposed that
tumour cells (the seeds) have a specific affinity for specific
organs (the soil), and metastasis would only result if the
seed and soil were compatible. In 1895, oncology was
revolutionised when Wilhelm Röntgen discovered X-rays,
making the detection of tumours in the body much easier
and non-invasive. Then in 1939, Huggins discovered that
hormones were necessary for the growth of certain cancers
through his research on androgen levels and prostate
cancer in dogs. This laid the groundwork for hormone
therapy for many cancers including breast cancer. In 1976, Harold E. Varmus and J. Michael Bishop discovered the first cellular oncogene and, a decade later, the first tumour
suppressor gene was isolated. This gene was also one of the
first associated with an inherited form of cancer.
3.1.1 - History of Breast Cancer
The ideas of Le Dran and Le Cat (leading French
physicians) - that surgical removal of breast tumours
would be effective provided infected lymph nodes in the armpits were removed too - influenced surgeons of the late 19th century. In 1890 William Halsted, the first Professor of Surgery at Johns Hopkins,
began performing radical mastectomies (removal of the
entire breast, the muscles in the front of the chest, and the
lymphatic system of the breast) to treat breast cancer. This
period of surgical oncology was often criticised for its
paranoia of metastasis. The psychological urge of the patient and surgeon to remove the foreign cells from the body often surpassed the basic objective of improving the patient's health. Hence the surgeries were often far more invasive and radical than was necessary. Often, only lumpectomies (in which a lump of the breast is removed) were required. The development of antiseptics, anaesthesia and
blood transfusions during this time had made survival
after a surgery more realistic and this contributed to the
apparent alacrity to operate. Radical mastectomies, despite
their highly traumatic and disfiguring nature, would
become the gold standard for treating breast cancer for
over half a century. During this time however, a number of
further discoveries and theories occurred that would
greatly augment knowledge about breast cancer and
improve treatment options as a result. In the 1890s, an
adventurous Scottish surgeon named George Beatson became intrigued by the inextricable link between the
ovaries and the breast. He had learnt that removing
ovaries from cows altered their capacity to lactate. Upon
removing the ovaries of three breast cancer patients, he
found that the tumours shrank.
3.2 - The Efficacy and Practicality of Screening
3.2.1 - The Forrest Report, 1986
In 1985, Mr Kenneth Clarke, the Minister of Health,
convened an expert committee chaired by Professor Sir
Patrick Forrest and known as the Forrest Committee to
report on whether screening for breast cancer should
commence in the UK185. By 1986, it was well-known that
breast cancer was the commonest form of cancer amongst
women in the UK, with approximately 24,000 new cases
and 15,000 deaths annually. The report outlined the
principles of screening, screening methods and procedure
as well as subsequent assessments and treatments. The
premise for which the report argued was that early detection substantially reduces mortality rates. So the hypothesis was that "the value of early detection by mass population screening is best tested by observing whether, in well-conducted controlled trials, fewer women offered screening compared to those not offered screening die at a given age from breast cancer."
The committee presented its report to ministers in 1986 and concluded that "screening by mammography can lead to prolongation of life for women aged 50 and over. There is a convincing case on clinical grounds for a change in UK policy on the provision of mammographic facilities and the screening of symptom-less women." It also concluded that
the necessary back-up services would need to be provided
to assess the abnormalities detected at screening. A
mammogram is a scan of the breast tissue using X-rays,
used mainly in an attempt to detect cancerous cells. The
NHS Breast Screening Programme (NHSBSP) was
185 Department of Health. Breast Cancer Screening: Report to the Health Ministers of England, Wales, Scotland and Northern Ireland by a Working Group Chaired by Sir Patrick Forrest. London, HMSO, 1986.
established in March 1987 and began inviting women in 1988, aiming to offer routine mammographic screening to each woman in the UK aged 50-64 once every three years. The Forrest Report changed the face of breast cancer care in the UK and its significance is still strongly felt today. It summarises the ethics, research and practicalities behind the implementation of breast screening in the UK, thereby providing a benchmark against which the implications of recent developments can be measured.
3.2.2 - The 2006 NHS Breast Screening Programme Review

A review, published in February of 2006 by the NHSBSP (NHS Breast Screening Programme), gives a very useful timeline of breast cancer screening since its implementation in March 1987186. The Advisory Committee
on Breast Cancer Screening was set up in 1986 to advise
ministers and the Department of Health on the
development and effectiveness of the breast screening
programme. In 1991, a review of the evidence gathered in the Forrest Report five years earlier remained supportive of continued screening for those aged 50-64, and by this point the NHSBSP was inviting over one million women a year for screening. In 1994 the first results on
interval cancers (cancers diagnosed in the period between
screens) were published and they were disappointingly
high. This is indicative of poorer quality screening, because
such diagnoses should have been made at the previous
screen. This issue was resolved following a pathology audit
later that year, when it was shown that two-view mammographic screening was more effective than the one-view screening system already in place. Optical density - the logarithmic ratio of the radiation falling upon the tissue to the radiation transmitted through the tissue - was also standardised because more evidence about optimal optical densities had arisen. In 1998 breast examinations
were no longer deemed suitable as a technique for
screening when randomised controlled studies found that it
was not effective in preventing death, and actually caused
harm through needless biopsies and surgery. The 2006
Review recounts how in 2000, it was estimated that there was only a 6% reduction in breast cancer mortality in women aged 55-69 that was attributable to the NHSBSP. Analysts concluded the full effects were yet to be seen and that the remaining decline in mortality was due to improved treatment methods. As a result, in 2000 the upper age limit was extended to include women up to 70 years old.
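For reference, the optical density mentioned above has a standard definition; the formula below is not part of the review itself and simply restates the bracketed description in symbols, with $I_0$ the radiation incident on the tissue and $I_t$ the radiation transmitted through it:

    $\mathrm{OD} = \log_{10}\left( I_0 / I_t \right)$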
The review discusses trends in breast cancer incidence and
treatment, reduction in mortality resulting from
mammographic screening, comparisons of breast cancers
diagnosed at screening and the symptomatic service,
screening methods, women at younger and older ages and
at high risk of breast cancer and ultimately benefits versus
186 NHS Breast Screening Programme 61: Screening for breast cancer in England: Past and future. Published February 2006 at http://www.cancerscreening.nhs.uk/breastscreen/publications/nhsbsp61.pdf
risks.187 The Review touches upon the fundamental issues of screening and evaluates them with the support of numerous case-control and cohort studies published in the two decades that preceded it. In its Executive Summary, the review highlights: "There may come a time when mortality rates are so low that the absolute number of lives saved by breast screening becomes smaller and smaller, to the point where screening may no longer be necessary. For the moment, however, this is not a realisable possibility." The justification of this
conclusion is what I am most interested in exploring.
Ostensibly, the case for increased breast cancer screening
is clear. Cancers detected earlier are easier to excise or to
treat with radiotherapy, chemotherapy etc. Earlier
diagnosis means a lower likelihood of metastases, which
are the major causes of fatalities in cancers of all kinds. So who would argue against increased screening? Unfortunately this picture, seductive though it is, is just too simplistic and obscures a bitter controversy that has raged across the medical literature but which barely registers in public consciousness.
3.2.3 - Nordic Cochrane Review and Peter Gøtzsche

A Cochrane Review entitled "Screening for breast cancer" was published in November of 2012. This review is the
most recent addition to the body of research (the most
prominent papers have been published by Cochrane) that
supports the idea that breast screening does more harm
than good.188 Peter Gøtzsche, a Danish medical researcher
and Professor of Clinical Research Design and Analysis, is
the director of the Nordic Cochrane Centre. As a prominent
sceptic of the value of breast screening in the general
population, he expresses his views in Mammography
Screening: Truth, Lies and Controversy.189 As an expert in
clinical trial design and results analysis he came to the
topic with no real experience of breast oncology,
chemotherapy or surgery. However it was from this
position of independence that he looked at the data from
the studies that had been performed on breast screening

187 The Structure of the NHS in England by NHS Choices, found at http://www.nhs.uk/NHSEngland/thenhs/about/Pages/nhsstructure.aspx
188 Cochrane Collaboration Screening for Breast Cancer Trials Review at http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD001877.pub5/abstract - The Cochrane Collaboration is an international network of more than 31,000 dedicated people from over 120 countries. They work together to support well-informed decisions about health care, by preparing, updating, and promoting the accessibility of Cochrane Reviews - over 5,000 so far, published online in the Cochrane Database of Systematic Reviews, part of The Cochrane Library. They also prepare the largest collection of records of randomised controlled trials in the world, called CENTRAL, published as part of The Cochrane Library. Their work is internationally recognised as the benchmark for high quality information about the effectiveness of health care. However, it must also be noted that papers with conflicting conclusions will arise and that, as with all organisations that publish research, the views of one are not necessarily concordant with the views of other contributors in the same field.
189 Mammography Screening: Truth, Lies and Controversy [Paperback] by Peter C. Gøtzsche, M.D. (Author), Iona Heath (Foreword), Fran Visco (Foreword).
and decided, based on the evidence that he and his
colleagues uncovered, that far from being an unalloyed
good, there were real and significant harms being
perpetrated on women taking part in breast cancer
screening programmes. In analysing the data, Gøtzsche focuses on overall all-cause mortality. He argues that while some women might be saved by early diagnosis and treatment, others will die from over-diagnosis and the results of over-treatment.190 In their latest advice, Gøtzsche and his co-workers state that screening "produces patients with breast cancer from among healthy women who would never have developed symptoms of breast cancer. Treatment of these healthy women increases their risk of dying, e.g. from heart disease and cancer."
3.2.4 - Conclusion
The literature review indicates the diversity of sources
employed, the wealth of knowledge which has been
accumulated over time and the polarity of opinion present
on the subject. This is expected given the significance and
complexity of the issue. A number of further secondary
sources will be utilised throughout the dissertation where
some of the ideas raised by the research already discussed are scrutinised, developed and honed.

Discussion
Having summarised the extensive research upon which my
dissertation is based, it is now appropriate to outline the
structure of the discussion. Firstly, I will set up the debate
by discussing, in greater scientific detail, the physiology
and development of breast cancer. This is because
researching the physiology of breast cancer is necessary to effectively evaluate the value of breast screening. Next, I
will confront the heart of the issue which is whether or not
mammographic screening is effective enough in early
diagnosis to justify its large-scale presence in the NHS.
Here, I will firstly consider the benefits of screening in the
context of the natural progression of breast cancer. I will
then explore the notion of overdiagnosis before evaluating
recent and significant pieces of literature, some of which
conclude that screening should continue and others of
which argue the contrary. Following the debate, I will
discuss whether the breast screening programme should be
extended, scrapped or modified.
4.1 - Pathophysiology of breast cancer
The most common forms of breast cancer in women originate in the inner lining of the milk ducts or lobules that supply the ducts with milk before it is transferred to the nipple.191 Cancer which forms in the milk ducts is known as ductal carcinoma whereas cancer which forms in the lobules is known as lobular carcinoma. Very rarely, cancer (known here as sarcoma) forms in the connective tissue of the breast, which forms muscle, fat and blood vessels.
190 Center for Medical Consumers at http://medicalconsumers.org/2012/03/31/book-review-mammography-screening-truth-lies-and-controversy-2/ - On their website it says that the "Center for Medical Consumers is a non profit 501(c)3 advocacy organisation, founded in 1976. In our 36 years of existence, we have never received funding from the pharmaceutical or medical device industries." The lack of motives to express a bias (with regard to Gøtzsche's views of screening) reassures me that the source is reliable. The fact that the article stresses the importance of being open to both sides of the debate is indicative that prejudice toward the value of screening is minimal if not absent.

Fig. 1 (left) and Fig. 2 (right)

4.1.1 - Genetic alterations


Fundamentally, cancer is a disease caused by the failure to
correctly regulate tissue growth. For a normal cell to become cancerous, the protein signals which regulate mitosis and differentiation - coded by a very small proportion of the total genes in each cell - must be affected.
The affected regulatory genes can be divided into two broad
groups. Oncogenes promote cell growth and reproduction.
Tumour suppressor genes inhibit cell division and
survival. Malignancy can occur through the formation of
new oncogenes, through the inappropriate over-expression
of normal oncogenes, or by the under-expression or
disabling of tumour suppressor genes. Typically, changes
in many genes are required to transform a normal cell into
a cancer cell, which is why cancer often takes a long time to
develop.
Genetic changes can occur at different levels and by
different mechanisms. Although the gain or loss of an entire chromosome can occur through errors in mitosis, mutations - changes in the nucleotide sequence of DNA - are more common. Large-scale mutations involve the loss or gain of a fraction of a chromosome, as occurs when a cell gains many copies (often 20 or more) of a small chromosomal locus containing one or more oncogenes and adjacent genetic material. Generally, cells that divide are at a much higher risk of developing mutations than cells which don't divide (e.g. neurons). This is why cancer is
particularly common in breast, skin, colon and uterine
tissues. Small-scale mutations include point mutations,
deletions, and insertions, which may occur in the promoter
region of a gene or may occur in the gene's coding sequence,
both of which alter gene expression (how genes are
transcribed and then translated into proteins). Disruption of a single gene may also result from integration of genomic material from a DNA virus or retrovirus,
191 Breast Cancer by Encyclopaedia Britannica, found [Online] at http://www.britannica.com/EBchecked/topic/78533/breast-cancer
resulting in the expression of viral oncogenes in the
affected cell and its descendants.
Replication of the enormous amount of data contained
within the DNA of living cells inevitably results in some
errors or mutations. Complex error correction and
prevention is built into the process, and safeguards the cell
against cancer. If a significant error occurs, the damaged
cell can "self-destruct" through programmed cell death,
known as apoptosis. If the error control processes fail, then
the mutations will survive and be passed along to daughter
cells via normal mitotic division.
4.1.2 - Epigenetic alterations
Classically, cancer has been viewed as a set of diseases
that are driven by progressive genetic abnormalities that
include mutations in tumour-suppressor genes and
oncogenes, and chromosomal abnormalities. However, it
has become apparent that cancer is also driven by
epigenetic changes.
Epigenetic alterations refer to functionally relevant
modifications to the genome that do not involve a change in
the nucleotide sequence. Examples of such modifications
include DNA methylation and histone modifications (histones being the proteins around which DNA is wrapped) as well as changes in chromosomal architecture (caused by inappropriate expression of proteins). Each of these epigenetic alterations serves to regulate gene expression without altering the underlying DNA sequence. These changes may persist through cell division for multiple generations, and can be considered epimutations (equivalent to mutations).
4.1.3 - How do tumours develop?
The errors which cause cancer are self-amplifying and
compounding because a mutation in the error-correcting
machinery of a cell might cause that cell and its daughter cells to accumulate errors more rapidly. Following a mutation which causes the cell to divide very frequently, the cell initially continues to look normal. This stage is called hyperplasia, and further damage leads to dysplasia, where cells begin to look abnormal in shape and orientation too. Cells become less responsive to surrounding cells and to body signals trying to halt proliferation. If cells continue to grow abnormally then carcinoma in situ (cancer at the site) develops, in which the tumour cells are still confined by the basement membrane - the capsule which restricts the tumour. Once tumour cells break through the basement membrane, the cancer becomes invasive; this can be life-threatening if it occurs in vital organs and, crucially, it has the potential to metastasise (spread through the blood and lymph nodes).
Metastatic cancer (secondary) is where cells from the
original tumour are able to re-establish themselves
elsewhere in the body to form new tumours.
4.1.4 - Susceptibility genes
Approximately 5% of breast cancers are related to the
inheritance of a genetic susceptibility. The two most
important genes here are believed to be the BRCA1 and
BRCA2 genes. These are tumour suppressor genes involved
in the repair of replication errors, and so an inherited mutation in one of these genes is believed to impair the structure of the proteins which correct mistakes, in turn reducing their efficacy.
4.1.5 - The influence of environmental stimuli
Some environments make errors more likely to arise and
propagate. Such environments include the presence of
disruptive substances called carcinogens, repeated physical
injury, heat, ionising radiation, and hypoxia (deprivation of
oxygen).
4.1.6 - Summary
All in all, cancer is a result of genetic alterations in the
regulation of cell growth, which sparks a chain reaction
that leads to more serious errors, each progressively
allowing the cell to escape the controls that limit normal
tissue growth. These alterations are caused by DNA
mutations in oncogenes and tumour suppressor genes as
well as by epigenetic deficiencies in DNA repair. These
epigenetic alterations often result in further mutations in
oncogenes and tumour suppressor genes.
4.2 - The benefits of screening
This section deals with the positives of mammographic
screening and, although it also provides further scientific explanation of breast cancer, this is now done in the context of screening, unlike before. The discussion moves
from the reasoning behind the principle of screening to the
benefits of screening in light of improved cancer therapies.
4.2.1 - Stages and corresponding treatments of breast cancer192
The stages of breast cancer convey the size of the tumour
and the extent of progression in the patient. Stage 1 is
where the tumour is very small and remains almost entirely in situ (it is possible for a very small tumour to have spread and for the cancer to still be classed as stage 1). The tumour is between 1mm and 20mm. Since the tumour is small, it is
treated with a lumpectomy or partial mastectomy where
only the specific area which has been affected is removed
from the breast.

192 Number Stages of Breast Cancer by Cancer Research UK [Online] found at http://www.cancerresearchuk.org/cancer-help/type/breast-cancer/treatment/number-stages-of-breast-cancer. Last updated October 2012.
Fig. 3
Stage 2 is where the cancer is slightly larger (between
20mm and 50mm) and has spread to a few nearby lymph
nodes.

Fig. 6, Fig. 7 (left) and Fig. 8 (right)

Stage 3 is where the cancer is still mainly localised to the breast, but it may have spread to the skin over the breast or the muscle cells beneath it, and to multiple lymph nodes in the vicinity. The cancer is larger too, now greater than 50mm. Mastectomies are the most common surgery in stage 2 and 3 breast cancer due to the increased size of the tumour, rendering lumpectomies insufficient to combat the malignancy.

Fig. 4 (top) and Fig. 5 (bottom)


Stage 4 is where the cancer has metastasised and spread beyond the breast and lymph nodes. Breast cancer most commonly spreads to the bones, liver and lungs. As the cancer progresses, it may spread to the brain, but it can affect any organ. Surgery plays a less significant role here because adjuvant therapy is more effective at this stage, so surgeons operate to treat specific areas affected by the metastatic cancer.

Fig. 9
The detailed information above is very important because it provides the scientific basis for screening. The implementation of screening reflects the
importance of preventing the cancer from progressing too
much before treatment because once the tumour is very
large (stage 3 onwards) and especially if it has
metastasised, it becomes particularly difficult to treat. This
is conveyed by the increasingly radical natures of the
surgeries above and the higher doses of the therapies
described below. Metastatic cancers will also cause a range
of new problems depending on where the tumour
fragments re-establish. Importantly, cancers in stage 1 - when tumours can be treated best - are highly unlikely to be detected by any means other than screening. This is because symptoms normally arise once the tumour has exceeded 20mm (stage 2), when tumours become more difficult to treat and have most likely spread (albeit to a very small extent) to the lymph nodes. This is because the milk ducts and lobules, where breast cancer most frequently originates, are located relatively far beneath the skin. Breast tissue, particularly in younger women, can itself be quite dense, which can make breast examinations less effective in detecting tumours (abnormally dense growths). Here, the sometimes high density of breast tissue supports ultrasound screening, but this property can work against mammographic screening because it makes interpretation of the mammogram more difficult (as there is a less distinct difference in the strength of the shadow cast by healthy breast tissue and by potentially cancerous tissue).
4.3 - Breast cancer treatment
Adjuvant therapy is treatment given to patients following
the primary treatment (surgery). It consists of radiation
therapy, chemotherapy, immunotherapy and hormone
therapy. The dosage and length of adjuvant treatments
vary according to the stage of the cancer.
4.3.1 - Radiotherapy193
Radiation is also administered because it significantly
reduces the probability of the breast cancer returning in
the future. Radiation therapy provides photons or charged
particles which directly damage the DNA present in
cancerous cells, preventing them from successfully
dividing. Radiation may also act indirectly, ionising the aqueous solution surrounding malignant cells to produce highly reactive free radicals which in turn damage DNA.
Although radiation is beneficial overall, there are some
notable side effects. Healthy cells are exposed to radiation
too and this can lead to fibrosis (reduced cell elasticity), an
increased risk of heart disease, swelling of tissue resulting
in oedema, damage to epithelial cells, lymphoedema
(localised fluid retention due to lymph damage when
axillary lymph nodes are being treated) and possibly cancer
(if DNA damage occurs to oncogenes or tumour suppressor
genes in healthy cells). The possibility of induced cancer is
193 Radiotherapy by NHS Choices [Online] found at http://www.nhs.uk/Conditions/Radiotherapy/Pages/Introduction.aspx. Last reviewed 13/05/2013.
very small, however, and the damage is usually outweighed
by the benefits.
As mentioned, most radiation today is delivered by X-rays (photons) or by electrons. Another approach is to use protons.
Proton beam therapy has the advantage that the proton
gives up its energy only when it hits its intended target - in
this case the tumour. It does not continue through the
tumour and damage normal cells on the far side. So it
allows for the delivery of very high doses of radiation to the
tumour with minimal side effects. And since essentially
any cancer can be eradicated with high enough levels of
radiation, it follows that proton beam therapy might prove very
useful because one can give a much higher dose without
fear of adjacent normal tissue damage.
4.3.2 - Chemotherapy194
Chemotherapy is the use of cytotoxic (toxic to cells) agents
to eradicate the breast cancer or palliate symptoms. These
agents work by damaging the DNA in rapidly dividing cells
(including cancerous cells), which in turn impairs the
ability of the cell to divide. It is also believed that these
agents induce apoptosis in cancerous cells. One such
chemotherapeutic agent is an alkylating agent, whose molecules covalently bond with DNA (via their alkyl
group) in such a way that the helical structure of DNA
breaks upon replication. Small doses of chemotherapy are
often given before surgery, to shrink the size of the breast
tumour. Adjuvant chemotherapy is often given following
surgery or radiation when there is little evidence of cancer
in the body but a risk of relapse because it enables small
amounts of metastasised cells to be destroyed.
Chemotherapy cannot be considered a single treatment
because it almost always involves the use of multiple
cytotoxic agents. This is referred to as combination chemotherapy; not only does it allow drugs acting by different mechanisms to attack the cancer, but it also
reduces the risk of the tumour becoming resistant to a
particular drug. The drugs can also be administered in
lower doses, reducing levels of toxicity to cells.
Chemotherapy, like radiotherapy, has some significant side
effects. As the rapidly dividing cells are targeted, blood,
intestinal and stomach cells are particularly affected. The
effect on bone marrow and the blood cells which are
produced as a result leads to immunosuppression - where
the immune system is less effective at combating disease
because fewer white blood cells are produced. Diminished
red blood cell production can result in anaemia and fatigue,
and there are risks of organ damage (particularly heart
and liver damage). As effective as chemotherapy is in
addressing the tumour, it is far from perfect. When
tumours grow, they grow further away from existing blood
vessels and conditions become hypoxic. The cancerous cells
signal for new blood vessels to form but these new vessels
194 Chemotherapy by NHS Choices [Online] found at http://www.nhs.uk/Conditions/Chemotherapy/Pages/Definition.aspx. Last reviewed 19/03/2013.
are often less well integrated than existing ones. Hence
delivery of the cytotoxic drugs to these areas of the tumour
is less efficient. Furthermore, tumours are also known to
become resistant to particular cytotoxic agents but overall
their benefits in cancer patients tend to outweigh any
harms they cause.
An example of how chemotherapy has improved in recent years can be found in a report published in the Proceedings of the National Academy of Sciences, which revealed the development of senexin A, the first of a series of chemicals that block the secretory patterns of cells damaged by chemotherapy - the key to lowering the cancer-promoting impact of chemotherapy.
4.3.3 - Immunotherapy195
Immunotherapy is where the immune system is stimulated
and utilised to fight the cancer. Some types of
immunotherapy are specific, an example of which is
monoclonal antibody production. Antibodies in the body are
important in recognising foreign antigens (proteins on the
surface of cells) and so the artificial synthesis of antibodies
allows them to be tailored to specific types of cancerous
cells. Another type of immunotherapy is the administering
of cancer vaccines. These train the patient's immune system to recognise cancerous cells so that they can be targeted and destroyed. The most common immunotherapy drug for breast cancer is Herceptin.
This drug binds with specific proteins on breast cancer
cells to inhibit their growth and is usually prescribed for
tumours which display an over-expression of a protein
called HER2, which can also signal more aggressive
cancers. Immunotherapy may cause side effects such as
fever, chills, pain, weakness, nausea, vomiting, diarrhoea,
headaches and rashes but these side effects generally
become less severe after the first treatment.
In May of this year, research published from Keio University in Japan revealed that cancer immunotherapy could be improved by combining it with molecular targeted therapy.196 Tumour cells release substances which suppress immune cells via the activation of various signalling molecules within the immune cells, and so the inhibition of such signalling molecules prevents immunosuppression in tumour-associated microenvironments, thereby improving the ability of the immune system to combat the cancer.
4.3.4 - Treatment in the context of screening
The analysis of each therapy above has purposely been organised into three parts, which convey the three reasons why each therapy is important to the debate. Firstly, screening itself is not a treatment and so it is meaningless without the therapies above, which makes it important to understand exactly how doctors attempt to cure breast cancer. Understanding how the therapies work enables one to learn how and which side effects arise, which is vital in the context of overdiagnosis and overtreatment, discussed later (see section 4.4.7). Secondly,
it makes it clear that all of the treatments essentially work
by destroying cancer cells (as opposed to converting them
to other types of cells which are less harmful) which
explains why a smaller tumour is easier to treat than a
larger tumour or a metastasised tumour. Quite simply,
there are fewer harmful cells to destroy and if they are
localised then only one location must be targeted. Hence,
the correlation between cancer progression and efficacy of
treatments explains why early detection would improve the
chances of successfully treating the breast tumour. The
third paragraph on each therapy gives examples of recent
developments and improvements in that particular
therapy. Besides this paragraph providing up to date
information on the science of such therapies, it conveys
how treatments have greatly progressed since the Forrest
Report of 1986. An awareness of the improving quality of
treatment must be acknowledged when evaluating the
efficacy of screening because this distorts our assessment
of improving breast cancer mortality rates. Although it can
be argued that early detection has improved mortality
rates, such rates would also have been improved by better
treatments, whether or not screening is operating. So the
real question is of what relative contribution screening and
better treatments have made to improving breast cancer
mortality rates. This is an extremely difficult question to
quantitatively or realistically answer because although it
can be argued that earlier detection increases the potency
of treatment, we do not know whether a cancer which was
detected symptomatically, at a later stage, would have still
been treated successfully or not. Thus the relative value of
screening vs better treatments in successfully treating
breast cancer is not entirely clear.
4.4 - The case against mammographic screening
4.4.1 - Understanding that cancers can be harmless197

195 What is immunotherapy by the American Cancer Society [Online] found at http://www.cancer.org/treatment/treatmentsandsideeffects/treatmenttypes/immunotherapy/immunotherapy-toc. Last Revised: 02/22/2013 and Last Medical Review: 05/09/2012.
196 Improvement of cancer immunotherapy by combining molecular targeted therapy by Yutaka Kawakami at Division of Cellular Signaling, Institute for Advanced Medical Research, Keio University School of Medicine, Tokyo, Japan. http://www.frontiersin.org/Journal/10.3389/fonc.2013.00136/abstract. Article published 28.05.2013.

Up to now, we have been discussing cancer as a malignant
disease which consists of abnormal cells with
uncontrollable growth spreading throughout the body until
they kill the patient. This, however, only refers to cancers
which are harmful, so it is not always applicable because
197 Why don't we get more cancer? A proposed role of the microenvironment in restraining cancer progression by Mina J Bissell and William C Hines, published in Nature 07.03.2013, found [Online] at http://www.nature.com/nm/journal/v17/n3/full/nm.2328.html.
some cells may meet the pathological definition of cancer
without actually causing harm to the patient. The
paradigm of cancer always being lethal has developed over
time, unsurprisingly, because historically we have tended
to only notice cancer when it has been lethal. However, we
know that cancer can develop without anybody (the doctor
or patient) knowing because it tends to cause obvious
changes in bodily function only when it has become so
advanced it is incurable. In other words, before effective
imaging technology, it was difficult to recognise harmless
cancers in the body. The variability of cancer progression in
the context of screening is neatly explained in the diagram
below. It has long been known that cancers progress at
different rates. Some grow faster and are more aggressive
whilst others grow more slowly. This heterogeneity has an unfortunate implication: namely, screening tends disproportionately to miss the fast-growing cancers, because they are detectable for only a short period of time, yet these are the very cancers that screening most hopes to catch.

Fig. 10
The gradient of the arrows represents the rate at which the
cancer progresses. The fast growing cancers quickly lead
to symptoms and death but unfortunately they often
appear in the interval between routine screening tests.
Hence, these are known as interval cancers. The slow
growing cancers which only lead to symptoms and death
after many years, are the cancers for which screening has
arguably the greatest beneficial impact. The very slow
growing cancers are those which do not progress fast
enough to produce symptoms in the lifetime of the
individual and so the patient dies of some other cause. The
non-progressive cancers represent those cancers which
are halted by certain biological mechanisms that are not
fully understood. Current theories point to the possibility
that the cancer has outgrown its blood supply or is
contained by the immune system. The issue of
overdiagnosis occurs when very slow cancers and
sometimes non-progressive cancers are detected, resulting
in the illusion of disease or pseudodisease.
4.4.2 - The problems with mammographic screening

The problems with mammographic screening can be divided into three main groups. These are false negatives, false positives and overdiagnosis, and the importance of these problems increases in this order.
4.4.3 - False Negatives
False negatives are simply missed tumours - an
inevitability when an imperfect technology is employed in
diagnosis. Accurate data regarding the number of false
negatives is very difficult to obtain, simply because
mastectomies cannot be performed on every woman who
has had a mammogram to determine the false negative
rate accurately. Estimates of the false negative rate depend
on close follow-up of a large number of patients for many
years. Estimates for the proportion of missed cancers vary
from 10 to 30% but the most commonly quoted figure is
20%. One source of variation is the ages of women screened
because younger women typically have denser breast
tissue, resulting in smaller differences between the
densities of shadows of cancerous tissue and regular tissue.
4.4.4 - False Positives
The goal of the mammographic screening procedure is to
examine a large population of patients to find the small
number most likely to have a serious condition. These
patients are then referred for further, usually more
invasive, testing to confirm the diagnosis. Thus
mammographic screening exams are not intended to be definitive; rather, they are intended to have sufficient sensitivity to detect a useful proportion of cancers. The cost of higher sensitivity is a larger number of results that would be regarded as suspicious in patients without disease. Approximately 7% of the patients without disease are called back for further testing from a screening session; these are referred to as "false positives". There is a trade-off between the number of patients with disease found and the much larger number of patients without disease who must be re-screened.
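To see how even a sensitive test produces many false positives in a low-prevalence population, consider a rough back-of-the-envelope sketch in Python. The figures are illustrative assumptions rather than NHSBSP statistics: a sensitivity of 80% (consistent with the ~20% missed-cancer estimate in 4.4.3 above), a specificity of 93% (consistent with the ~7% recall figure above) and an assumed prevalence of about 1% among screened women.

    # Illustrative sketch only - all figures are assumptions, not NHSBSP data.
    sensitivity = 0.80   # ~20% of cancers missed (see section 4.4.3)
    specificity = 0.93   # ~7% of healthy women recalled (see above)
    prevalence = 0.01    # assume ~1% of screened women have cancer
    population = 10_000

    with_disease = population * prevalence                             # 100 women
    true_positives = with_disease * sensitivity                        # 80 recalled correctly
    false_positives = (population - with_disease) * (1 - specificity)  # 693 recalled needlessly

    ppv = true_positives / (true_positives + false_positives)
    print(f"Women recalled: {true_positives + false_positives:.0f}")   # 773
    print(f"Proportion of recalls with cancer: {ppv:.1%}")             # ~10.3%

On these assumed numbers, roughly nine out of ten recalled women do not have cancer - precisely the trade-off described above.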
4.4.5 - Overdiagnosis
Overdiagnosis is defined as the detection of cancers that
wouldn't have been identified clinically in someone's remaining lifetime because symptoms would never arise.
For cancers to be considered overdiagnosed, a patient who
has been diagnosed with breast cancer must have refused
treatment and they must go on to live without symptoms
ever arising before they eventually die of an unrelated
cause. This means cancers can only be called
overdiagnosed retrospectively. In order to understand what
significance overdiagnosis has to screening, we must
dissect its definition above and the reasoning behind it.
It is worth noting the distinction between the two concepts
of false positives and overdiagnosis. A false positive result
suggests the presence of disease, but is ultimately proved
to be in error (usually by further more precise testing
which may include ultrasounds or MRI scans). Patients
with false positive results are eventually told they do not
have breast cancer and so they are not treated whereas
overdiagnosed patients are told they have disease and they
do receive treatment.
4.4.6 - Uncertainties in interpreting biopsies
and mammograms
In common usage, the term "cancer" means being ill with cancer, but this cannot be the definition in a screening setting, where the pathological definition of cancer may be met by changes in cell growth which never end up causing any symptoms and are harmless in the patient's lifetime.
There is even data to suggest that some cancers spontaneously regress, a phenomenon most likely attributable to the immune system. As we cannot know which cancers go on to be harmful or harmless, the standard is to treat all of them, which means treating patients with radiotherapy, chemotherapy, immunotherapy etc., all of which have severe side effects and potential dangers, as highlighted in section 4.3. These treatments are necessarily harmful to the body because cancers are themselves a product of body cells, as opposed to being acquired from an external source. However, if the patient's so-called cancer never produces symptoms then these treatments provide no benefits whatsoever (it is these benefits which justify the severe side effects treatments cause in true patients). Hence overdiagnosis might also be called overtreatment because unnecessary and damaging therapies are being given to patients.
People mainly attribute the eventuality of overdiagnosis to
misinterpretations of mammograms and tissue biopsies.
Attributing overdiagnosis to poor histopathological examination, however, is a misconception, because the diagnoses are correct yet the cancers are inconsequential (see Fig. 10). What this really conveys is the highly subjective nature of detecting cancers from biopsies - a central limiting factor in the efficacy of screening, but one which could theoretically be largely eliminated if our understanding of the distinction between consequential and inconsequential cancers improves.
on the NHS Breast Screening Programme involving 200
pathologists and 17000 readings. It showed that when a
pathologist had decided a patient had invasive breast
cancer (cancer which has spread to nearby tissue or lymph
nodes), the consensus was that it was not breast cancer in
3.1% of the cases and that it was carcinoma in situ
(localised and not spread) in another 4.7%. Disagreements
among pathologists are particularly pronounced for
carcinoma in situ: in another study, which analysed 17 cases of epithelial proliferations of the breast, five expert pathologists respectively diagnosed 0, 1, 2, 3 and 9 of the cases as carcinoma in situ. The same argument applies to mammographic
screening where there is often even less consistency
between independent evaluations of mammograms,
suggesting the subjective and sometimes uncertain nature
of diagnosing breast cancer. Furthermore, overdiagnosis of
carcinoma in situ and invasive cancer can occur, contrary
to the belief that metastatic cancers must be harmful.
Hence, it seems clear that overdiagnosis is an inevitability
of the screening programme and this will be confirmed by
results from various trials and reviews in section 4.5.
4.4.7 - Overdiagnosis bias in survival statistics
There are two fundamental biases in breast cancer
epidemiology which have led to misleading results about
the efficacy of screening because overdiagnosis always
inflates survival statistics. The first is called the length
bias. This occurs because overdiagnosis inflates both the
numerator (number who have survived) and denominator
(number who are diagnosed with breast cancer) of the
survival statistic even when the actual number of deaths is
stable. The example below (see Fig. 11) refers to
overdiagnosis in lung cancer screening but exactly the
same principle applies to breast cancer screening.

Fig. 11 - Diagram to show the effect of overdiagnosis on survival statistics
In other words, as screening increases the number of
people diagnosed with pseudodisease, the number of breast
cancer survivors increases. The diagram above shows how
this can severely warp statistics in favour of screening.
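A toy calculation makes the mechanism concrete (hypothetical round numbers, not trial data): adding overdiagnosed cases inflates the survival percentage even though exactly the same number of women die.

    # Hypothetical illustration of how overdiagnosis inflates survival figures.
    deaths = 300        # deaths are identical in both scenarios
    diagnosed = 1000    # genuine cancers diagnosed without screening

    survival = (diagnosed - deaths) / diagnosed
    print(f"Survival without overdiagnosis: {survival:.0%}")            # 70%

    # With screening: the same 1000 genuine cancers plus 500 overdiagnosed
    # (pseudodisease) cases, while the number of deaths is unchanged.
    diagnosed_screened = diagnosed + 500
    survival_screened = (diagnosed_screened - deaths) / diagnosed_screened
    print(f"Survival with overdiagnosis:    {survival_screened:.0%}")   # 80%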
The second bias is called the lead-time bias. The purpose of
screening is to detect cancers earlier than without
screening; therefore even if there were no overdiagnosed
cases (no length bias), comparisons of regions with screening against regions without are skewed in favour
of screening if we are to use the number of years the
patient has survived from the date of diagnosis as the
outcome. The simple fact that patients have been
diagnosed at an earlier point (through screening), say at
age 69, as opposed to at age 71 via the detection of
symptoms, means that it will appear as though screening
has granted the patient an extra two precious years of life
even if earlier detection had no effect on the treatment of
the cancer. This is why mortality rates rather than
survival rates are the more reliable measure in assessing the effectiveness of breast cancer screening.
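The same point can be made numerically; the death age below is a hypothetical addition to the example above.

    # Hypothetical patient: dies of breast cancer at 74 regardless of
    # when the tumour is found.
    age_at_death = 74
    diagnosed_by_screening = 69   # tumour found two years earlier by mammography
    diagnosed_by_symptoms = 71    # tumour found when symptoms appear

    print(f"Survival with screening: {age_at_death - diagnosed_by_screening} years")  # 5 years
    print(f"Survival with symptoms:  {age_at_death - diagnosed_by_symptoms} years")   # 3 years
    # Screening appears to add two years of survival, yet the patient dies
    # at exactly the same age - no life has actually been extended.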
More specifically, all-cause mortality rates should be
looked at instead of breast cancer mortality rates. If
screening saves or extends lives then we would not only
expect the number of people who died from breast cancer to
be lower, but the total number of deaths from any cause should be lower too. Breast cancer mortality rates are the
number of people in a population who have died from
breast cancer but this fails to factor in harms caused by
overdiagnosis and overtreatment because it rests on the
assumption that causes of death can be determined
accurately (and that these deaths were from breast cancer).
All-cause mortality rates - the number of people in a
population who have died - are not affected by bias in
classifying the cause of death.198 This bias was revealed in
statistical analyses, when disease-specific mortality rates
were found to be lower in the screened group than in the
control group, whereas all-cause mortality was the same or
higher. Another bias which is not as fundamental as the length and lead-time biases, but is important nevertheless, is the so-called selection bias. This is where those who actually attend their routine screens tend to be healthier than those who are invited to screening but do not attend. The extent of this bias is not fully understood, but it is most likely far less influential than the length and lead-time biases. These results suggest the statistical impact
and presence of overdiagnosis in mortality rates.
4.5 - Evidence for Overdiagnosis vs Reviews
concluding screening should continue
Now that the science behind early detection and the issue
of overdiagnosis has been discussed, I can analyse the most
recent and significant pieces of literature debating whether
breast screening should continue. The fact that the
national screening programme still stands today suggests
the majority of experts believe it is significantly beneficial
to women between the ages of 50 and 70, who are invited to
screening every three years. However a significant
minority have raised serious criticisms of screening and
some call for it to be stopped completely. Such contrasting
opinions are largely driven by conflicting bodies of
evidence.
4.5.1 - Criticisms of the Forrest Report
The Forrest Report concluded that screening should begin
in the UK, but Peter Gøtzsche writes about some of his notable reservations about the evidence used. Although the
report acknowledged the possibility of unnecessary
treatment it referred to this only in the context of
carcinoma in situ and not invasive cancer. The report also
dismissed the possibility of overdiagnosis in the New York
trial, upon which, along with the Two County trial in Sweden, the decision to begin screening in the UK was based.
This is because the number of cancers detected in the
screening group was the same as the number of cancers
detected in the control group (where no screening
198 All-Cause Mortality in Randomized Trials of Cancer Screening by William C. Black, David A. Haggstrom and H. Gilbert Welch, published in the Journal of the National Cancer Institute, found [Online] at http://jnci.oxfordjournals.org/content/94/3/167.full.
occurred), but this does not necessarily mean no
overdiagnosis occurred. The Forrest report also uncritically
accepted evidence from the Two County trial which claimed
that 20% more cancers were detected in the screened group
compared to the control group. Gøtzsche highlights, firstly, that this figure failed to include carcinomas in situ and
that the only way to truly know if these excess cancers
were really harmful would be to follow up the randomised
women during their lifespan to see if the excess cancers
persisted. However this would have been meaningless
because the control group was invited to screening. Most
important, however, is the lack of consideration of all-cause mortality in the New York and Two County trials and
hence in the Forrest Report. So although claims that breast
cancer mortality rates were significantly less in screened
populations may have been true, the impact of
overdiagnosis has been largely ignored. I feel that the
absence of all-cause mortality rates undermines the
evidence in favour of screening which underpins the Forrest Report, which in turn calls into question whether the decision to begin screening was justified. This supports the idea that perhaps screening shouldn't have begun in the UK, or should now no longer be implemented.
4.5.2 - Nordic Cochrane Review(s)
The Nordic Cochrane Centre is well-respected for
conducting independent reviews of healthcare policies and
in 2001 a review of breast screening was carried out. They
concluded that screening reduces breast cancer mortality
by 15% and that overdiagnosis and overtreatment run at 30%, meaning that for every 2000 women invited for screening throughout 10 years, one will avoid dying of breast cancer and 10 healthy women, who would not have been diagnosed if there had not been screening, will be treated unnecessarily. Furthermore, more than 200 women will experience significant psychological distress, including anxiety and uncertainty for years, because of false positive findings. This corresponds to an absolute reduction in breast cancer mortality of only 0.05%. They stated that due to improvements in treatments and increased awareness of breast cancer (people are now quicker than before to see their GP if symptoms arise), screening has a relatively smaller impact on mortality rates. The review
stated that when all-cause mortality was used, they could
see no impact of screening present. This result has been
corroborated by findings in the Journal of the Royal Society
of Medicine published in June 2013. The authors wrote:
"We permuted the data in a number of different ways, over an observation period of 39 years, but the data show that, at least as yet, there is no evidence of an effect of mammographic screening on population-level breast cancer mortality." The research conducted at Oxford University
analysed mortality trends across England before and after
the introduction of the NHS breast screening programme
in 1988 and concluded that "to date, population-based mortality statistics for England do not show a past benefit of breast cancer screening". This does not mean that
benefits are not seen at the level of the individual because
there will undoubtedly be numerous cases where screening
has detected cancer earlier and consequently made
treatment more effective but effects are simply not large
enough for them to be detected at the population level. The
Nordic Cochrane Centre review was the first major review
to really challenge the efficacy of screening and they have
since published further reports in 2006 and 2012
reinforcing the claims of the 2001 review. Thus it seems to me that there is a significant body of reliable evidence from reputable sources who have objectively studied the impact of screening, found it difficult to attribute improvements in breast cancer mortality rates primarily to screening, and concluded that when overdiagnosis is factored in, there seems to be no net positive effect of screening. This
supports the notion that screening is far less effective than
it was envisioned to be - a hard truth which I feel has yet to
be accepted by everyone in the medical field.
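Restating the 2001 review's headline figures, exactly as quoted above, in absolute terms shows why sceptics regard the benefit as small (a sketch of the arithmetic, nothing more):

    # The Nordic Cochrane figures quoted above, in absolute terms.
    invited = 2000        # women invited to screening for 10 years
    deaths_avoided = 1    # one avoids dying of breast cancer
    overdiagnosed = 10    # healthy women treated unnecessarily
    distressed = 200      # women suffering false-positive distress

    print(f"Absolute risk reduction: {deaths_avoided / invited:.2%}")              # 0.05%
    print(f"Overdiagnosed per death avoided: {overdiagnosed // deaths_avoided}")   # 10
    print(f"Distressed per death avoided: {distressed // deaths_avoided}")         # 200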
4.5.3 - The Marmot Report199
In October of 2012, an independent panel of experts, led by
Professor Sir Michael Marmot (UCL Epidemiology and
Public Health) concluded that routine breast cancer
screening should continue because it reduces risk of death
significantly enough to outweigh the harms of
overdiagnosis. The meta-analysis of eleven randomised
controlled trials assessing whether breast cancer results in
fewer deaths due to the disease, compared to when no
screening takes place were published in The Lancet.
Overall they estimated women who are invited to breast
cancer screening have a relative risk of dying from breast
cancer that is 20% less than those who are not invited to
screening. However, the panel acknowledged that there
were several limitations to the review - not least that most
of the studies took place more than twenty years ago - but nevertheless they felt the 20% relative risk reduction was
still valid. The panel estimated that for 10,000 women
invited to screening from age 50 for 20 years, about 681
cancers will be found, of which 129 will represent
overdiagnosis and 43 deaths from breast cancer will be
prevented. This equates to 4000 cases of overdiagnosis for
the 1300 breast cancer deaths prevented annually.
Given the uncertainties though, the panel stated that the
figures give a "spurious impression of accuracy" and further
research is required to more accurately assess the benefits
and harms of breast cancer screening. Most importantly,
Marmot concluded that clear communication of these
harms and benefits to women is essential, and at the core of
how a modern health system should function. The Marmot
Report is the most recent and thorough review to date,
which concludes that breast cancer screening should

199 The Marmot Breast Cancer Screening Review by M G Marmot (Chair, UCL Department of Epidemiology and Public Health, UCL, London), D G Altman (Centre for Statistics in Medicine, University of Oxford), D A Cameron (Edinburgh Cancer Research Centre, University of Edinburgh), J A Dewar (Department of Surgery and Oncology, Ninewells Medical School, Dundee), S G Thompson (Department of Public Health and Primary Care, University of Cambridge), Maggie Wilcox (lay member). Published in The Lancet on 30th October 2012, found [Online] at http://press.thelancet.com/breastcancerscreeningreview.pdf

ISSUE 2, SEPTEMBER 2014


continue. It is the reason why the national screening
programme continues to exist.
4.5.4 - Criticisms of the Marmot Report [200]
However, there are some important criticisms of the Marmot Report to be aware of. These have been made by a number of critics, but perhaps most notably by Michael Baum, professor emeritus of surgery in the Division of Surgery and Interventional Science at UCL. He is widely accepted to be an expert in the field of breast cancer and has pioneered adjuvant therapies which have dramatically reduced breast cancer mortality rates. Baum's first criticism is that the report made no effort to account for reduced quality of life from overdiagnosis. Here, Baum claims the hazard ratio for mastectomy of 1.2 favours the unscreened population. Baum's second criticism is that the Marmot Report avoided truly answering whether screening extends lives, because it resorted to measuring disease-specific mortality instead of all-cause mortality, which is a huge cause for concern (see 4.4.7). Baum also says that as systemic therapy improves, the window for the impact of screening narrows substantially, and that as overdiagnosis rates increase, the importance of the relatively rare lethal toxicities of treatment increases (because there are no benefits to treating a healthy patient, only harms). In other words, the Marmot Report has failed to factor in improvements in systemic adjuvant therapies, thereby inflating the beneficial impact of screening. Hence, although the report says 180 people would have to be screened to save one breast cancer death, Baum claims the figure is closer to 2500. Following the Marmot Report, a comprehensive review of overdiagnosis was carried out by Bleyer and Welch, who concluded that approximately 50% of all breast cancers detected by screening are overdiagnosed in the USA, a figure which corroborates the findings of the Nordic Cochrane Review [16]. Crucially, Baum is sceptical of the reliability and utility of the trials over twenty years old employed in the Marmot Report, and says this makes it extremely difficult to use them to estimate the benefits and harms of screening. Baum then offers some crude estimates of his own, namely that 80% of overdiagnosed cases would result in radiotherapy and, consequently, a 1.33% increased risk of dying of myocardial infarction and a 2% increased risk of lung cancer. Michael Baum's criticisms of the Marmot Report highlight that arguments in favour of screening have still failed to account for overdiagnosis, quality of life and improved treatments, yet the screening programme continues to run. I feel this suggests that the proponents of screening have yet to combat the main criticisms of the screening programme, which raises serious questions about how much conclusive evidence actually supports the argument for screening. This is why I feel the most referenced pieces of evidence in favour of screening are insufficient to truly justify the screening programme.

[200] Harms from breast cancer screening outweigh benefits if death caused by treatment is included by Michael Baum, published in the British Medical Journal on 23.01.13, found [Online] at http://www.bmj.com/content/346/bmj.f385
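To gauge the scale these crude estimates imply, one can apply Baum's percentages to the panel's own national figure of roughly 4000 overdiagnosed cases a year (this is my own illustrative arithmetic, not a calculation taken from Baum's paper):

\[ 4000 \times 0.80 = 3200 \ \text{women irradiated}, \]
\[ 3200 \times 0.0133 \approx 43 \ \text{additional fatal myocardial infarctions}, \]
\[ 3200 \times 0.02 = 64 \ \text{additional lung cancers}. \]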
4.5.5 - Summary of Evidence
All in all, despite the benefits of screening identified by the Forrest Report and the Department of Health's decision, informed by the Marmot Report, to continue the programme, there have been some significant rebuttals to these arguments too. How the arguments and literature have shaped my opinion is something I will address next, in my conclusion and evaluation.

Conclusion
Having spent a lot of time researching the highly
controversial issue of breast cancer screening, I feel I have
now earned the right to express how this literature has
influenced my own opinion, in a much more direct fashion
than I have done up to now.
Cancer screening is something I knew very little about before I embarked upon the reading of material for my research review. I was aware of the key aim of early detection, and I quickly learnt that this meant screening asymptomatic people. Having relatively more knowledge about cancer progression, and knowing that generally the larger the tumour the more difficult a cancer is to treat, I found the benefits of early detection simple yet powerful. About this, my opinion is unchanged. It was upon studying the practicalities of screening - how it occurs in real life - that the issue became so much more complicated (see section 4.4). This is because doctors are using imperfect technologies to detect cancers which are not necessarily harmful, yet they must prescribe necessarily harmful treatments quickly if they are to save the patient, or rather those individuals they believe to be patients. On the surface, the principle of screening is a wonderfully convenient answer to the problem of cancer, but healthcare operates not on the theoretical but the practical.
Lurking beneath the surface of much of the literature I read - on both sides of the argument - was an acute awareness of the political implications of screening, or of the lack of it. Attacks sometimes became personal, and so I made a concerted effort to avoid referring to these in the dissertation, because the credibility of such points is usually particularly poor. Experts such as Gøtzsche often criticise the ideological drive of screening advocates, an issue which I purposely avoided in the dissertation. Nevertheless, it would be unwise not to be receptive to where this seemingly irrational drive might originate from.
Screening adopts a "better safe than sorry" attitude, which is understandable in some ways. Our fear of cancer as a catastrophic and often lethal disease is justifiable, a feeling which has made its way into policies, perhaps best exemplified by Halsted's radical mastectomies of the 19th and 20th centuries. A case could be made for saying that the same eagerness to purify our bodies of cancers is present in the screening programme today.
Hence, we begin to understand why such faith in early
detection of tumours to allow treatment to start
immediately is maintained. Many women, if given the
choice, would prefer to undergo harmful surgeries and
therapies to try and ensure a harmful cancer never occurs,
instead of waiting to see if the harmful cancer arrives
before undergoing the harmful treatments. This distinction
is fundamental and it must be remembered how highly
emotional such issues are.
However, doctors and scientists must make the most logical and rational decisions. When policies are being formulated, although a receptiveness to how people's emotions might be affected is important, it is evidence which must ultimately shape healthcare policy. Most surprising to me was that the evidence for and against screening is essentially statistical, as opposed to statistics simply adding clarity to the evidence. If a policy is to be implemented, then its efficacy must be determined. Here, efficacy itself is determined by weighing up the benefits of screening against the harms caused by it. Thus evidence of benefits and harms must be evaluated not independently but in conjunction with and in relation to each other. Unfortunately, it is in this most crucial area that screening did not live up to my expectations. Sections 4.4 and 4.5 highlight the multitude of issues with screening, the two most important of which are overdiagnosis and all-cause mortality. The lack of randomised, large-scale trials with significant differences in all-cause mortality between screened and unscreened groups is most problematic, and largely the basis of why there is very little evidence of screening improving mortality rates at the level of the population - which is what you would expect to see upon implementation of a public health programme. Yet in the same trials, breast cancer mortality rates were significantly better in screened populations than in unscreened populations. This suggests overdiagnosis has a significant statistical impact, and demonstrates that when evidence for benefits and evidence for harms are evaluated in conjunction - as all-cause mortality does - a different result is achieved from considering the evidence for benefits and harms independently.
In the argument against screening, I have been highly aware and critical of the lack of conclusive evidence justifying the continuation of the screening programme. Those defending screening have often used very old trials which don't account for improvements in treatments; they have sometimes allowed statistical biases (length, lead-time and selection) to skew results, whilst other trials have been improperly randomised; they have sometimes failed to account for pseudodisease by using survival statistics which inflate survival rates; and, most disappointingly, they have made little attempt to counter the arguments made by critics of the screening programme. This is not to say that screening has been a complete waste of time or that it must be stopped immediately. In fact, I feel that to suggest this would be to overcompensate beyond the point at which the evidence against screening correctly stands.
If I am to be highly critical of trials concluding screening should continue, then I must be equally rigorous in evaluating those suggesting screening should stop. The problem is that there are very few large-scale randomised trials which reliably demonstrate the harms of overdiagnosis: firstly because overdiagnosis affects quality of life, which is extremely difficult to quantify, and secondly because a case can only be confirmed as overdiagnosed once the patient has died, and died of something other than breast cancer. It therefore requires long follow-ups, which have rarely been done. This is partly because the screening programme is only 25 years old. If overdiagnosis didn't occur, then the initial increase in the total number of cancers in the screened age groups (50-70) would be fully balanced by a similar decrease in the number of cancers as the group aged and was no longer offered screening (after they reached 70), because these cancers would already have been detected and so would no longer appear once the group had passed beyond the screened age range. Gøtzsche has conducted systematic reviews of overdiagnosis in observational studies based on expectations such as the one above, but these are simply not as reliable and telling as randomised trials.
Ironically, I feel the one thing that I can be sure about is that it would be unwise to simply say yes or no to the question of whether screening should continue in the UK. The answer to this question is far more complicated than I imagined, and requires a lot more data to be accumulated, particularly to establish the true extent of overdiagnosis. Reviews conducted should use all-cause mortality as opposed to survival rates or breast cancer mortality rates, as this measure is most free from statistical bias and misclassification of the cause of death, and it factors in overdiagnosis. Had the Forrest Report been fully aware of such issues, its authors may not have recommended the implementation of screening, but this is difficult to know for sure. I feel that the reasoning which applies to whether a policy should or should not be implemented must be the same as that which is applied when deciding whether a policy should or should not continue. Investment in the breast screening programme should last as long as the evidence suggests that the benefits outweigh the harms and financial costs of the programme. It seems to me unwise to come hastily to any conclusions until a greater and more complete body of evidence, upon which a better decision can be made, is compiled. There are some exciting possibilities which might refine screening to be more effective. The move toward increasingly personalised care, an example of which is the sequencing of our genomes, points toward screening those who are most susceptible to breast cancer. Perhaps gene therapy provides a solution, replacing harmful mutations in tumour suppressor genes and oncogenes with normal DNA base sequences and thereby preventing the cell's shift from a healthy to a cancerous state. Most pertinent,
however, is the challenge of determining the extent of overdiagnosis, and this must be done through randomised trials on a large scale, verified by independent reviews. Only then will we be better placed to decide whether screening for breast cancer should continue in the UK. What does this mean for now? Well, here I agree wholeheartedly with the Marmot Report that it is most essential that the benefits and harms of screening are clearly communicated to women invited to screening, so they can make an informed yet independent choice.

Skanda Rajasundaram (Year 13)


_______________________________________________________

What was the impact on classical scholarship of Michael Ventris' decipherment of Linear B?

Abstract
Classical scholarship concerning the Mycenaeans dates
back more than a century, to the first serious
archaeological digs, undertaken by Heinrich Schliemann,
famous in particular for his excavations at Troy.
Schliemann shed the first serious light on the Mycenaean
civilisation, but the Linear B script was only deciphered,
revealing the language of the period, in 1952, by Michael
Ventris.
This abridged study looks specifically at the effect on
Classical Scholarship of the decipherment of the script and
its tablets. To do this, it explores the process of the
decipherment; as its main focus, explores information
gained following the decipherment, particularly in terms of
the geography, people, their administration and social life,
warfare and the end of the civilisation; compares this with
the information it had previously been hoped would be
revealed; and additionally considers the criticism of the
decipherment itself.



In spite of minority views concerning the decipherment's veracity, and multiple possible interpretations of tablets on account of the script's formation, the study concludes that Ventris' work had a great effect on the relevant areas of scholarship, and that this could have been greater but for the paucity of material and the sheer genius involved in the decipherment itself.

Introduction
"Did you say the tablets haven't been deciphered, Sir?" [201]
In 1900, Sir Arthur Evans began excavations at Knossos which were to reveal so much archaeologically about the Minoans, and revealed for the first time evidence of multiple scripts in use during the Aegean Bronze Age. Of the three main scripts, by far the most extant was the latest one, dated to between 1600 and 1100 BC [202], which Evans named Linear B. But it would be a little over fifty years before the script was finally deciphered, revealing that Evans' theory of continuing Minoan supremacy in the Linear B period could not be true, the script in fact writing Greek. This astonishing decipherment - one of only a handful to take place without the presence of a polylingual inscription [203] - was all the more interesting for the fact that it was not a Classicist, but rather an architect by profession, Michael Ventris, who carried out the decipherment.
Mycenaean civilisation had only really been studied since the excavations of Schliemann in the 1880s, and, until the decipherment, its study was based purely on archaeological discoveries. By deciphering the script, Ventris opened a
new door in studies of the civilisation. This project sought
to investigate precisely how great the effect of the
decipherment was on the relevant branches of Classical
scholarship. It has been argued by a few that the
decipherment is completely invalid, and, as such, has no
value and should have no effect whatsoever. Certainly it is
true that the nature of the script, combined with the lack of
a polylingual inscription, has prevented an irrefutable
decipherment, but many, including Ventris, would argue that the case for the script writing Greek is so strong as to be very nearly a proof.
As the greatest part of the effect of the decipherment, we must consider the known areas of Mycenaean life, and how much of this information has been gained solely from the deciphering of the Linear B tablets. Certainly, quite apart from anything else, the extent and complexity of the Mycenaean administration system revealed by the tablets' decipherment is worthy of Sir Humphrey Appleby [204], with over 800 tablets at Knossos detailing seemingly every flock of sheep on Crete at the time of writing, the total number of sheep running to near 100,000 [205]. The tablets' evidence of a complex social hierarchy has been fascinating; the decipherment has also had a limited effect on Homeric studies, showing in places anachronisms used by Homer - not that these are hugely surprising given the time elapsed between the events of the Iliad and its writing.

[201] Michael Ventris as a schoolboy aged 14, to Sir Arthur Evans at his exhibition on the Minoans, having been shown the Linear B tablets.
[202] Known as the LH (Late Helladic) period.
[203] Such as the Rosetta Stone, used by Champollion to decipher Egyptian Hieroglyphic.
[204] Permanent Secretary in Yes Minister (London: BBC, 1980-84).
Certain key terms must be defined in order to ensure the clarity of the main body of the text. These split into two categories, the first of which is linguistic. Linear B is essentially a syllabic script, which is to say that it largely comprises signs denoting syllables [206]. This is the opposite of alphabetic scripts, such as our Roman alphabet, which comprise letters instead of syllabic characters [207]. Linear B therefore has around 89 [208] phonograms - symbols, each representing a spoken sound [209] (e.g. da). Of these 89, some symbols are pictographic - they are pictorial [210] in basis - such as the pictogram of a double-headed axe which represents the pure vowel a. As well as these 89 symbols, however, Linear B also has many logograms - symbols representing words [211]. Of these, many but not all are pictographic: one sign is thus clearly the symbol for an equid, while another is less obviously the symbol for olive. For the sake of brevity, this paper also presumes knowledge, on the reader's side, of the Classical Greek alphabet.
Interesting though the many Greek dialects through the
ages are, for the purposes of this project, three types of
Greek will be referred to. Mycenaean Greek we define as
the Greek written by Linear B, and presumably spoken by
its writers. Homeric Greek is the Greek dialect found
within the Iliad and Odyssey, technically an old Ionic
dialect. Classical Greek will only be used for occasional
comparison to show the link between Linear B and Greek,
and what this project will label as such is more correctly
known as Attic Greek, the Greek used by the Athenians
and found in the plays of Euripides and Sophocles, the
speeches of Pericles and much more.
The second category of definitions is historical. In terms of
classifying time periods, the Aegean Bronze Age is split
into three sections: Early, Middle and Late Bronze. These
are then split into E/M/L Helladic for Mainland Greece
and E/M/L Minoan, for Crete. Thus shortenings such as LH
(Late Helladic) will be used throughout the project. For the
late periods, a further division into LH/M I, II & III is used, where LH/M I is earlier than LH/M II and so on. It should be recognised that these divisions are based on archaeological evidence - primarily on evolving pottery styles - and so cannot be dated exactly.

[205] J. Chadwick, The Mycenaean World (Cambridge: Cambridge University Press, 1976), p127.
[206] http://www.oed.com/view/Entry/196132?redirectedFrom=syllabic& - last accessed 8/ii/14.
[207] http://www.oed.com/view/Entry/5698?redirectedFrom=alphabetic& - last accessed 8/ii/14.
[208] The number is still disputed on account of a few which are debatable; E. L. Bennett originally arrived at 89.
[209] http://www.oed.com/view/Entry/142653?redirectedFrom=phonogram& - last accessed 8/ii/14.
[210] http://www.oed.com/view/Entry/143491#eid30574045 - last accessed 8/ii/14.
[211] http://www.oed.com/view/Entry/109829?redirectedFrom=logogram& - last accessed 8/ii/14.
Precisely what defines something as Mycenaean or Minoan has been an area of great dispute, which this project will indeed touch upon briefly on account of Linear B's playing a part in it, but, in general, purely within the bounds of this project, Mycenaean shall be taken to mean the civilisation which existed in Mainland Greece throughout LH, and that on Crete which used Linear B (in LM), whereas Minoan will refer to the Cretan civilisation using Linear A which existed before the change of power at Knossos.
As a result of Ventris' decipherment of Linear B, we now know Greek to rival only Chinese in its length of time in use - some 33 centuries so far. This project considers the scholarship resulting from a discovery which brought Greek back beyond even the mythological Trojan wars, as classicists realised that "the Heroic Age of Greece [was] no longer illiterate" [212].

[212] L. R. Palmer, Mycenaeans and Minoans (London: Faber & Faber, 1961), p74.

Discussion
Introduction
So as to enable a logical progression to the conclusion, this
analysis considers three sections, in the following order:
the background to Ventris and his decipherment;
information/changes from the decipherment which
has/have affected classical scholarship; and problems
reducing the effect of the decipherment on classical
scholarship. This allows us to come to a balanced
conclusion at the finish.
It must be acknowledged that the ordering of the sub-sections in the section relating to items from the decipherment which have affected classical scholarship follows to a great extent that used in Chadwick's The Mycenaean World. This is, to a certain extent, inevitable - it is necessary to order the myriad facts revealed by Linear B into groups for them to be understandable, and certain of these groups must then be placed in a particular order for the explanation, and conclusions, to follow logically. A debt is most definitely owed to Chadwick for such organisation.
Background
Michael Ventris, an incredible linguist who spoke four
languages fluently from a young age, had been interested
in the Linear B decipherment ever since learning about the
undeciphered script, aged 15. By the age of 18, Ventris had
already abandoned one theory as to the decipherment and
seriously developed a second theory, that the language was
based upon Etruscan, to the point of writing, and having
L. R. Palmer Mycenaeans and Minoans (London: Faber & Faber, 1961)
p74
212

ISSUE 2, SEPTEMBER 2014


published, a paper213 on the subject. Nominally an
architect, Ventris fascination with the subject came to a
climax in 1951. With the impending publication of E. L.
Bennetts transcription of the Linear B tablets discovered
by Blegen at Pylos (the major horde found outside Crete),
Ventris chose to leave his job and to work solely on Linear
B. Within two years, he would have deciphered the script
which had been un-readable for a little over half a century.
Sir Arthur Evans, the excavator of Knossos and the first Linear B tablets, had carried out the first work on Linear B, establishing the numerical system's workings ( | = 1; - = 10; O = 100, etc.) and recognising that some of the pictograms in Linear B were logograms, as well as that some of these differed for male and female objects. Unfortunately, he then fell into the pictographic bear-trap of looking for pictographic elements in characters and then, having found them, deciding that they had to be logograms - which is unfortunately not true for some of the most clearly pictographic characters in Linear B. Additionally, Evans, excavating Knossos, had come to the conclusion that the Minoans dominated the Aegean world, and that the mainland Greeks were merely an offshoot of Minoan culture. This meant that when he noted similarities between Linear B and the deciphered Cypriot script, tested Cypriot character values on similar Linear B characters, and came out with po-lo (which bears a resemblance to pōlos, 'horse' or 'foal' in Classical Greek), he was - on account of finding very little presence of an s ending (common in the Cypriot script and Classical Greek) on the tablets - very ready to dismiss this out of hand, and to state that the tablets did not write Greek, this fitting well with the Minoan domination theory.
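A brief illustration of this additive system (my own example, using the sign values just given): a number such as 137 would be written with one hundred-sign, three ten-signs and seven unit-strokes, i.e.

137 = O - - - |||||||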
Blegen excavated the palace of Pylos - supposedly that of the reminiscing King Nestor of the Iliad - from the 1930s onwards, and came across vast hoards of Linear B tablets, which were then investigated by the US scholars E. L. Bennett and A. Kober. Bennett's work analysing the script led to the conclusion that Linear B contained 89 phonograms and a host of logograms, some of both of which were pictographic. This sign list was key to Ventris' work. Bennett also proved that Linear A and B used different numerical systems, meaning that they were unlikely to write the same language. Kober's most important scholarship looked at endings. She established that the word for 'total' had a masculine and a feminine form, dependent on the final vowel - important, as this is a characteristic known solely to the family of Indo-European languages, of which Etruscan, Ventris' then-proposal, was not a member. More importantly, she identified inflections in groups of three in sign-groups which seemed likely to be nouns, possibly proper nouns. This indicated a language which declined, and additionally laid the way for Ventris to carry out work on values for the phonograms.

[213] M. Ventris, Introducing the Minoan language (Boston: American Journal of Archaeology, 1940) 44.



Ventris took a leading role following Bennett's publication of the Pylos tablets' transcriptions. For each set of ideas and steps taken, the details were written up in a Work Note, and a copy was mailed round to all relevant scholars for comment and discussion. It's important to note that we can't simply read Work Notes 1-20 and see how Ventris deciphered Linear B - some of the steps were just genius. This was, to a certain extent, problematic later on. Ventris then started to create a syllabic grid (where in any column all the phonograms end with the same vowel, and in any row all begin with the same consonant), key to solving the puzzle. Three techniques were then employed by Ventris in approaching Linear B. Statistical analysis was carried out, but this time with the aim of identifying particular letters/syllables [214]. He also considered words with scribal variation [215]. Such words must have a shared vowel but a different consonant, sometimes due to spelling mistakes on the part of the scribe, which again helped to add characters to the syllabic grid. The tablets on which Linear B is written are also such that it is possible to see if a scribe wrote one character but then rubbed it out and wrote another over it. When the original character can be identified, and if such a correction occurs frequently, the two phonograms are likely to have similar sounds [216]. But perhaps most importantly, Ventris looked at inflections. He investigated Kober's proposal that the triplets were in three different cases, but concluded that this might not be true for all the triplets - which turned out to be correct.
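The flavour of this statistical work can be sketched briefly. The following is a minimal illustration - not Ventris' actual procedure, and using an invented toy corpus of transliterated sign-groups - of tallying how often each sign occurs overall, word-initially and word-finally, the kind of positional count that helped suggest, for example, which signs might be pure vowels:

from collections import Counter

# Each inner list is one sign-group (word); the sign codes are invented.
corpus = [
    ["s01", "s07", "s23"],
    ["s07", "s23", "s14", "s02"],
    ["s01", "s14"],
]

overall, initial, final = Counter(), Counter(), Counter()
for word in corpus:
    overall.update(word)      # total occurrences of each sign
    initial[word[0]] += 1     # signs favoured word-initially
    final[word[-1]] += 1      # word-final habits hint at inflection

for sign, n in overall.most_common():
    print(sign, n, initial[sign], final[sign])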
The break came suddenly. Ventris and other scholars had agreed that certain tablets might well contain Cretan place names. It was time to do a little testing. By frequency, Ventris guessed that the double-headed axe sign was in fact a. He also compared Linear B characters with the Cypriot script - though not without reservations, given that, as Evans showed, it is easy to make false links - and proposed that two phonograms were the same as similar phonograms in Cypriot, and therefore indicated the syllables na and ti respectively. The syllabic grid system meant that a chain reaction of phonogram translation could then occur. A word now appeared on the partially deciphered tablets whose legible characters read a-?-ni-?. He proposed that this would be the Mycenaean spelling of the Cretan town Amnisos, spelt a-mi-ni-so. This provided two more phonogram translations, and so on. Reassuringly, multiple other Cretan place names then appeared on the same tablets, indicating that the decipherment had a sound basis. This was just the start.
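The "chain reaction" can likewise be sketched. In the toy model below (the grid layout and sign codes are invented for illustration), every sign in a row shares a consonant and every sign in a column shares a vowel, so two identifications immediately yield readings for their whole rows and columns:

# sign -> (row, column) position in a miniature syllabic grid
grid = {"s01": (0, 0), "s07": (0, 1),
        "s23": (1, 0), "s14": (1, 1)}
consonant = {}   # row index -> consonant
vowel = {}       # column index -> vowel

def identify(sign, syllable):
    """Record that `sign` reads `syllable` (consonant + final vowel)."""
    row, col = grid[sign]
    consonant[row] = syllable[:-1]   # empty string for a pure-vowel sign
    vowel[col] = syllable[-1]

identify("s01", "na")   # one guessed value...
identify("s14", "ti")   # ...and a second
for sign, (row, col) in grid.items():
    # s07 and s23 are now read "for free" as ni and ta
    print(sign, "=", consonant.get(row, "?") + vowel.get(col, "?"))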
[214] cf. Sherlock Holmes - in The Adventure of the Dancing Men, Holmes employs precisely this method to determine which figure translates as an e, and so on, and thus deciphers the code written on the wall. It is, however, considerably easier to do this with an alphabetic script than with a syllabic script.
[215] e.g. An equivalent in English would be recognise and recognize - without a knowledge of English, by finding the words clearly used in the same context, one can deduce that se and ze (at least in this word) have a similar sound.
[216] e.g. An equivalent in English would be a younger child's habit, when the complex spelling rules of English are not yet fully understood, of interchanging ph and f, on the basis that seraf and phuture sound the same as the correct seraph and future.



The details of the following stages, while fascinating, must be left for now. Worthy of note, however, is Ventris' 1952 talk on the BBC's Third Programme, which would become famous, and following which he met John Chadwick, who became his partner in the decipherment. Within a surprisingly short space of time, most accepted the decipherment (though acknowledging that it still had some way to go).
Ever since Schliemann's excavations at Troy and Mycenae had first brought the Aegean Bronze Age to light, scholars had tried to reconstruct the key points of the Mycenaean world and its life. To a certain extent this could be done with the archaeological evidence, but many other points of Mycenaean civilisation were unknown (as of 1952), including the extent to which the inhabitants of such a world were literate. It was hoped that the decipherment of the Linear B tablets would fill many such gaps in the knowledge of the era, and also open new areas of classical scholarship that had perhaps not even been considered previously. Many questions remain unanswered, and indeed many new ones have been posed as a result of the decipherment, but a vast amount has been discovered. The extent to which classical scholarship has been affected must be gauged from the analysis below.
New information/changes from the decipherment affecting classical scholarship
1. Geography
A discussion of the tablets' contribution to Aegean geography in the Late Bronze Age must come first, as the geographical points informed much research based on other information from the tablets. Mycenaean geography had been discussed for two millennia and more, and thus the location of the palace of Pylos was a work in itself. In the 19th and 20th centuries, by analysis of passages within both the Iliad and the Odyssey, scholars had come up with two locations. It was in fact the more southerly location, at Ano Englanos, which Blegen eventually excavated, revealing a palace of such magnitude as befits only a capital and administrative centre. While the location agreed with the Odyssey, it did not with the Iliad, and thus it remained, to a certain extent, a cause of argument. Ventris' breakthrough allowed the geography of the kingdom of Pylos to be reconstructed painstakingly by several methods. That such a hoard of records was present indicated that Blegen's Pylos must have been the Mycenaean capital, agree though it might not with Iliadic geography. Such an administrative centre was expected to be somewhere near the centre of the kingdom, for convenience. To go further than this, however, required analysis of the tablets. The place names listed were largely not similar to Homer or to those known in classical geography. Two frequently occurring lists appear, one of nine names, one of seven. These were accompanied by titles which revealed that Pylos was divided into two main provinces, the (Western) Hither Province using the list of nine, the (Eastern) Further Province that of seven. Said names were evidently districts (each named after the main town in the district).
Importantly, the nine always appeared in the same order. It was clear that they were not listed in order of magnitude, nor in alphabetical/syllabic order, allowing scholars to deduce that they were instead ordered geographically. Thus analysing the list revealed by the decipherment would allow a rough geography to be deduced - rough because geography is in two dimensions, whereas a list is only in one. The fourth district on the list was Pa-ki-ja-ne, and this was clearly very close to, perhaps including, the palace of Pylos itself - thus confirming the expectation that it would be vaguely central. The value of the decipherment was shown, however, in the name of the ninth district, Ri-jo, which was likely to translate as Rhion, 'the promontory', and which corresponded perfectly with a site of this name found slightly east of the southernmost tip of the peninsula (considerably south of Pylos).
The tablets also revealed that the first two of the nine districts were in communication with some of the seven districts of the Further Province. Geographically, it was the case that a mountain range east of the palace of Pylos, with only a few passes, divided the Hither Province from the Further Province; for the first two of the nine to be in contact, it was therefore necessary for them to be relatively far north, where the Kiparissa valley of the Hither Province connects with the Messenian valley of the Further Province. Further north, a perfect physical barrier which would form the boundary of the kingdom exists - a high mountain at the northern end of the Kiparissa valley narrows the flat area to a small pass just south of a river. This river may therefore reasonably be considered, on the basis of archaeological evidence too, the northern frontier.

While establishing the locations of the seven Further Province districts was more difficult, the decipherment again played a significant role. The seven districts were always arranged in four groups, and it seemed likely that this would be a geographical organisation. A river runs north-south through the Further Province, while a hill-range runs east-west, dividing it into four, which gives further strength to this idea. It was possible to see from the decipherment that two of the four groups had connections to the Hither Province (i.e. were the western two) and that one of these was clearly on the coast, and therefore had to be in the south-western quarter. More along these lines helped establish the eastern border of Pylos, and the likely locations of the seven districts.
Tablets from the Knossos hoard also revealed a certain amount about Cretan geography. The Cretans had clearly been a dominating power on the seas and abroad, but the Linear B tablets show no evidence of overseas place names, confirming the view that such dominance took place under the Minoans in EM/MM.



Crete is large, and both hilly and mountainous. Far less work has been possible than with the kingdom of Pylos in terms of linking Mycenaean sites and place names, but some comparison of place names has taken place. A small number of place names on the Knossos tablets occur only rarely, even in the sheep tablets, which (see section 5) are otherwise comprehensive. Natural barriers practically cut off the easternmost and westernmost parts of the island from the centre, and the suggestion thus is that this small number of places was in these areas, probably not under direct control from Knossos, but still maintaining some link.
Thus while the decipherment of the tablets has directly provided little information about the geography, careful analysis of them has provided a vast amount of indirect information with which scholarship in the relevant areas has been furthered.
2. Administrative system
All administrative centres used Linear B on clay tablets for
records, and, in the kingdoms of Pylos and Knossos, the
records were then collated at the royal palaces; similar
systems are likely for the other Mycenaean kingdoms, but
only at Knossos and Pylos have sufficient tablets been
discovered to provide strong evidence on which scholars
can make deductions.
Many scribes were involved - analysis of the handwriting suggests thirty to forty at each of Knossos and Pylos. It is also clear that officials, and those scribes who made the records for them, were in charge of particular departments, with one scribe appearing to write only records concerning chariot wheels.
The tablets have also provided scholars with an outline of the many layers of what has been described as a meticulous and efficient bureaucracy [217], with indications of an administrative hierarchy comprising a governor of each province, two seconds-in-command, an official in charge of each of the twelve districts (the ko-re-te), twelve junior officials, and more. It must be emphasised that scholars cannot be utterly certain of this pyramid, but neither is it mere speculation.

[217] L. R. Palmer, Mycenaeans and Minoans (London: Faber & Faber, 1961), p100.

Much of the administration falls into the topics covered by the sections below, and is thus discussed at the relevant point. The chief item which does not really do so, however, is the extremely impressive land registry system found practically intact at Pylos, for the district (which appears to have included the palace itself) of Sphagines (pa-ki-ja-ne). The register comprises lists of those whom, using feudal terminology, we would call tenants, sub-tenants and so on. The deciphered texts show us that the land itself is always described as one of two types: ki-ti-me-na, which is privately owned by a noble or other major personage or,
to continue the feudal comparison, possibly held from the all-powerful king as a fief (which would explain why such detailed records were being kept by the palace in some cases), and ke-ke-me-na, public land which appears essentially to belong to the district (as represented by a group of men forming the damos). In both cases, the land is then leased to tenants, though again the usage of 'leased' is troublesome, as the tablets give us no indication as to what was expected in return for the land.
The details of land-sharing are vastly complex, and some points of the tablets are still not understood, but perhaps the most interesting point which has come out of these tablets as a result of the decipherment is the appearance of remarkable similarities in format between these records and the Domesday Book, compiled at William the Conqueror's request a little over two millennia later. Both list the estates of important nobles and their tenants and sub-tenants, and the public land and its holders similarly; most intriguingly, both also note - but make no attempt to solve at the time - disputes of ownership. The tablets tell scholars that 'Eritha the priestess holds, and claims that the deity holds, the [obscure word, probably referring to the land concerned], but the community (damos) says that he/she [referring to the deity] holds a lease of public plots', as appears on a deciphered Pylos tablet. 'Edward holds this land in the estate of Wiltshire, unjustly as the County alleges, because it does not belong to any estate' comes from the Domesday Book, and the similarities are striking. Such a similarity also means that, while it is not thought that the Mycenaeans employed a system identical to the mediaeval feudal system, scholars have been able to question whether their civilisation may well have had very significant similarities in some areas. It hardly needs saying that the decipherment has thus opened a door whose existence was barely considered previously, thus affecting classical scholarship on Mycenaean civilisation.
3. People & Social Structure (incl. slavery)
It was initially questioned, following the decipherment, whether Greek might merely be the language used for court administration, with the Mycenaeans in fact speaking another language - as in mediaeval England, where the vernacular tongue (Anglo-Saxon) was considered unsuited to keeping records, and so Latin was employed instead. Such an arrangement can only occur when the language employed instead has a great history and has stayed in the knowledge of those who are literate. Given that Linear B is an adaptation of Linear A so that it can write Greek, it is thus highly unlikely that such a modus operandi can have been in practice here - the only language with the heritage which would conceivably have been kept is Minoan, in which case Linear A could simply have continued to be used. Additionally, the tablets, deciphered, speak for themselves, recording as they do a vast number of people's names. The majority of these are clearly Greek: some are typical Greek compound names; others are noun-derivatives, such as the bronzesmith known as Khalkeus, 'Smith'; and some are mere colours, or indeed positively rude. That the majority of names found at all levels of society are Greek is an irrefutable proof that Greek was the spoken language as well as that written by the Mycenaeans.
The Mycenaean social class system is anything but simple, but it does have parallels with other such systems. The deciphered tablets allow a rough reconstruction of this system. Most important is the king, the wanax, a word whose significance is therefore discussed in section 9. The importance of men is indicated on the tablets by the size of their estates (or rather the estates' yield - see section 5). The king is never named in the Pylos tablets, but a personage by the name of E-ke-ra-wo, who has an estate far larger than that of anyone else, and has forty rowers for a presumed fleet of ships, seems so exalted that it is hard to believe he is not the wanax [218].

After the king, we come next to the lawagetas, with the next biggest estate. Some have proposed that he was the commander of the army, or indeed that he was the heir to the throne - both are possible, but it is at present impossible to tell. The tablets then indicate the presence of a class of aristocrats below, known as hequetai, 'Followers'. These are rather reminiscent of many societies with rulers and their companions. The Followers are shown by the tablets to have owned slaves and large amounts of land; they wore a specific form of dress with white fringes, and also had chariots.
The officials mentioned in section 2, such as the koreter, then form the next level down, being the district governors and chief officials. The tablets are still not entirely clear, and the position of the major landholders known as telestai is not at all certain. Chadwick has proposed that these come immediately below the koreter and his fellow governors/officials. Based on the points made in section 2 on land holdings, the damos as a body seems to be on a similar level to the telestai, but the individuals who comprise it are possibly one level lower - people who, though we know many were skilled craftsmen, we in fact know very little about.
The tablets do, however, supply large amounts of detail concerning the large numbers of slaves in Mycenaean society. An elaborate index of slaves exists at Pylos, relating to royal slaves sorted by labour specialism. Slaves' rations give an indication of the large numbers involved. It is clear that individuals could own slaves - mostly these are Followers, though there are also bronzesmiths owning slaves who clearly worked for them. Slaves linked to religion are very different to the others - indeed, 'servants' might be a better term for them.
We may therefore conclude this section with the observation that the deciphering of the tablets has revealed many details about the social structure of Mycenaean society - something whose reconstruction archaeological evidence alone could not have allowed.

[218] J. Chadwick, The Mycenaean World (Cambridge: Cambridge University Press, 1976), p71.
4. Religion
Gods' names appeared not infrequently on the Linear B tablets, and their contexts - and those named - have, in some areas, had a significant effect on classical scholarship. Part of the issue with any previous religion-related deductions based on archaeological evidence alone was the possibility that, on Crete, Minoan and Mycenaean religion were confused. Additionally, some 600-700 years separate any Mycenaean gods found on the tablets from the classical gods found in the Iliad, so there was no starting expectation that the two would be identical by any means.

Problematically, Linear B tablets, being lists, do not include theological texts, dedications of temples or other such useful resources. Nevertheless, a number of tablets have revealed fascinating information. A tablet discovered relatively early at Knossos revealed four Classical Greek deities' presence in the Mycenaean world - yet another confirmation that the tablets were indeed Greek, as well as a demonstration of the presence of some of the classical deities at this point.
The most startling, and rather spooky, evidence provided by the decipherment of the tablets comes from a single, large, hastily-written and not entirely legible tablet. If we agree that the tablets were written in the last days of the Mycenaean world, and show awareness of an emergency and impending disaster (see section 7), then we cannot be anything but astonished by the evidence on this tablet. To various gods, gifts are given and po-re-na is 'brought'. The use of the verb 'bring' gives an idea of what po-re-na might be, and this is then confirmed when we read the dread sentence '[for] Potnia: 1 gold vessel, 1 woman'. This cannot indicate anything other than human sacrifice, and another tablet, at Thebes, has now used the same word with a link to wool which, given that Greek sacrificial victims were frequently wrapped in wool, adds strength to the deduction.
By no means do all the deities mentioned on the Linear B tablets correspond to the twelve classical deities, and Potnia ('Mistress') is a most important deity who evidently did not exist under this name in post-Mycenaean civilisation. Naming a goddess by a title seems odd, but an exact parallel exists in the Roman Catholic tradition of referring to the Virgin Mary as Our Lady. The likelihood is that Potnia was the Mycenaean version of the Earth Mother, a figure worshipped throughout the Aegean Bronze Age in various guises and later morphed and split into the two goddesses Demeter and Persephone. Potnia is also associated with smiths; she has vast power - and scholars would not have known about her but for the decipherment of the Linear B tablets.



Curiously, despite the fact that other Linear B tablets appear to show that Poseidon was clearly extremely important, receiving the most tribute, he is not mentioned on the last-chance sacrifice tablet at all. The other god who requires especial mention is Dionysos. His name appears just a few times, and on fragmentary tablets, and thus not knowing whether it is in religious contexts makes it infuriating that more is not known. Until the discovery of the Linear B tablets, classicists had proposed that Dionysos was a very late addition to the deities, an argument strengthened by various pieces of evidence including Euripides' The Bacchae, which presents him as a young god. This cannot be the case if his name appears in a religious context around a millennium earlier than Euripides - another example where classical scholarship has been majorly affected by the decipherment of Linear B.

The vast number of tablets concerning offerings to the gods is indicative of just how large a role religion took in the Mycenaean world. An extreme example is given by a Pylos tablet for an initiation ceremony, where the amount of barley offered might have provided a month's ration for 43 people. More commonly, though, individual offerings were made, generally of perfumed oil, honey, grain or wool. The perfumed oil was produced in such vast quantities that Palmer has suggested it was a major export product. All this is just a very small amount of the scholarship and argument over interpretations that has come out of the deciphered texts on religion.
5. Measures & Farming
It has already been mentioned above that Bennett had compared the measurement systems of Linear A and B prior to the decipherment, and had found a significant difference - Linear A uses a fractional system for smaller units, whereas Linear B uses a system similar to the European metric system. Such a difference gave key evidence that the two scripts did not write the same language. Once the decipherment was complete, the use of weights and measures in texts on rations, tribute and so on was of key importance. Scholars have, following much argument, been able to use the context of the tablets to establish possible values for the measures (the relation between certain logograms indicating measures was already clear pre-decipherment, but the actual values in terms of modern-day measures - how many kilograms a given unit represented, for example - were unknown). It has also proved possible to identify logograms relating to foods, and thus to consider rations, daily meal contents and more.

Three measure types have been identified from Linear B: weights, and two volume measures - dry and wet - just as we today have oz. and also fl. oz. By analysing tablets whose grand totals must have been calculated using addition of fractions, scholars had, pre-decipherment, largely established the relations between the major unit for each of the three types and its subunits. For instance, the next unit of weight down from the major unit is 1/30 of it, and a third unit follows as a fixed fraction of that.


After Ventris' decipherment, scholars became able to use non-archaeological evidence to help build up theories, originally based on painstaking archaeological analysis, of these units' relation to later units. The weight sub-units revealed that a sexagesimal system was in use, clearly derived from other civilisations of the Near East, and that the main unit was the talent, as it was later in the Classical world. Regrettably, archaeological evidence for weights is still not entirely clear. More progress has been made by scholars on volumes. The classical word for the smallest unit was kotyle, 'cup', and the logogram used for the main unit is clearly a cup, leading scholars to find it highly likely that the word was the same in the Mycenaean world. The Classical kotyle ranged from 270-388 ml, and thus it was likely that the Mycenaean value(s) would be somewhere within this range. Based on careful analysis of many vessels and on ratio calculations, scholars established two possible values for the unit, Palmer and Lang proposing one, Ventris and Chadwick the other. While both are feasible, Chadwick has been able to employ the deciphered texts to support his argument, as they give evidence that his value might be more likely than Palmer's proposal, though the values will remain uncertain until more conclusive archaeological evidence (as well as more tablets) is discovered.

Despite allowance needing to be made for the fact that the Linear B tablets must relate only to crops on the royal estates, or which were organised centrally, a significant amount has been learnt about Mycenaean agriculture, and food more generally, from the deciphered texts. The tablets frequently listed grain with one of two different logograms, and thus there were clearly two main grains which were staple Mycenaean foods. Classical scholars, knowing that barley and wheat were the two staples later, assumed that these would be the same. Previously it had been impossible to establish which logogram corresponded to which grain. However, after the decipherment and the understanding of the different weights, it was revealed that the monthly ration of one grain was almost double that of the other (2 : 3.75); on the basis that the two amounts had similar nutritional value, scholars were then able to conclude that the grain of which less was issued must be wheat, and the other barley.

Land is not, as a rule, measured in acres, but in its productivity - which thus allows for the difference between an acre of stony mountainside and a similar space of fertile plain. Men appear to have been able to hold more than one plot, and sizes vary massively, with the smallest encountered bearing only a single unit (i.e. 1/1800 of the king's plot). The important Pylian land registry tablets concerning Sphagines, discussed in section 2, give us a complete record of the district's distribution. Knossos, however, unlike Pylos, for reasons which shall be explained in section 7, is the location which provides us with records relating to the results of the harvest. Scholars are now able to read the tablets to find that towns produced amounts of wheat which, given the land and farming techniques available, seem feasible, while some also produced olives - perhaps, it has been suggested, in the Classical Greek practice of growing wheat in the earth between olive trees. (Surprisingly, figs are evidently important in the Mycenaean world too, with slave-women astonishingly appearing to receive an equivalent volume of figs to wheat.)

Spices are also mentioned - indeed, they appeared in no small number on the tablets excavated at Mycenae, coriander, koriadnon, being particularly common and appearing, in Chadwick's interpretation, to be used in large quantities. This is not necessarily just for cooking - Palmer has proposed that many of the spices mentioned (others include cumin, fennel, mint and sesame) may mostly have been used for perfumes and unguents, which would explain the vast quantities in some cases; there is one clear example of units being issued to a perfume maker. It is quite possible that spices are mentioned far more frequently on the tablets than it has been possible to ascertain for certain, as scribes may well have used single syllable/phonogram shortenings, with coriander possibly also listed under ko-, and similarly fennel, marathwon, as ma-.

Rations are still being debated hugely in terms of the interpretation of the tablets, depending on the algorithm used to establish individual rations given a list of a number of people of different types and an overall total (e.g. 18 men, 8 boys, total 97.5 T). What does seem reasonably clear, given the various opinions, however, is that men received a certain ration; women probably received significantly less; most children (not boys already working with the men, who received the same ration as the men) probably received half of what women did, and possibly very young children half of this; and slaves of any age or gender appear to have received half of what their non-enslaved equivalents would, though what constitutes a slave as opposed to a worker is also debated. All this seems vague and rather tenuous, but it must be remembered that until the decipherment of the tablets none of this was possible - allowing us to conclude that, here too, Ventris affected classical scholarship in no small way.

Wine is also present; an interesting linguistic point has emerged from the relevant tablets - the Classical Greek word for vine does not appear in Mycenaean Greek and is thus thought to have been borrowed from another Mediterranean language; Chadwick reports [219] that the Mycenaean word for vine was only identified because of its being quoted, luckily for scholars, in an extraordinarily ancient dictionary. Likely to have been a rarity, wine does not appear in the regular ration lists, but rather in a few individual tablets, and mostly in (relatively) small quantities. Large jars with clay labels for wine do appear in a palace cellar in the kingdom of Pylos, confirming its existence at least for royalty.

[219] J. Chadwick, The Mycenaean World (Cambridge: Cambridge University Press, 1976), p124.
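One possible consistent reading of the example total in the rations paragraph above (my own arithmetic, not a claim from the literature): if the 8 boys in that record were already working with the men and so drew the same ration, then

\[ \frac{97.5\ \mathrm{T}}{18 + 8} = 3.75\ \mathrm{T} \ \text{each}, \]

which happens to match the larger of the two monthly grain rations mentioned earlier.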



Lastly, but by no means least, deciphered Linear B texts have provided helpful information on Mycenaean livestock and beekeeping, both in the context of the palaces and in that of the common people. Given that one of the wine jars mentioned above had the wine logogram followed by 'honeyed', the keeping of bees for honey is obvious. The tablets concerning land-holdings indicate two groups to do with beekeeping and honey production, some officials having the title me-ri-da-ma-te, which scholars suggest [220] is formed from the words for 'honey' and something along the lines of 'overseer'. On the extant tablets, honey largely appears in offerings to deities, rather than being for eating, and is also listed on a tablet of items to boil in the unguent. We know [221] that perfume producers did indeed use honey, so this may well have been its other major use.

In terms of livestock, the tablets are mostly concerned with herd animals, of which the vast majority are sheep and goats - cows and bulls are rare. Oxen also appear only infrequently, and a few tablets at Knossos concerning working oxen (a pair of oxen and the name of their driver) have produced by far the most remarkable point, which has given much support to the proposal that not only was Greek written by the Mycenaeans (in Linear B), but it was also spoken by the common people. How else would we find oxen named (on the tablets) Stomargos (Chatterbox), Kelainos (Dusky), Aiwolos (Dapple) and more? Thus, through a point fascinating in a series of ways - it is also interesting that, until the recent advent of vast herds beyond easy count, such names remained in use; consider mediaeval mentions, or indeed James Herriot [222] - we have yet another example where the decipherment has aided classical scholarship.
But sheep, and to a lesser extent goats, are the most mentioned, a wonderful tablet set (over 800 tablets) at Knossos detailing seemingly every flock, with a total number of sheep near to 100,000. Logograms for ram and ewe had already been established, but a number of abbreviations had not been clear. This was complicated by the fact that, as with the French use of ils in the third person plural whenever the group contains even a single masculine figure alongside any number of females, grand totals of sheep are noted solely by the ram logogram. Further reading of the deciphered texts revealed that the age of the sheep (old or young) was also noted, as were the numbers born, which were compared with the previous figures. The tablets also revealed that about two thirds of the flocks were solely the property of the king, while the others, although in some way linked to him, were not solely his; the suggestion has been made that the produce of these latter flocks was given to those whom he had to provide with an income, possibly including the powerful Potnia.

Goats are mentioned as well, and the presence of cheese on the tablets makes it very likely that milk from both goats and sheep was used for its production. Horns are clearly a product too, but what these were used for is less clear: Evans suggested that they might have been used for making a wood-horn composite bow, examples of which had been found in contemporary Egyptian sites, but a key set of tablets is still unclear, with scholars unable to agree on why the horn lists divide goats into she-goats and ra-goats, given that a he-goat logogram also exists.
Finally, hunting, which was already present in Mycenaean art showing game and also lions, also appears on the tablets, which mention deer in particular. Scholars have also been able to deduce that dogs were presumably used, the word used for huntsmen (kungetai) translating literally as "dog-leaders".
Thus the tablet decipherment revealed a significant amount which has affected classical scholarship, showing for the first time the possible ration system used, the farming carried out, the meticulous organisation of the sheep flocks, and the presence of livestock more generally - the vast majority of which could not otherwise have been uncovered.
6. Craft, Industry and trade
The excavation of Mycenaean sites and shaft graves had
already shown there to have been a very high level of
craftsmanship present. This view was confirmed and
strengthened by the reading of the tablets, which also
revealed new information about trade in general, and
raised questions as to the Mycenaean economy.
Any who have seen the walls of the palace of Mycenae will come to the conclusion that building was an industry highly developed in Mycenaean civilisation. This is true of these fortifications, but unfortunately, both for scholars and (as will become apparent in section 7) for the Pylians, Pylos clearly did not have any fortifications, nor did Knossos. These being the two sites from which the majority of the Linear B tablets come, it is perhaps unsurprising that Linear B has revealed very little about the building of fortifications. Remembering, though, that the majority of Mycenaeans did not live in such palaces, it has been of great note that a Pylos tablet, clearly listing building materials which are probably223 for a small-ish hall, a megaron, has allowed (provided that this assumption is made) the recreation of the design of such a place. More work has taken place on this since, but this single example of scholarship is indicative of the decipherment's effect.
220 J. Chadwick The Mycenaean World (Cambridge: Cambridge University Press, 1976) p126
221 Theophrastus De odoribus
222 J. Herriot It Wouldn't Happen to a Vet (London: Pan Books, 1972)
223 J. Chadwick The Mycenaean World (Cambridge: Cambridge University Press, 1976) p138

A small amount of what appeared to be the remnants of furniture had similarly been discovered, but our view was changed hugely by a set of tablets from Pylos listing the
belongings of a household. The astonishing list contained on the tablet is best summarised by Palmer's comment that "from this [translation] it will be clear that no archaeological finds... had given us any idea that furniture of this luxuriousness adorned the palaces of the Mycenaean kings. The nearest parallel [to the luxuriousness] is offered by the tomb of Tutankhamen"224. High praise indeed, but then this is a list which includes jugs decorated with goddesses, bulls' heads and shell patterns; a portable hearth decorated with rosettes and a flame pattern; a table made of stone and crystal, inlaid with kuanos (blue glass paste); and more. To scholars trying to establish the level of craftsmanship present in the Mycenaean world, being able to read this tablet has been invaluable.
We do have tablets which have given information on quite a few other points, but those concerning metal are definitely worth a brief mention. Archaeological discoveries had already revealed the use of the metals gold, silver, lead, copper and tin (the last two required to make bronze), some of which were imported. Certain tablets at Pylos relating to bronze smiths have been discovered: each smith and his location is listed, as are the amounts of bronze issued to them, leading to the deduction that the palace kept careful control of metal supplies. Given an incredibly high total of around 400 smiths, Chadwick assumes that there was normally a surplus of metalwork, which could account for the exports which the Mycenaeans must have had in order to exchange for raw metal imports. It is clear from the tablets that bronze was employed for many different things, and while, had archaeological evidence been limitless, much of this might have been known already, the archaeological evidence that scholars actually have is nothing of the sort; thus the tablets' decipherment not only provided information which could not conceivably have been gained in any other way, but also filled in gaps in the archaeological evidence.
Additionally, deciphered Linear B has allowed scholars to begin to question how the Mycenaean economy worked. It is known that coinage did not exist until the 7th century BC, and, whereas contemporary civilisations in the Near East evidently valued objects using silver and gold, there is no evidence on the tablets to suggest that this took place in the Mycenaean world. Scholars including Chadwick have therefore suggested "a system of obligations on both sides"225, in which villages seem to have paid tribute to the palace - items ranging from cereals to sheep - and the same central body at the palace distributed amounts of various items to the villages: a system in which the concept of payment is non-existent. Tablets have, however, also been discovered suggesting (though this is by no means definite, as the word in question, o-no, cannot properly be matched to one in later Greek) a bartering system for foreign trade, with alum possibly being bought for textiles and wool.

224 L. R. Palmer Mycenaeans and Minoans (London: Faber & Faber, 1961) p149-50
225 J. Chadwick The Decipherment of Linear B (Cambridge: Cambridge University Press, 1958, 2nd 1967) p121
7. War & the end of the Mycenaean World
With the decipherment, scholars have uncovered many facts about Mycenaean arms, providing us with tangible evidence that helps to explain archaeological finds in grave circles and elsewhere. It has also been possible to form ideas as to the organisation of Mycenaean armies; most interesting of all, as the tablets were written and destroyed yearly, is that the Pylos tablets preserve a snapshot of what seem likely to have been preparations for defending the kingdom from an impending attack. That these were, in the end, unsuccessful is testified to by the fact that the palace of Pylos was sacked and never rebuilt; excavations have revealed arrowheads and human bones outside the palace walls, probably the last remnants of the force that strove, in vain, to defend an unfortified (unlike Mycenae) Pylos.
How did the Mycenaean armies move around? There was probably infantry, but the question was whether there was anything else as well. In the Iliad, chariots act rather like taxi-shuttles, the various heroes being driven into battle and then dropped off in its midst. Yet we are drawn to a comment by the constantly-reminiscing Nestor, advising the war-lords to employ a formation of a large number of chariots charging at a gallop226, and implying that this was no longer usual. Thus scholars were not entirely surprised to discover evidence on the Linear B tablets to suggest that two-horse chariots, carrying two men, were indeed used in groups in war (though details indicate that not all chariots recorded were used for military purposes), with one tablet set at Knossos appearing to be the register of a chariot brigade, each tablet recording a man's name and a chariot with wheels (wheels were normally kept separately). The tablets are fragmentary, so the total is not clear, but Chadwick counts at least 82 chariots, and suggests an original force probably greater than 100.
Much detail concerning Mycenaean armour has resulted from the reading of the relevant tablets. While some bronze body plate-armour has been found in a shaft grave, it is clear that this is not indicative of the armour worn by most. It seemed likely that the corselet-like protection would comprise a thick linen garment (fragments of such a thing were discovered in another shaft grave) with plates attached. Thus a tablet that mentioned linen for a khitōn (Classical Greek chiton, a tunic) alongside 1kg of bronze, and another kilogramme of bronze for epikhitōnia (tunic-fittings), could be read as strong (though not proven) evidence for such a garment being reinforced; the extra bronze may perhaps, if not for the fittings, Chadwick suggests, have been for a reinforced cape. The tablets have, through their detailed pictograms, also helped in suggestions as to how the bronze plates fitted onto the linen garment, though they were not precise enough for guesswork not to be required, and Chadwick and Palmer have different reconstructions, depending on how much the pictogram is relied upon. It should, however, also be mentioned that some key items are missing from the tablets - the well-known figure-of-eight Mycenaean shields, and greaves ("well-greaved" being a Homeric epithet) - for which archaeological evidence is so strong that we must simply assume that the relevant tablets have either not yet been found, or have been lost forever. We must also consider the fact that not all tablets may be genuine: Chadwick suggested that a group of tablets written in different hands, but all with the same distinctive style of writing, were probably scribe-school exercises.

226 Homer Iliad IV
Knowledge of weapons has also been extended by the tablets' decipherment, certain tablets making clear that charioteers carried spears with wooden shafts but enkhea khalkērea (bronze points). A group of tablets excavated by Evans at Knossos had included a logogram for what Evans suggested was a sword; now able to read pa-ka-na (Homeric Greek phasgana) beside said logogram, we may confirm the hypothesis. Arrows are also mentioned, aligning nicely with a box of what appear to be arrowheads at Knossos. However, two distinct logograms were found. The first is certainly an arrow; the second was assumed by Evans to be one too, but Chadwick has made the suggestion that it instead represented a short throwing spear (N.B. not the same as the heavy thrusting spear mentioned above for charioteers), and that some supposed Mycenaean arrowheads may in fact be spearheads.
A clear suggestion of the dangerous situation evidently unfolding at Pylos comes from a previously-mentioned tablet, stating contributions of "temple bronze" for the making of weapons, to be made by local governors and deputies, and also from another tablet which appears to list (without a reason) gold received, again from district governors and deputies. The total amount of gold on the tablet is extremely high, given that gold seems to have been very scarce in Mycenaean times, probably being imported, and Chadwick has suggested that it might either have been required for the hiring of mercenaries, or possibly for the buying-off of the attackers - Mycenaean Danegeld, as it were. Given the yearly composition of the tablets, and therefore that those excavated refer to the final months before the destruction of Pylos, we may deduce that the military actions mentioned are preparations for the impending attack. However, while most agree that this seems likely, scholars cannot be certain: if we assume that what we read are the actions of a state of emergency, then we have no evidence for what actions were taken annually in peacetime, and vice versa. Palmer in particular carried out a lot of work on this, as did Ventris and Chadwick, and the result is a series of intriguing details on army units. The positioning of the units indicates that the attack was expected to come from the sea.

Stopping the attack as near to the landing as possible was essential for the Pylian kingdom, as, unlike Mycenae, the palace of Pylos did not have fortified defences; a five-tablet document has been found which indicates how the coastline was divided into ten sectors with lookouts and ships for defence.
We are, regrettably, no nearer on account of the decipherment to knowing for certain who invaded, and why Mycenaean civilisation came to such an abrupt end - the palaces never rebuilt, Linear B falling out of use with this and the arrival of the Greek Dark Ages. However, because of our yearly tablets, we can in fact date the attacks within the given year. At Knossos, the grain harvest is already in the process of being gathered in; additionally, the tablets appear to list seven month names, one of which is probably the last of the previous year, which would suggest that Knossos was in the sixth month when burnt. Everything appears to point to a late May/early June attack.
At Pylos, the situation is very different. The sheep shearing has not yet taken place: it must still be winter, or very early spring. We have very little material with months, but the notorious list of human sacrifices is titled po-ro-wi-to-jo, and Palmer has suggested that this is Plowistoio, "[the month] of sailing". Given that the sailing season during this period began at the end of March, Pylos seems to have been destroyed, based on these proposals, in early spring. We have said that we do not know the cause of the destruction, yet one point is worthy of note: the tablets indicate a clear fear of invasion by sea.
No one can say for sure what the reason for the destruction was, but we can be relatively sure of the details on weapons and emergency preparations thanks to the decipherment, without which scholars would, here too, still be speculating merely on archaeological evidence.
8. Minoan or Mycenaean?
An extremely thorny issue which Ventris and Chadwick had to overcome was that of which tablets (and therefore which archaeological evidence and periods) belonged to the Minoans, those ruling from Crete, and which belonged to the Mycenaeans, who presumably became the Achaeans. When excavating Knossos, Evans expected that the results would concur with those of Mycenae, supporting a theory of a Mycenaean world. However, awed by what he discovered at Knossos, he ended with a different theory, of Minoan supremacy, in which Crete controlled the mainland during the Aegean Bronze Age, up to the destruction of Knossos at around 1400 BC, bringing LMII to a close. This then left around 200 years in which things would have had to reshuffle, so that the Achaeans of the Iliad appear led by Agamemnon, High King of Mycenae, not Idomeneus, King of Crete.
Evans discovered three forms of writing at Crete, whereas no writing had at the time been discovered on mainland
Greece; the Minoan dominance argument therefore hinged on the point that the tablets could not write Greek. The two non-purely-pictographic scripts, Linear A and B, were clearly related, B being a later, modified form of A. The first warnings arrived with the discovery in 1939 of Linear B tablets on the mainland, at Pylos. If Linear B were exclusively used for Minoan writing, at their administrative centre in Knossos on Crete, why should there be such a large number on the mainland? Bennett's analysis gave further evidence that Linear A and B did not write the same language. Indeed, given that B had been found outside Crete, it was possible that B wrote a language brought to Crete from outside - not Minoan.
Much to his surprise, Ventris (who, until extremely late on in the decipherment, maintained his theory that the language would be a relation of Etruscan) deciphered the tablets and, in doing so, revealed that they wrote an archaic, but recognisable, Greek. Thus the Linear B tablets found at any location had to be Mycenaean, not Minoan227. This suited archaeologists and classicists concerning mainland Greece: the palaces, destroyed much later than Knossos, were Mycenaean, and could even link to Homer. The problem was the other conclusion that followed: Knossos in LMII (i.e. at the time of the Linear B tablets) must have been Mycenaean, not Minoan. Evidence for a Minoan civilisation before the Mycenaean was present in abundance. The questions now raised were when the takeover took place, and why it occurred. It seemed likely now that Linear A wrote Minoan, so the switch from A to B should give this point. Unfortunately, while we can date finds as precisely as Early/Middle/Late and I/II/III within each of these, it is not possible to do so with greater precision, even with the advent of carbon dating. What is more, it appears that the Linear B tablets may overlap somewhat with Linear A's final examples.
In terms of the reasons for the takeover, scholars are, similarly, not entirely clear, but it appears that Crete lies on a small tectonic plate which has left the area subject to earthquakes. Following an earthquake, proposed to be around 1500 BC, on the small island of Thera, located due north of Crete, there was a volcanic eruption of such magnitude that most of the island was destroyed in the explosion. The eruption was greater than that at Krakatoa, and would have caused a tsunami with a height of perhaps 100m (pumice presumably distributed by said wave has been found 27km away at heights of up to 250m); perhaps even more importantly, a vast ash cloud would have been produced which (if a layer of 10cm depth was left on much of Crete, as proposed) would have killed any vegetation present. After a few years, the weather would have ensured the removal of the ash, allowing the land to be fertile once more, but, immediately after these events, Crete must have been devastated, its fleet destroyed by the tidal wave and its land impossible to live on.

Thus the proposed theory is that the volcanic eruption caused a decline in Minoan civilisation, and that the mainland Mycenaeans took the opportunity to remove a powerful, and dangerous, neighbour. At the end of LMIB (perhaps around 1450 BC), every Cretan palace was sacked with the exception of Knossos, leading Chadwick to suggest228 that, having had time themselves to recover, the Mycenaeans then invaded Crete and destroyed all the important centres bar Knossos, which they kept for themselves.
Strangely, Knossos itself definitely seems to have been sacked in around 1400 BC, though, as ever, the date is disputed. Yet the rest of the Mycenaean world continued happily for another 200 years; indeed its zenith, in LHIII, occurs after Knossos's destruction. This seems bizarre. Equally bizarre is the fact that vases have been discovered at Thebes, appearing to date to the LHIII period, which have been shown to have been exported from Crete, suggesting that, although Knossos and its control over Crete were destroyed, Mycenaean life continued.
Further research continues to be carried out on the Linear
A tablets, in an attempt both to decipher them and
ascertain what language Minoan was. As yet, though,
there has been no success. Quite apart from anything else,
there is much less material on which to work than there
was for Linear B. Chadwick commented that it is equally
possible that the Minoan language died out without trace
and has no known cognates229.
9. Homer: Names, Linguistics and Anachronisms

While the tablets have not revealed a vast amount about Homer, they have nevertheless shown up some very interesting points, furthering the advance of classical scholarship and research in this area. The tablets contain a vast number of proper names, and, while none of the famous Iliadic characters may be identified on the tablets, it appears that many of the people have Homeric names, confirming that the names used by Homer were genuinely Mycenaean. That we come across identical names is not wholly surprising, as a look at the tablets as a whole suggests that the Mycenaeans used a relatively limited number of names. Stranger, however, is that some Mycenaean names found are those given by Homer to Trojans, with both Hektor and Tros found. The implication is that the Trojan names, too, come from Mycenaean - not that the Trojans necessarily spoke Greek.
The tablets have also helped scholars with the understanding of the links between, and development of, the various different Greek dialects. It is noteworthy that, when the tablets were first deciphered, philologists' views had to be changed almost immediately: a word previously expected to read as henweka (heneka in Classical Greek) instead was written consistently as eneka. Evidently either the etymologists were wrong, or there is a particular reason in Mycenaean Greek for dropping the w (the letter digamma, not found in Classical Greek) in this case.

227 J. Chadwick & M. Ventris Evidence for Greek dialect in the Mycenaean Archives (London: Journal of Hellenic Studies, 1953) 73
228 J. Chadwick The Mycenaean World (Cambridge: Cambridge University Press, 1976) p12
229 J. Chadwick The Decipherment of Linear B (Cambridge: Cambridge University Press, 1958, 2nd 1967) p156

Meanings have been shown to change over time, a brilliant example being provided by the tablets in the form of the difference between wanax and basileus. In Classical Greek, wanax does not exist, and basileus is a king. In Mycenaean Greek, wanax is the only word for king, and it appears that the word may also have divine implications, while basileus is merely a lowly local leader - we have a basileus of a group of bronze smiths present on the tablets. Chadwick has suggested that the words may perhaps have risen in rank over the years: during the Greek Dark Ages, with no monarchies or major leaders present, the term wanax went out of use, there being no use for it, and the minor leaders, "petty chieftains"230 holding the title basileus, were now the highest rulers in the land; in time, they became kings.

It is, however, also notable that nowhere in the Homeric epics do we find mention of a lawagetas, the commander of the king's hosts (nor an equivalent word), or of a single hequetas. We may deduce that over the centuries they were written out of the tales handed down orally, so that Homer, by his time ignorant of the Mycenaean social structure, did not include them.

It is thus clear that Homer is often inaccurate (though not always - e.g. the catalogue of ships (Iliad Book II) clearly relates accurately to the Aegean in the Bronze Age). Cause for particular concern is Homer's geography. Additionally, anachronisms occasionally appear, with occasional iron weapons in the Iliad. We would not expect to find ironwork in the Bronze Age, and the Linear B tablets confirm this absence. Thus the tablets confirm what was suspected to be anachronistic, and allow scholars to propose that parts of the Iliad (including the iron weapons) are add-ins which had become one with the epic at some point between the war and Homer's writing. Such comments make Homer sound like a terrible destroyer of history, but it is essential for us to remember that he was a poet, not a historian, and was writing some 500 years after the events. If we compare the Iliad with the (relatively speaking) modern example of the Chanson de Roland - which was written in the mid-12th century AD but based on a battle of 778 AD231, and is inaccurate to the point of having the wrong enemy - Homer's mistakes seem positively reasonable. So in yet another way the Linear B decipherment has affected classical scholarship, this time in allowing greater analysis of the Homeric epics, establishing, in the minority of places possible, similarities and differences between the verse and the tangible records.

10. Continuing research - RTI

Additionally, it is by no means the case that research has come to a halt. For instance, a serious problem with many of the tablets is that they are, from age and also from originally bad handwriting, illegible in part, making their interpretation of relatively little value. However, a new technique known as Reflectance Transformation Imaging (RTI) is set to change this. Developed by Hewlett Packard Laboratories232, the technology comprises an opaque dome with 76 LEDs mounted inside; a camera is attached to a hole in the top of the dome. Artefacts are placed beneath the dome, in an environment free from any external light. The camera then takes 76 pictures, each with a single different LED lit.233

Computer software then creates a Polynomial Texture Map file. The revolutionary point is that the software uses the 76 pictures to produce a 3D model of the tablet. The computer can then simulate the tablet's appearance in light from any given angle, and can emphasise the differences in depth (i.e. where there are inscriptions) on the tablet, permitting the reading of tablets previously illegible. The technology is already being used for a comprehensive collection of the extant cuneiform tablets, and was recently trialled on the Ashmolean Museum's small, but representative, collection of Linear B tablets. The museum reports that "The application of RTI-technology to the Linear B tablets appears to be very promising... in the future, RTI-technology can enhance further the production of drawings that could potentially alter our knowledge of Late Bronze Age administration"234. Thus it may well be that the use of the decipherment is far from finished, depending on the RTI results on the other Linear B tablets.
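The mathematics behind a Polynomial Texture Map can be sketched briefly: each pixel's brightness is modelled as a low-order polynomial of the light direction, fitted by least squares across the 76 captures, after which the tablet can be relit from any angle. The sketch below is a simplification of the real software, and the images and light_dirs arrays are hypothetical inputs:

```python
# Minimal PTM-style fit: per-pixel luminance as a biquadratic polynomial
# of the light direction, following the general idea described above.
import numpy as np

def fit_ptm(images, light_dirs):
    """images: (76, H, W) grayscale captures; light_dirs: (76, 2) per-LED
    (lu, lv) direction components. Returns (6, H, W) coefficients."""
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # Design matrix: one row per LED, six polynomial terms.
    A = np.stack([lu*lu, lv*lv, lu*lv, lu, lv, np.ones_like(lu)], axis=1)
    n, h, w = images.shape
    coeffs, *_ = np.linalg.lstsq(A, images.reshape(n, h*w), rcond=None)
    return coeffs.reshape(6, h, w)

def relight(coeffs, lu, lv):
    """Simulate the tablet's appearance under a new light direction."""
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0*lu*lu + a1*lv*lv + a2*lu*lv + a3*lu + a4*lv + a5
```

Relighting from a low, raking angle exaggerates the shadows cast by incised strokes, which is what makes previously illegible inscriptions stand out.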
Problems reducing the effect of the decipherment on classical scholarship
1. Lack of material
Writing in 1958, Chadwick stated that "our chief hope must be in the discovery of new texts"235. Since then, more tablets have been discovered at a variety of sites, but it is notable that Robinson, writing in 2002, still notes that the rate of discovery of tablets has slowed dramatically since Ventris' time236. It does not help that certain recently discovered Mycenaean sites could not be excavated fully, modern villages having been built on top of them, instead of merely the ever-present Grecian olive grove (as at Pylos), with its far greater capacity for digging. While research of course continues on the current Linear B tablets, one can only get so far with the existing material. This is down to three main problems.

Firstly, the tablets themselves were not produced to last. Evidently the reusable rough paper of Mycenaean Greece, the tablets, when the records they kept had been finished with, were either thrown away or pounded with water and reused; they would not have survived the multiple millennia but for the sacking and burning of the palaces.

230 J. Chadwick The Decipherment of Linear B (Cambridge: Cambridge University Press, 1958, 2nd 1967) p115
231 http://en.wikipedia.org/wiki/Song_of_Roland - last accessed 8/ii/14. Wikipedia can be edited by anyone but, while this does leave it open to possible inaccuracy, articles which are not controversial remain relatively up-to-date and accurate through the system of peer review and correction possible on account of free-access editing, meaning that the information contained within this page is likely to be both accurate and reliable.
232 http://culturalheritageimaging.org/Technologies/RTI/ - last accessed 8/ii/14
233 http://www.ashmolean.org/departments/antiquities/research/research/rtisad/ - last accessed 8/ii/14
234 http://sirarthurevans.ashmus.ox.ac.uk/collection/linearb/ - last accessed 8/ii/14
235 J. Chadwick The Decipherment of Linear B (Cambridge: Cambridge University Press, 1958, 2nd 1967) p137
236 A. Robinson The man who deciphered Linear B (London: Thames & Hudson, 2002) p157
Thus, ironically, this act of destruction in fact saved the tablets, baking and thereby preserving them. Nevertheless, when excavated they were friable at best. This is perhaps best demonstrated by two examples: that of Evans, who left a tray of excavated tablets in a storage shed, unaware of its leaky roof, and came back in the morning to find an indecipherable muddy mess237; and that of Blegen's initial excavations at Pylos, which almost immediately revealed a flattish object - but unfortunately its discoverer, "an inexperienced workman[,] picked it up and wiped it clean with his hand... he had obliterated the writing on a clay tablet which had lasted over three thousand years"238. Thus it comes as little surprise that many of the tablets which the extant tablets indicate must have been present simply do not appear to have been preserved. This is, however, infuriating for scholars, who have estimated that, in places, we have only a third of the tablets originally present. Tablets were sorted in boxes and filed with a contents tablet; thus if we have such a tablet but only, say, three out of the twenty-four tablets belonging to its collection, certain questions cannot be answered without making great assumptions about the missing tablets.
The tablets also broke extremely easily. The vast majority consist of at least two fragments, often more. While we can interpret the bigger fragments, the interpretation can often be incorrect; the smaller fragments are generally illegible, or contain too few characters for translation and interpretation. This leaves scholars trying to assemble vast numbers of jigsaw puzzles simultaneously, where the pieces of all the puzzles are mixed together, and where many are also missing. This can be carried out, with great difficulty (and some scholars have had great success, piecing together hundreds of fragments), but many fragments remain orphans. Particularly infuriating is the fact that, because the tablets comprise almost solely inventories in a very brief form, the title at the top of each tablet is crucial. It should, however, be mentioned that, in one case, the finding of a tablet in multiple pieces, the outside pieces being found earlier, was actually extremely useful and was used by Palmer as an added proof.
237 A. Robinson The man who deciphered Linear B (London: Thames & Hudson, 2002) p12
238 L. R. Palmer Mycenaeans and Minoans (London: Faber & Faber, 1961) p45

2. No literature
It now seems inevitable that there is no extant literature written in Linear B. Unlike the Cypriot and cuneiform scripts, whose symbols we can trace simplifying over time to a few heavy lines and dots, the Linear B symbols on all of the extant tablets have "fine lines and delicate curves"239 which, scholars concluded, could only have been written on such clay tablets with skill and a stylus with an extremely sharp point. (Compare a crude cuneiform or Cypriot sign with the carefully drawn equine pictogram of Linear B.) If Linear B were only written in clay, we would expect much simpler symbols; that its signs remained so complicated indicates its also being written with pen or brush, on paper or some such material.
Given that papyrus was used by contemporary Egyptians, this is an option; another is vellum, Herodotus mentioning that the Ionians once used skins for writing on240. Unfortunately, such a material would long since have perished, both on account of the sacking of the palaces and of the time between this and the excavations. Additionally, Chadwick suggests that the relatively clumsy orthography of Linear B makes it "uncertain as to how far a document... would be readily intelligible to someone who had no knowledge of the circumstances of its writing"241: similar to shorthand, the writer might easily read it back, but a stranger might find considerable difficulty in doing so. Thus the existence of written prose or verse is extremely unlikely; we are highly unlikely to have lost Mycenaean literature owing to its being written on a medium other than clay. (There is, however, hope that we might find written messages, these probably being written at the time in a standard format, essentially as an instruction to the messenger, with some Persian letters beginning "To the king, my master, say".)
Hence the hope of recovering either written pre-Homeric epic, or written prose relating to the events of the Iliad, is now generally agreed to have been extinguished. This reduces the effect on scholarship from what it was hoped to be, by indicating that the chance of ever being able to consider pre-Homeric literature is approximately nil.
3. Criticism of the decipherment

"That the Greek of the [Mycenaean] time, by a kind of shorthand, left out the endings and wrote so to speak only the stem of the word is the most inconceivable of all possibilities."242

239 J. Chadwick The Decipherment of Linear B (Cambridge: Cambridge University Press, 1958, 2nd 1967) p130
240 Herodotus Histories V:58. Herodotus' tales are known to be unreliable in places and economical with the truth, but, at the same time, certain points made by him which were previously thought to be fictitious have since been shown by newer archaeological evidence to be true. We must thus take his agreement with the general idea as convenient backing, but cannot base our argument on his comment, which may or may not be accurate.
241 J. Chadwick The Decipherment of Linear B (Cambridge: Cambridge University Press, 1958, 2nd 1967) p131
The vast majority of scholars accepted the decipherment relatively quickly (much more quickly than the time taken for Champollion's hieroglyphic decipherment to be acknowledged as correct), but a few, as the above quote, written five years after the decipherment, shows, held that it could not be correct and criticised it.
Such criticism was led by A. J. Beattie. As a reply to Ventris and Chadwick's original article243, Beattie quickly published his first counter-article, written in a most hostile tone. Taking the hypotheses that Linear B was indeed syllabic and wrote Greek, he then discussed various points made by Ventris and Chadwick. Chief amongst these was the syllabic grid, on the discussion of which Chadwick subsequently wrote that "it is clear that he did not understand how it was constructed... his whole account of this stage is distorted"244. Quite how faulty Beattie's argument was, as far as those who accepted the decipherment were concerned, is shown by Palmer's comment that, at a crucial stage of his argument, Beattie "made twelve mistakes in reading the [Linear B] signs in half a page"245. It does not therefore seem surprising that Beattie was not able to reconstruct the decipherment stages.
Beattie's next step in the article was to admit that many words and phrases make sense, but to argue that this is worthless, as it was not (to him) known whether such words had been used by Ventris to produce the syllabic values in the first place. But the fact that so many Linear B words and phrases make sense, including on "virgin" tablets not present at the time of the decipherment, proves that this cannot be so: if Ventris had just used the relevant words to construct the syllabic grid and the language were not Greek, then any other words would read as gibberish, something which they patently do not do.
Additionally, Beattie argued that not all words could be translated, but Chadwick and others have pointed out that, given that the subject matter, the proper names in Linear B, and the archaic dialect were all unknown, it is unsurprising that not every word is immediately comprehensible. Chadwick cites as a comparison a person reading Chaucer for the first time, never having seen comparable stories, names or Middle English, who would also be unlikely to understand every word.
242 W. Eilers (Berlin: Forschungen und Fortschritte, 1957) 31, pp326-32 (quoted in J. Chadwick's The Decipherment of Linear B)
243 J. Chadwick & M. Ventris Evidence for Greek dialect in the Mycenaean Archives (London: Journal of Hellenic Studies, 1953) 73
244 J. Chadwick The Decipherment of Linear B (Cambridge: Cambridge University Press, 1958, 2nd 1967) p90
245 L. R. Palmer Mycenaeans and Minoans (London: Faber & Faber, 1961) p66

Professor Grumach of Berlin also argued against the decipherment, proposing (on a similar line) that the Linear B orthography is merely a convenient way of producing
Greek from words written in a different language. However, Chadwick has pointed out that the large number of pictographic logograms whose interpretation is unmistakeable (one, for example, is clearly a deer), accompanied by words which transliterate to the correct word in Greek, effectively disproves this argument. The argument was also made that there would be no need for both word and pictogram. However, not only does the word sometimes add description to the ideogram (e.g. an ideogram of a pot followed by a word indicating its height), but this practice also apparently still occurs occasionally today in Japanese newspapers, where a rare ideogram is accompanied by the word's reading in syllabic form - and we use a similar concept on cheques, writing both "Two Pounds" and "2.00". The fact that all the words transliterated from Linear B also follow the inflexions found in Greek essentially proves that they cannot belong to a foreign language.
Perhaps the best evidence that such criticism could not be correct comes from the "missing link" tablet translation. At Pylos, the outer two parts of a tablet broken into three equal parts had been found. By translating these parts using Ventris' decipherment, scholars were able to hypothesise as to the contents of the centre section. Following publication of such translations, the middle third was discovered. Palmer, amongst others, had proposed that the tablet referred to various cooking and hearth utensils, based on the extant parts. The middle section had the word for hearth, and the adjectival description on an outer section agreed grammatically with this word. Other proposals based on the outer two sections were also proved correct, as could only happen if a correct decipherment had been used.
Thus a considerable amount of space has just been used in emphasising that the vast majority of the criticism was unfounded and, as such, cannot be considered to have diminished the overall effect of the decipherment on classical scholarship. However, two points made by several critics, chief amongst whom was inevitably Beattie, do have some (though, it would be argued, still very little) justification.
Firstly, the many ambiguities in any transliteration of the Linear B script, on account of its orthography, do, it is true, leave options open. Beattie argued that such ambiguities (which mean that, even now, scholars have different interpretations of some words written in Linear B) would have made it impossible to read the script even in Mycenaean times. However, it has been pointed out that, while some individual phonograms can represent more than 30 different syllables in Greek, when considered in combination with the other phonograms used in any word, the number of possibilities, merely by comparing with known words, drops to relatively few - somewhat similar to completing a Codeword, where anyone with a reasonable knowledge of the language's vocabulary will be able to eliminate extra possibilities merely on the basis of what is a real word in the dictionary (the sketch below illustrates the principle). In any case, Chadwick emphasises that one recognises what one often sees and knows, and thus the proposal that a Pylian would not recognise pu-ro, Pylos, is comparable with the proposal that Scots would have no idea of the meaning of Eboro on road-signs.
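The elimination can be illustrated with a toy sketch: the sign-value table below is an invented simplification (Linear B spelling, for instance, does not write an s before a consonant, hence the extra candidate values), and the one-word lexicon stands in for a philologist's full vocabulary:

```python
# Toy illustration of resolving ambiguous phonograms against known words.
# The sign values here are simplified assumptions for demonstration only.
from itertools import product

SIGN_VALUES = {
    "pa": ["pa", "pha", "ba"],
    "ka": ["ka", "kha", "ga", "ska", "sga"],  # s before a consonant unwritten
    "na": ["na"],
}

LEXICON = {"phasgana"}  # known Greek vocabulary (here just 'swords')

def candidate_readings(signs):
    """Generate every reading the sign-value table allows."""
    for combo in product(*(SIGN_VALUES[s] for s in signs)):
        yield "".join(combo)

# pa-ka-na yields fifteen raw candidates, but only one is a real word.
candidates = list(candidate_readings(["pa", "ka", "na"]))
print(len(candidates), [w for w in candidates if w in LEXICON])
# -> 15 ['phasgana']
```

Even with heavily ambiguous signs, the dictionary filter collapses the possibilities almost immediately, just as a reader fluent in the vocabulary would.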
Nevertheless, it is true that the multiple translations possible in some cases do leave scholars' interpretations of the texts insecure, thus reducing the effect of the decipherment. The insecurity is not, however, great, provided that the translation has been carried out by an expert philologist who can evaluate the likelihood of each possible translation - though this requirement could, in itself, be considered to reduce the effect of the decipherment on classical scholarship by limiting true access to the texts.

ideas which, in some cases, had practically become


accepted as truth on relatively little basis.

The other point with a certain justification is Beatties


complaint that not every stage of the decipherment was
well documented, and that it is not possible to get from
beginning to end merely by reading through the worknotes,
or indeed the information given in Ventris and Chadwicks
original article, meaning that the authenticity of the
decipherment process cannot be verified as absolutely true.
To a certain extent, space constraints were to blame for the
shortening of the section in the article concerning the
decipherment process, but it is more down to Ventris
himself. Robinson comments, tellingly, that the
decipherment was indeed not a triumph of logical
deduction...the decipherment was an inextricable
combination of intuition and logic...this is why Ventris was
a genius246. It is thus impossible to write down precisely
how the decipherment took place, and thus, once more,
through the lack of absolutely assured verification, the
effect of the decipherment on the relevant classical
scholarship was diminished. Yet, without a Rosetta Stone
equivalent, we can never be more sure than this. The
decipherment, while unverifiable in Beatties sense, is
strengthened by the fact that Ventris did not begin with
the idea of the language being Greek, and only came round
to it most reluctantly, as perhaps summed up in a quote by
Chadwick The most interesting fact about his work [on
Linear B] is that it forced him to propose a solution
contrary to his own preconceptions247.

_______________________________________________________

There also seems to be no doubt that the effect would have


been even greater if it were not for the terrible paucity of
the material - the tablets not being intended to last three
years, let alone three millennia - which has hampered the
scholarship based on them; and the undoubted genius of
Ventris which enabled the decipherment but, in doing so,
also reduced its effect by making it impossible to prove
purely to be series of logical steps which would allow
irrefutable verification.

Peter Leigh (Year 14)

Conclusion
Taking into account all these different areas, there seems
to be no doubt that Michael Ventris decipherment had a
very great impact on classical scholarship. It did this on
account of its managing to reveal areas of classical
scholarship whose existence had not really been considered
previously, permit scholarship in areas such that
archaeological evidence alone would not have allowed it,
increase the extant scholarship in areas already informed
by archaeological evidence, and definitively prove wrong

A. Robinson The man who deciphered Linear B (London: Thames &


Hudson, 2002) p155-6
247 Chadwick, quoted in Robinson The man who deciphered Linear B p156
246

133