Julia Kristeva and thought in revolt

JOHN LECHTE

Footnotes to Plato is a TLS Online series appraising the works and legacies of the great thinkers and philosophers

Julia Kristeva (b. 1941) is not a philosopher in the formal sense. She was not educated in the discipline (although
coming from communist Bulgaria, she did, albeit unsystematically, absorb the ideas of Hegel and Marx). Her work does
not pertain exclusively to any of the commonly accepted domains of philosophical inquiry, although her projects
continue to have an impact on them. Rather, she has frequently been called a linguist (even though her oeuvre
includes studies in semiotics and literary theory), a psychoanalyst (her approach is of a very particular Freudian bent
that includes a focus on the feminine and maternity), or a novelist (which is true, but the point to note is that there is a
crossover between novelistic content and theoretical and analytical work).

In 2004, Kristeva was awarded the first Holberg International Memorial Prize for innovative work on issues “at the intersection between linguistics, culture and literature”. The chairperson of the Holberg selection panel noted that “Julia Kristeva . . . demonstrates how advanced theoretical research can also play a decisive role in public social
and cultural debate in general”. Regarding the latter, Kristeva has applied her psychoanalytic insights to the issues of
young children and language acquisition, to adolescence and the importance of the need to believe that self-harm and
violence (“radical evil”) can be ameliorated, as well as to the way psychoanalysis can transform existing approaches to
disability.

But how did it all start? Having been awarded a French government bursary, Julia Kristeva left communist Bulgaria –
where her Russian Orthodox family continued to live – and arrived in Paris during the Christmas of 1965. Her
education had been both francophone and Francophile, and the young scholar duly embarked on a doctorate
supervised by the sociology of literature specialist, Lucien Goldmann (1913–70).

Although this doctorate on the emergence of the “text” of the modern novel (see Le texte du roman, 1970) would be
widely acclaimed, the initial interest of French academics and intellectuals in Kristeva related to her knowledge of the
work of the Russian formalists (including Roman Jakobson’s theory of language) and, in particular, to her knowledge of the work of the Russian literary theorist Mikhail Bakhtin (1895–1975) and his concept of the dialogical novel
(exemplified by Fyodor Dostoyevsky) and his reinvigorated theory of carnival. Roland Barthes invited Kristeva to speak
on the subject at the École des Hautes Études in Paris, as did the literary theorist, René Girard, who had recently
moved to America.

At the same time Kristeva began attending Roland Barthes’s seminars, and it was he who recommended that she
introduce herself to the writer Philippe Sollers, then the editor of the avant-garde journal Tel Quel. The pair hit it off
on both intellectual and personal fronts and married in 1967 (the marriage, which might have perplexed some of the
’68 generation, also enabled Kristeva to remain in Paris after the expiration of her student visa). Sollers’s novels have
been a constant point of reference in Kristeva’s literary analyses, especially in the 1970s. Like Jean-Paul Sartre and
Simone de Beauvoir, Kristeva and Sollers made their relationship exemplary of two individuals maintaining their love for each other while preserving their difference and independence – or, as they put it, remaining “foreign” to each other.

Kristeva’s work may not have started out being philosophical, but it has had profound philosophical effects. Apart from
her novels (which deserve a separate study), Kristeva’s intellectual and philosophical innovation is evident in her work
in semiotics, poetic language, feminism and in commentaries – inspired by psychoanalysis – on society and politics. In
her approach to semiotics, Kristeva coined the term “semanalyse”. This refers to a theory of the sign coupled with the
study of the emergence of the text of the modern novel in the fifteenth century. Given the diversity of novelistic
writing, how could we find a universal characterization? Kristeva’s answer is that before the novel, fictional texts
tended to be closed – a closure exemplified by the epic, the myth, folk or fairy tales. Modern scholars of a structuralist
bent (such as Vladimir Propp) had already argued that these earlier texts can be described as a series of symmetrical
transformations, which gives them an air of predictability. Kristeva defined this as “disjunctive” symmetry, where one
half of the tale (good, for example) is symmetrically opposed to the other half (evil), and the two halves, structurally
speaking, mirror each other.

The modern novel, by contrast, is “non-disjunctive”: opposites – the positive and the negative – are of equal value and
can flow into one another so that the text becomes open-ended and heterogeneous. More could always be added.
The novel is thus a segment of a continuum – perhaps the ultimate example being Finnegans Wake. The novelistic text
is called the text of the sign, whereas the epic is the text of the symbol. While the meaning of the sign is contextual
and is constituted in its relation to other signs, the text of the symbol tends to have a fixed meaning, one that evokes
(what passes for) an eternal verity, a moral idea or a snippet of wisdom.

The innovation in Kristeva’s approach to the novelistic text of the sign is twofold: firstly, it is not only open-ended, but
it is also open to the outside – to existence in all its complexity. Indeed, it incorporates the outside within itself, which
could take the form of references to current historical events (Joyce, for example, evokes the history of Ireland in
Finnegans Wake). It is this fact of incorporation that Kristeva marked with the concept of “ideologeme”.

Secondly, Kristeva recognizes that there is a carnivalesque aspect to the novel, by which she means that it is not
reducible to the law of contradiction. Its logic, or principle of operation, is not that of the “either/or” of non-
contradiction, but of the “both/and” – of “one and the other”, thus defying contradiction as traditionally understood.
In short, its “logic” is equivalent to “ambivalence”: for example, Dostoyevsky’s novels, as “dialogical” (to use Bakhtin’s
term favoured by Kristeva), contain multiple points of view, some of which are clearly contradictory; the novel is thus neither one thing nor the other, but can be both simultaneously, which implies that it cannot be explained by an
either/or logic. This is also – or even primarily – the law of poetic language. A text exemplifying the structure of both
“symbol” and “sign” would be an instance of “intertextuality”, another key term in Kristeva’s early work.

Consequently, literary criticism, to the extent that it, like philosophy, has been traditionally beholden to the law of
contradiction (also the law of identity: for identity is strictly one thing and not anything else), is called upon to reinvent
itself. This means, too, that literary theory is called on to account for the text as (dynamic) productivity, not just as a
(static) product. The critic must now ask not only: “how might I describe this?”, but: “where am I located in this?” For
the critic comes to realize that he or she is also ensconced in language and is therefore part of the context of the text
being analysed. A disturbance of some magnitude has thus occurred in the landscape of thought – all the more
profound because the tools (linguistic, philosophical) used to analyse communicative language (the domain of
traditional linguistics) are no longer adequate for analysing the novel and poetic language. Many a critic has not been
able (or willing) to keep pace with this.

Perhaps because Kristeva delves more deeply than any other thinker into the workings of affect and emotion, her name has been associated – outside France, if not within – with the concepts of the “semiotic”, the “symbolic” and the “abject” (the latter a concept popularized in the 1980s and 1990s in American art, one that concerns the visceral being of the subject, not just its symbolic being).

Kristeva’s version of the semiotic and the symbolic was developed in her doctorat d’état, La Révolution du langage
poétique (1974) (partially translated as Revolution in Poetic Language, 1984). The semiotic is evocative of pre-oedipal
“language” – the holophrastic utterances (one-word utterances, relating to childhood demands) and rhythmical
articulations observable in childhood (recall Kristeva’s attention to language acquisition). In psychoanalytic terms, it
evokes bodily drives and is thus at odds with the suppression of such drives observable in the language of
communication, which is based in the symbolic order. To demonstrate the working of poetic language, Kristeva
engages in a detailed study of nineteenth-century avant-garde poetry (notably that of Stéphane Mallarmé and the
Comte de Lautréamont). The supreme irony, Kristeva argues, is that the most challenging (and to many, the most
obscure) avant-garde writing (Joyce, for example) is indebted to childhood experience, an experience of universal
scope.

Moreover, the notion of the semiotic is developed in conjunction with another: the “maternal chora” – a term
borrowed from Plato’s Timaeus. For Plato, the chora is a relatively static place, or receptacle, prior to the origin of
things. Kristeva gives this notion a dynamic twist, so that it becomes a mobile and provisional articulation. Rather than
being an origin in the sense developed in conventional history, the chora defies being thought or represented because it is
the (hypothetical) basis of thought and representation. The most influential aspect of Kristeva’s appropriation of the
chora is that it is called “maternal” – it is evocative of the mother, and not the father, the key element of the symbolic
in psychoanalytic terms. Much debate has taken place within feminist criticism over the years about whether the
notion of a maternal origin that strictly speaking cannot be represented is a progressive or a regressive move.

In 1980, Kristeva published her book on abjection, Pouvoirs de l’horreur (Powers of Horror, 1982). The implicit starting
point for understanding the abject is Jacques Lacan’s famous psychoanalytic theory of the “mirror stage” – the point
where the subject, between the ages of six and eighteen months, can recognize itself in the mirror and thus shows
itself ready for language acquisition. This is also, according to the analyst, a stage crucial to the formation of the ego
and the notion of an object. Indeed, the mirror stage could be described as the instauration of the subject/object
relation.

Kristeva’s originality here stems from the proposal that there is a crucial pre-mirror and thus pre-object phase in
language acquisition, characterized by drive activity. This phase, typified by what is taken into or expelled from the
body, is an insight partly inspired by Kristeva’s reading of anthropology and the notion of ritual associated with “purity
and danger” (impurity), sacred and profane. Because it evokes what is expelled from the body (both social and
individual), it has inspired art depicting detritus, excrement and whatever brings boundaries into question.

With her work on love, her analyses of Proust and her courses on revolt in the 1990s, Kristeva has sought to show that
the human psyche – not unlike the truly literary sphere – is an “open system” that benefits from continual
reorganization. The psyche, in short, is a dynamic not a static entity. It is capable of adaptation and change. To treat it
as static is to risk having it atrophy and die. An exterior perturbation, therefore, can have beneficial effects for the
enrichment of psychic space. Love can be this perturbation.

Similarly, our society (which Kristeva, after Guy Debord, does not hesitate to call the “society of the spectacle”, largely
because, she believes, people are held in thrall by the image in all its avatars) tends to threaten dynamic psychic space,
the consequence of which is that life experience becomes increasingly banal and monotone. Increased levels of
violence can be a symptom of this. Kristeva therefore promotes the importance of revolt – not necessarily revolt as
violence (in fact she argues against this), but “intimate revolt” that can take the form of new forms of thinking, of art,
of different ways of living, of innovative education – indeed, of any innovation that might result from irrepressible
human creativity. Revolt must pit itself against the existing “society of the spectacle” – against, that is, a thoroughly
mediatized, technologically dominated, consumerist society. The latter has brought new “maladies of the soul” to the
attention of the analyst, maladies such as an incapacity to activate imagination. One consequence is that the subject
becomes unaware of the implications of what he or she is saying or representing; the capacity for interpretation
(which requires a vibrant imagination) has broken down.

In another phase of her work, Kristeva has studied three women of high intellectual achievement: Hannah Arendt,
Melanie Klein and Colette. The choice of subjects here is partly due to personal interests, and the series carries the general title of “Female Genius”. Kristeva wants to avoid any cause-and-effect relation in her interpretation: this is not a matter of discerning traits of femininity in the thinkers’ intellectual work. Her aim is
to distil their individuality and their evident intellectual achievements, while at the same time doing justice to the fact
that it is indeed women who are the centre of attention. Whether or not Kristeva succeeds is only one aspect to
consider. The other is the supreme subtlety of the attempt – which, as ever, is Kristeva’s great achievement.

John Lechte is Emeritus Professor at Macquarie University


The revolutionary ideas of Thomas Kuhn

JAMES A. MARCUM

Thomas Kuhn’s influence on the academic and intellectual landscape in the second half of the twentieth century is
undeniable. It spans the natural sciences, and the historical and philosophical disciplines that examine them, through
to the fine arts and even to business. But what did Kuhn espouse? In brief, he popularized the notions of the paradigm
and the paradigm shift. A paradigm for Kuhn is a bundle of puzzles, techniques, assumptions, standards and
vocabulary that scientists endorse and employ to undertake their day-to-day activities and thereby make remarkable
advances in understanding and explaining the natural world. What Kuhn unintentionally achieved, however, was to
open the epistemic floodgates for non-scientific disciplines to rush through. Justin Fox, in a 2014 Harvard Business
Review article, to take a single example, queries whether economics is on the verge of “a paradigm shift”. Kuhn has his
detractors and critics, of course – those who charge him with almost every conceivable academic failing, especially the
promotion of relativism and irrationalism.

Kuhn was born on July 18, 1922 in Cincinnati, OH. After a progressive education, he matriculated in 1940 at Harvard
University – majoring in physics – and graduated summa cum laude in 1943. He participated in several war-related
projects, and after VE Day he returned to Harvard to carry out research on theoretical solid-state physics, for which he
was awarded a doctorate in 1949. A year earlier, Kuhn had been selected – through the patronage of Harvard’s
president James Conant – as a Junior Fellow in the Harvard Society of Fellows; and he took this opportunity to move
from physics to the history and philosophy of science. In 1950, Kuhn was appointed an instructor to teach in Conant’s
inspired case-history science course; however, he was denied tenure in 1956 because the committee deemed his
scholarship too popular and insufficiently academic.

In 1956, Kuhn accepted a position at the University of California at Berkeley to establish a history and philosophy of
science programme. He was promoted to full professor in 1961, but only in the history department. In 1962, The
Structure of Scientific Revolutions – the book in which Kuhn set out his ideas on paradigms and scientific development
– was published, as the final monograph in the International Encyclopedia of Unified Science. In 1964 he joined
Princeton University’s history and philosophy of science programme; and in 1979, he left Princeton for the department
of Linguistics and Philosophy at MIT. In 1991, Kuhn became professor emeritus; and he died on June 17, 1996 in
Cambridge, MA.

In Structure, Kuhn’s main aim was to criticize the widely accepted view – promoted by the Logical Positivists – that the
accumulation of scientific knowledge across time is incremental and continuous.

He attacked, for example, the notion that Newtonian mechanics represents simply a special case of Einsteinian relativity. For Kuhn, the two theories are incommensurable; that is to say, the terms and concepts of one are completely incompatible with those of the other. According to Kuhn, when the Newtonian discusses mass, for instance, she is referring to something entirely different from what the Einsteinian means. Rather than being the next phase in a continual process, Einsteinian relativity represents a paradigm shift, involving a radical break from Newtonian mechanics and the
introduction of a wholly new set of standards, puzzles and vocabulary. Kuhn also rejected the Logical Positivists’
verification principle. Rather than operating within an objective and a mind-independent language, scientific terms
and concepts, according to Kuhn, have references and meanings that are relative to specific conceptual frameworks. In
other words, theories cannot be verified by simply observing phenomena and articulating them directly; those
observations are already unavoidably embedded in the theoretical framework. Hence, no theory can ever be verified
with certainty – either logically or empirically. Kuhn also dismissed Karl Popper’s falsification principle. Just as empirical
evidence cannot verify a theory, so too it cannot falsify one.

No conceptual framework is flawless in terms of its predictions; there is simply the best one available for guiding
normal scientific practice. According to Kuhn’s vision of historical scientific development, new theories do not
converge on the truth; rather, they shift from one paradigm to another, and each one directs contemporary scientific
practice.

In Structure, Kuhn developed a historical philosophy of science that comprises three major conceptual movements.
The first is from pre-paradigmatic science, in which several paradigms compete for a scientific community’s allegiance,
to normal science, in which a consensus paradigm guides scientific practice. Unfortunately, paradigms do not fit or
match up perfectly with natural phenomena, and anomalies eventually arise between what a paradigm predicts and
what is observed empirically. If the anomalies persist, a crisis generally ensues – leading to the second movement –
and the community enters a state of extraordinary science in the hope of resolving it. If a new, competing paradigm
resolves the crisis, then a paradigm shift or scientific revolution occurs – the third movement – and a new normal
science is established. This cycle recurs with no clear end point as science advances.

Kuhn articulated several important notions concerning scientific practice. Probably the most significant is the
incommensurability thesis. As noted above, two paradigms that compete during a scientific revolution are
incommensurable when their contents are completely incompatible; that is, when no common measure or mutual
foundation exists between them. The reason for such incompatibility is that one of the paradigms resolves the crisis
that the other paradigm produces. How then could the crisis-resolving paradigm have anything in common with the
crisis-producing paradigm? Associated with this thesis is the assertion that paradigm shifts are not completely rational
affairs: community members who switch to a crisis-resolving paradigm must believe, beyond the available evidence,
that it can lead the way forward for a new normal science. In other words, community members are converted
through faith, but a faith – as Kuhn emphasizes later in defence of incommensurable shifts – that is not antirational.

Kuhn’s critics attacked many aspects of his theory. They argued that the very idea of the paradigm is simply too
ambiguous to support a robust critical analysis of scientific practice. Moreover, in their view, Kuhn’s
incommensurability thesis was too ambitious. Competing paradigms are obviously incompatible with one another in a
limited way, since one solves the crisis that the other engenders. But some overlap must nonetheless exist between
them, they argued, or else no intelligible exchange among members of a scientific community about the competing
paradigms is possible. Finally, Kuhn’s critics claimed that his ideas led to relativism, as he yoked the standard for
scientific truth to a particular, and changeable, paradigm, and not to the mind- and theory-independent world that
scientists investigate.

This accusation of relativism is closely associated with the charge of irrationalism: according to Kuhn’s account, the
choice of a new paradigm among members of a scientific community in crisis is made in part on faith; and not entirely
on reason. As the philosopher Imre Lakatos claimed, if Kuhn were correct, science would advance through a type of
“mob psychology” rather than rational assent. Moreover, because the paradigm dictates scientific practice with
respect to expected outcomes, normal scientists would mindlessly follow its dictates and predictions. According to
another critic, Karl Popper, if Kuhn’s story of scientific development were true, normal scientists would not be
celebrated champions, who drive the pylons of science into the swamp of ignorance until they get closer to bedrock
truth; rather, they would merely be “applied” – in contrast to “pure” – scientists.

Although Kuhn responded to his critics on various occasions, he principally addressed them in a Postscript that
appeared in Structure’s revised edition. In response to the charge of ambiguity, he introduced the notion of the
disciplinary matrix to replace that of the paradigm. A disciplinary matrix represents a diversity of elements, including
symbolic generalizations, models and values. These elements direct normal science. One that Kuhn singled out was the
notion of the exemplar. Exemplars serve the scientific community as solved puzzles for both pedagogical and research
purposes, and each disciplinary matrix has its own set.

Responding to criticisms of the incommensurability thesis, Kuhn developed a more fine-grained and nuanced
definition, distinguishing between local and global incommensurability. The former represented partial, but still
substantial, differences among competing paradigms, such that rational comparison between them is possible. Yet
global incommensurability still obtains between the most significantly divergent paradigms, such as those surrounding
the Copernican Revolution.

Kuhn found the charge of relativism frivolous: a paradigm that solves another paradigm’s crisis is obviously better
suited to guide normal science, he argued. Whether that paradigm is true or objectively correct is beside the point;
normal scientists do not possess an Archimedean platform from which to justify, either absolutely or objectively,
scientific knowledge. They work with the best standards of evidence and confirmation available to them.

And responding to the charge of irrationalism, Kuhn agreed with his critics that rational and empirical reasons are necessary to choose between paradigms – but not sufficient. He argued that values are also required. For example, those working with theoretical statements and natural laws prize simplicity: a paradigm with simpler theories is much more appealing, and thus more likely to be adopted. Personal factors, beliefs and
relationships may also guide a scientist to prefer one paradigm over another.

Although Kuhn attempted, in this Postscript, to salvage Structure from its critics, he later underwent a paradigm shift
of his own. In the 1980s, Kuhn exchanged the historical philosophy of science – as promulgated in Structure – for an
evolutionary one. Indeed, he had originally acknowledged in Structure that Darwinian evolution best epitomized his perspective on scientific advancement. Specifically, he claimed that just as speciation is the target of biological
evolution, so too specialization is the target of scientific evolution. In other words, the target of scientific evolution is
not truth per se but finer articulation of the natural world, especially with respect to the proliferation of scientific
specialities. For Kuhn, scientific advancement is the gradual evolutionary emergence of scientific specialities. So as
members of a scientific speciality practise their trade, a new speciality evolves or emerges from the older one – often
in response to anomalies encountered under the older speciality.

Kuhn planned to write a sequel to Structure, outlining this “evolutionary turn”, which he entitled Words and Worlds:
An evolutionary view of scientific development. He began by proposing the notion of the lexicon to replace that of the
paradigm. A lexicon comprises a scientific speciality’s collection of terms and concepts to chart the world
taxonomically. So, when a scientific speciality evolves, its lexical terms change to reflect a new world and, as such, it is
incommensurable with the parent lexicon. Instead of the incommensurability of paradigms entailing that there be no
common meaning, Kuhn now argued that incommensurable paradigms had no common taxonomy. But a universal
translating language, Kuhn argued, is not the solution to understanding these incommensurable terms; rather, the
historian must enter the past world of science and become multilingual. Kuhn also changed incommensurability’s role to that of isolating the lexicons of the various scientific specialities, so that a new speciality can evolve from its parent as its own
independent speciality. In sum, as scientific specialities evolve, their “words” capture more of the “worlds” open to
scientific investigation.

Unfortunately, Kuhn did not complete Words and Worlds before he died. The question that arises is whether the
sequel would have had a significant impact on contemporary philosophy of science, which is more pluralistic in its
perspective than when Kuhn wrote Structure. Today’s philosophers of science have no need for a consensus
framework, since each natural science is studied by its own philosophical sub-field. Kuhn’s evolutionary philosophy of
science, however, might afford a possible candidate for reviving such a framework – but not in the conventional sense.
Normally, the framework depends on a reduction of the non-physical sciences to the physical sciences. Physics is the
model for what denotes a science; and the non-physical sciences must kowtow to physical terms and concepts. But
this effort to provide a consensus framework for the sciences eventually fizzled out towards the end of the twentieth century.

Kuhn’s evolutionary philosophy of science, however, provides a possible consensus framework that outlines the
relationships of the various natural sciences as they evolve and specialize. Thus it accounts for contemporary
philosophy of science’s pluralistic stance, by clarifying the evolutionary relationships between the sciences – especially
in terms of their common ancestry. Its goal is not to force the various sciences into a single scientific mould, such as
the physical sciences, but to account for how these sciences progress like a branching tree of proliferating specialities.
Although the full impact of Kuhn’s evolutionary philosophy of science may never be realized, the marriage between
Structure and academic discourse remains sacrosanct, as is evident from the recent celebration of Structure’s golden
anniversary – with no divorce imminent.

James A. Marcum is Professor of Philosophy at Baylor University. His books include Thomas Kuhn’s Revolutions (2015)

Karl Jaspers and the language of transcendence


GUY BENNETT-HUNTER

A new series from the TLS, appraising the works and legacies of the great thinkers and philosophers

In January 2015, after the massacre of twelve people at the Paris offices of Charlie Hebdo, the words “Je suis Charlie”
became a ubiquitous collective expression of solidarity. Inside Nazi Germany, as his Jewish wife Gertrud began to
despair of the fate of her beloved homeland, the psychiatrist and philosopher Karl Jaspers tried to console her with a
similar phrase: “Ich bin Deutschland”.

Jaspers lived an extraordinary life, of which his experiences in the Third Reich were formative. He was born in 1883,
with an incurable disease that was expected to kill him by the age of thirty – the same age at which he published his
monumental psychiatric textbook, General Psychopathology. Remarkably, Jaspers lived until the age of eighty-six,
which allowed him to pursue a second, philosophical career.

As a couple in what the Nazis called a “mixed marriage”, Karl and Gertrud became uncomfortably familiar with anti-
Semitism. They bravely decided to remain together in Germany, surviving by restricting their lives and social circle.
Although dismissed from his professorship at Heidelberg and banned by the Nazis from teaching and publishing his
philosophy, Jaspers kept writing. As he would later reflect, “Germany under the Nazi regime was a prison”, but “the
hidden life of thought” remained.

Philosophically, Jaspers can be viewed as the first of the great German existentialists, but his approach was more
scholarly, responsible and historically informed than many of his colleagues’. Like all existential phenomenologists
(students of the structures of lived experience), he was deeply influenced by the Kantian distinction between the
world as it is in itself and the world as it appears to us. It follows from Kant’s insight into our imprisonment in
appearance that we have no means of comparing reality as it appears to us with reality itself, so the “phenomena” of
lived experience are what phenomenologists like Jaspers study. Jaspers wrote in a somewhat Hegelian, systematic
form, but the content of his philosophical work strains against the limitations of such formal systems. His French
colleague, Jean Wahl, described this “struggle” between form and content in Jaspers’s philosophy, which “always
stands outside the system and breaks it”. As Jaspers’s restricted life inside Nazi Germany was a form of resistance to
dogmatism, so too was his hidden life of thought.

“What we are accustomed to call Karl Jaspers’s philosophy,” wrote the Polish philosopher Leszek Kołakowski, “is in fact
a description of the acutely and incurably painful human condition.” Reality as a whole, which Jaspers calls “the
Encompassing”, has three modes: the empirical world, existence and transcendence. Human life spans two
interdependent modes: existence and transcendence, neither of which is an object of knowledge. Together, they
“encompass” the empirical world. Existence and transcendence are essential for understanding that world but, since
they are not objects, they don’t explain it scientifically. Rather, in Kołakowski’s phrase, they “confer legitimacy on it”,
give it meaning.

Jaspers defines human beings as displaying “possible existence”. Existence is what we evince when we define
ourselves in terms of our radical existential freedom to decide on who we are and the nature of our engagement with
the world. We could say that existence belongs to the “subjective” side of the Encompassing, whereas transcendence
belongs to the “objective” side. But Jaspers insists on their interdependence: there is no existence without
transcendence, and, as examples of “possible existence”, we realize ourselves only in the presence of transcendence.
The subject–object split is a useful distinction, not a dichotomy.

It is impossible to provide a complete empirical explanation of the world, and there is no Supreme Being to help us
explain it. In Jaspers’s thought, transcendence supplants the idea of God as a being. Unlike the god introduced by
some philosophers to provide naive explanations for the existence and nature of the observable world, transcendence
is not an entity among others. We cannot prove its reality scientifically, nor deduce it via logical arguments. The
presence of transcendence is necessary to confer meaning on the human world, but we encounter transcendence
beyond the limits of knowledge. The word “transcendence” evokes the ineffable. It refers to the concept of what, like
the smell of coffee or the experience of seeing green, cannot in principle be captured or fully expressed in words. Yet
we must keep trying to evoke it because the attempt is essential to human self-realization.

If transcendence isn’t an object and cannot be known or spoken of, how can we encounter it? Jaspers suggests several
ways, but most importantly transcendence “speaks” to us – not like Yahweh out of the whirlwind and the burning
bush, but in code, a system of signs called “ciphers”.

Ciphers are not symbols. Symbols are objects that represent other objects, objectifying them in a symbolic
representation even though they may be fictional objects that don’t exist outside the symbol. An objective depiction of
a skeleton symbolically represents another objective reality: death. But a cipher evokes transcendence, which lies
beyond the subject–object distinction and is no representable object. A cipher can be quite mundane, serving as a
point of focus revealing some inexpressible aspect of transcendence: a work of art, a religious myth, a ritual
performance, a guttering candle. Ciphers make transcendence accessible to us the only way it can be, but they don’t
reveal it the way it “really is” – since transcendence isn’t purely objective, there’s no such way. So, despite being made
accessible, transcendence remains hidden.

This implies that literal interpretations of religious mythologies, for example, block us from reading them as ciphers of
transcendence. Ciphers are available to everyone, but superstitious and dogmatic ways of thinking blind us to them. It
misses the point to read the four (very different) Gospels as historical fact in the same way that it’s wrong-headed to
view a painting as an accurate representation of a real event. Although paintings and biblical texts may be more or less
factually accurate, their import lies elsewhere. Symbols can be translated into a non-symbolic language, but this is not so with ciphers. It’s always possible to state, in other terms, what a symbol “really means”; ciphers are
untranslatable. As Jaspers argued in his published debates with the New Testament theologian Rudolf Bultmann,
religious myths are ciphers, not symbols. It’s impossible to “demythologize” religious myths (translate them into
secular terms, as Bultmann attempted) without hollowing out their religious meaning, leaving only an empty shell
behind. Ciphers can be experienced, but they remain indecipherable. It is precisely by remaining indecipherable that
ciphers guard transcendence from all kinds of dogmatic misreading.

Ciphers, then, embody transcendence in the only way that it can be embodied. Jaspers provides two metaphors to
help us better understand how this happens: one of “language”, the other of “physiognomy”. Ciphers are the language
of transcendence (not transcendence itself). This metaphor stresses the intimacy and immediacy of the relationship
between ciphers and transcendence. It’s not that a cipher is transcendence, any more than the phonemes of a
language are what a sentence of that language means. But transcendence needs ciphers to be realized, just as
linguistic meaning needs concrete phonemes. Unlike spoken languages, however, the language of a cipher is
untranslatable. What it embodies doesn’t exist outside it and is not independently accessible.

The physiognomic metaphor corrects the balance. Jaspers describes how a person’s involuntary gestures express
something of his or her being. Similarly, with ciphers, he writes, “all things seem to express a being … we experience
this physiognomy of all existence”. Whereas human physiognomy arguably expresses something that’s accessible in
other ways (through empirical psychology, say), transcendence is accessible only in and through its cipher
physiognomy. Jaspers writes:

This transparent view of existence is like a physiognomic viewing – but not like the bad physiognomy aimed at a form
of knowledge, with inferences drawn, from signs, on something underneath; it is like the true physiognomy whose
“knowledge” is all in the viewing.

But human physiognomy is arguably just the same. Is what an angry gesture expresses located in some separate
mental shrine beyond the angry person’s body? Or is the anger inescapably bound up with, and realised through, the
body and its gestures? Philosophers sympathetic to Jaspers (like Maurice Merleau-Ponty and Hans-Georg Gadamer)
argue, in Gadamer’s words, that “what a gesture expresses is ‘there’ in the gesture itself … [it] reveals no inner
meaning behind itself”. Either way, Jaspers’s point remains: ciphers are significations without there being any object
signified. As he puts it, “Signification is itself only a metaphor for being-a-cipher”.

Like poetic language, to which form and meaning are equally integral, the ciphers’ content is inseparable from the
form of the ciphers themselves. Ciphers reveal something of the transcendent in the physiognomy of the human world
itself. By reading the empirical world as the cipher-script of transcendence, we are transformed from merely possible
existence into existence. And transcendence is, as it were, created in the same moment that it “speaks” to us in its
cipher language. For, like languages, ciphers are cultural phenomena that are both created and appropriated by us.
Without us, there would be no ciphers. Yet ciphers must be appropriated from cultural and intellectual traditions that
are older and greater than we are. In the terms of the subject–object distinction, Jaspers says, ciphers are subjective
and objective at once. Through this cardinal ambiguity, ciphers embody transcendence, which outruns the subject–
object distinction, eluding our cognitive and literal linguistic grasp.

Although, in his thought, “Transcendence or God” replaces “the God of the philosophers”, Jaspers’s theory of ciphers
provides a way of philosophically accommodating the ancient, mystically inflected religious idea that we humans are
necessarily answerable (in our lived experience, thought, language and rituals) to an ineffable, transcendent God – not
a being among beings, but the ground and source of being itself. This idea parallels Jaspers’s axiom that there is no
existence without transcendence, though we can grasp transcendence only imperfectly, through its coded language or
physiognomy – “through a glass, darkly”, to cite St Paul. Unlike “religious symbols”, which must objectify what they
aim to symbolize (and can therefore be decoded into purely secular terms), ciphers are the signs of our inevitable
failure to grasp transcendence objectively – though our desire to keep trying to grasp it persists. As Kołakowski puts it,
we’re compelled to affirm our humanity “through a hopeless search for something we know we shall never find”.
Jaspers strongly denied that there could ever be a definitive, exclusive system of ciphers, but he allowed that a system
of thought (religious or otherwise) can itself be a cipher. It’s tempting, today, to view religious systems, in their
manifold forms, as just such affirmations of humanity. In the contemporary world, where dogmatic and inhumane
voices of all kinds are growing ever louder, Karl Jaspers’s anti-dogmatic voice urgently needs to be heard.

Jaspers has left us a rich and wide-ranging intellectual legacy encompassing psychiatry, intellectual history, and politics
as well as philosophy. After the liberation of Heidelberg, where he and his wife sat out World War II, Jaspers was
reinstated as professor of philosophy at the university and became among the first to reflect publicly on the collective
guilt felt in Germany. It’s in this late work that his voice speaks to us today with a rare power and poignant urgency.

In The Question of German Guilt (1946), Jaspers directly addresses the most uncomfortable questions surrounding
collective guilt that weighed on all Germans. His idea of “metaphysical guilt” refers to a person’s co-responsibility, on the grounds of shared humanity, for atrocities that were committed. Crucially, metaphysical guilt applies to all human
beings, not to subgroups. Jaspers agreed with his former doctoral student, Hannah Arendt, that the collective
condemnation of subgroups could easily become a disturbing inversion of Nazi racial theory that similarly erased
people’s humanity. Metaphysical guilt transcends the moral duty to risk one’s life for the sake of fellow humans if
something could be gained by that risk. It arises from lack of solidarity with one’s fellow human beings:

It is not enough that I cautiously risk my life to prevent [a crime]; if it happens, and if I was there, and if I survive where
the other is killed, I know from a voice within myself: I am guilty of being still alive.

So Jaspers’s concept also describes a psychological feeling, anticipating the now common idea of “survivor’s guilt”.

In the contemporary world, where writers and cartoonists are shot, neither the philosophical question of collective
guilt nor the psychological phenomenon of survivor’s guilt is about to disappear. Jaspers’s legacy is as relevant today
as it was in Germany in 1946. But most urgent today are his thoughts on the task of renewal that lay ahead. Recalling
the phrase that he had uttered to his wife in their darkest moments, Jaspers suggested that what was demanded of
Germany as a nation was demanded of each individual. For Jaspers, to be German and to be human were not inherited
conditions, but abiding tasks. He was hopeful that an uncomfortable awareness of metaphysical guilt would aid with
these tasks, rather than lead to despair. On the future dangers, even in 1946, Jaspers most disconcertingly predicted
that, if a Hitlerian régime were to grow up in the USA, all hope would be lost “for ages”: “Should the Anglo-Saxon
world be dictatorially conquered from within, as we were, there would no longer be an outside, nor a liberation”.

Jaspers took a long view of his place in the history of philosophy. He saw himself as a link in a long chain of
philosophers, custodians of the idea of transcendence, who protect transcendence itself from being lost to us. The
idea continues to be passed like a torch from one generation to the next, sometimes only as a “glimmering spark”,
until the next, greater thinker can rekindle it to a brighter flame. But the threat remains that the flame may one day be
extinguished. With Hannah Arendt, we may be thankful that, inside Nazi Germany, Jaspers survived the deluge “like
Noah in his ark” and think it incumbent on future generations to carry his ideas forward. In her address at his
memorial service in 1969, Arendt acknowledged the irreplaceability of Jaspers’s voice. But, with a couplet from
Goethe’s Faust, she also sounded a note of optimism that there will always be those who hear and understand the
language of transcendence – Karl Jaspers’s language: “For the Earth will bring them forth again, / As she has always
brought them forth from time immemorial”.

Jaspers has left a rich philosophical vocabulary that can be used by people of faith today to articulate the central
importance of ideas like the Incarnation. With his help, Christians and others have the resources to make these beliefs
more compelling in wider discourse.

Guy Bennett-Hunter is a philosopher and writer based in London. He is the author of Ineffability and Religious
Experience (2014).

Erwin Schrödinger: a misunderstood icon

MICHAEL BROOKS

A new series from the TLS, appraising the works and legacies of the great thinkers and philosophers

Despite devising both the defining equation and the defining thought experiment of quantum physics, Erwin
Schrödinger was never comfortable with what he helped to create. His “Schrödinger’s Cat” paradox, published in 1935,
was an attempt to expose the flaws in the physics that flowed from his eponymous equation. And yet, that cat – both
dead and alive – has become an icon of quantum physics rather than a warning against its shortcomings.

Schrödinger was born in Vienna in 1887. He was an exemplary schoolboy, displaying a startling ability in all his classes.
He taught himself English and French in his spare time, and nurtured a love of classical literature. By the time he
enrolled at the University of Vienna in 1906 he was focused on physics, but still took the time to learn a great deal of
biology, which informed his later work – contributions that were cited as inspirational by the discoverers of DNA.

The work for which he is remembered requires some context. As with all science, an individual’s contributions to
physics rarely occur in a vacuum, and a host of other figures set the stage for Schrödinger’s entrance. His seminal work
began with his attempts to resolve a central mystery of the nascent quantum theory. Max Planck had discovered that
the precise nature of the radiation emitted by hot objects could only be explained if the energy of the radiation came
in discrete lumps that came to be known as quanta. Planck found this somewhat distasteful, as there was (and still is)
no explanation for why this should be so. Einstein subsequently showed this energy quantization to be real with his explanation of the photoelectric effect, for which he won the 1921 Nobel Prize for Physics.
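
To give the claim concrete form: in modern notation (not Planck’s own 1900 presentation), the energy of each “lump”, or quantum, is tied to the frequency of the radiation:

```latex
% Planck's relation: a quantum of radiation of frequency \nu carries the energy
E = h\nu
% where h is Planck's constant (about 6.626 x 10^{-34} joule-seconds)
```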

The Danish physicist Niels Bohr built on Planck’s work, offering an appealing model of the atom in which the electrons
surrounding the atomic nucleus can sit only at particular distances from the nucleus. The energy involved in moving
between these orbits corresponds to Planck’s quanta of energy radiated by the atom, and the positions of the atomic levels are such that their circumferences allow a whole number of wavelengths when the electron is represented by a
wave. Though appealing, Bohr’s model has several shortcomings. No mechanism exists by which the electrons can
jump, for instance. There is also no cause for the jump; it occurs at random. Moreover, the jump does not take place in
ordinary space – the electron simply cannot occupy the physical space between the permitted levels.
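
The “whole number of wavelengths” condition mentioned above is usually written, in the later de Broglie picture rather than in Bohr’s original 1913 form, as:

```latex
% An orbit of radius r is permitted only when a whole number n of electron
% wavelengths \lambda fits around its circumference
2\pi r = n\lambda, \qquad n = 1, 2, 3, \ldots
```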

In an attempt to resolve some of these issues, the French physicist Louis de Broglie formulated a mathematical
framework in which the photon (the quantum packet of light energy), the electron and all other forms of matter could
have a dual existence as both wave and particle. It was this framework that Schrödinger used to develop the ideas in
his paper “Quantisation as an Eigenvalue Problem” (1926), which contains the wave equation that bears his name.

Despite its profound implications, which are still the subject of debate, the Schrödinger equation is relatively simple.
Its principal component is an abstract entity known as a wave function, denoted by the Greek letter psi. This is a
means of describing the properties of any quantum object such as an atom, electron or photon. The other significant
part is H, the “Hamiltonian operator”, which is a mathematical description of the situation under investigation. That
might be something like two colliding photons, or an electron trapped inside an electric field, as occurs in the
hydrogen atom. When applied to this case of the simplest possible atom, the Schrödinger equation reproduced the
energy levels of the electron that had been discovered by examining the radiation emitted by the atom. More complex
situations require a development of Schrödinger’s basic equation in order to correctly reflect reality.
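
For readers who want to see the formula itself, a standard modern rendering (the notation differs from that of Schrödinger’s 1926 paper) is:

```latex
% Time-dependent form: the Hamiltonian operator \hat{H} governs how the
% wave function \psi evolves in time
i\hbar\,\frac{\partial\psi}{\partial t} = \hat{H}\,\psi

% Time-independent form, the one solved for the hydrogen atom: solutions
% exist only for certain discrete energies E_n, which reproduce the
% observed energy levels of the electron
\hat{H}\,\psi_n = E_n\,\psi_n
```

For the hydrogen atom, the Hamiltonian combines the electron’s kinetic energy with the electrostatic pull of the nucleus; it is by solving the second equation for that case that Schrödinger recovered the known energy levels.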

The enduring influence of Schrödinger’s equation is in large part due to its conceptual appeal; it allowed physicists to
talk in terms of the physical attributes of the atom as a kind of ever-changing wave, whose character was determined
by the wave function. However, some – such as Werner Heisenberg – found this superficial and misleading. Heisenberg had invented his own, equivalent quantum mathematics using the more abstract matrix representation; in a letter to a colleague, he labelled Schrödinger’s work “repulsive”.

Heisenberg had a point. Like Bohr’s model of the atom, the Schrödinger equation does not tell us anything about why
such quantization should exist, or what causes a change in the quantum state. Furthermore, any manipulation quickly
takes us beyond our physical intuition, and there is still an ongoing debate about whether the wave function is a
mathematical tool or something that has a physical manifestation in reality.

That is partly because of its strange nature. The wave function requires that each attribute of each particle exists in a
different set of abstract dimensions, almost as if no two photons exist in the same universe. It also allows particles to be in a “superposition” of several different values of a physical property – position, momentum, energy and so on – at once.

It is easy to place quantum objects into a superposition, where a property such as position is undefined between
several different values. This is not a case of not knowing; in the theory, the object genuinely has all these positions.
Max Born inferred from this that the Schrödinger equation assigns each quantum state in the superposition a certain
probability of being exhibited when measured. Until a measurement is performed, however, all possibilities are realized.
That means an object can simultaneously exist in two places or, as in the case of the famed cat, two states: dead and
alive.
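
In symbols – a standard textbook rendering rather than Born’s own 1926 formulation – a quantum object superposed over two possible states can be written:

```latex
% A superposition of two states; \alpha and \beta are "probability amplitudes"
\psi = \alpha\,\psi_{1} + \beta\,\psi_{2}, \qquad |\alpha|^{2} + |\beta|^{2} = 1
% Born's rule: a measurement finds the object in state 1 with probability
% |\alpha|^2 and in state 2 with probability |\beta|^2
```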

Schrödinger published the thought experiment involving the cat in 1935, after a protracted exchange of ideas with
Einstein about the fact that any further system in contact with this quantum system is also described by the wave
function and – in the absence of measurement – subject to superposition. The setup is a little contrived: inside a
sealed box, a piece of radioactive rock is placed beside a Geiger counter that will register if any radioactivity is emitted.
Since radioactive emission is a spontaneous quantum process, with no cause, the rock is in a superposition of having
emitted and not having emitted radiation until there is some kind of measurement.

The next part of the setup is that the Geiger counter is connected to a hammer that will break a vial of cyanide if it
falls. Registering any radiation will cause the hammer to fall, smashing the vial and unleashing the cyanide.
Schrödinger next imagined a cat inside the box: until a measurement happened – such as someone opening the sealed
box and making an observation – the cat must logically be in a superposition of alive and dead.

Schrödinger’s paper containing the cat paradox was entitled “The Present Situation in Quantum Mechanics”. He called
the idea “quite ridiculous”, and Einstein’s response was that it showed the current theory “just cannot be taken as a
description of the real state of affairs”. However, the long reach of Niels Bohr’s influence over the philosophical
interpretation of quantum mechanics has meant that the notion of a dead-and-alive cat in a box – however
nonsensical – has indeed been widely accepted as a description of the real state of affairs in the quantum world.

The biggest problem with it by far is the notion of measurement. Bohr was never able to define what measurement
actually was: does the flick of a meter’s needle count as a measurement, if no one is looking at the meter? Must a
conscious mind be involved for the cat to live or die? What if the cat hears the Geiger counter click; will that “collapse”
the superposition?

For some, measurement is a distraction: something as simple as the release of information from a particle, such as photons that might give away its position, is enough to collapse the superposition. The issue remains unresolved,
and has spawned a range of “interpretations” of quantum theory. These posit a variety of physical phenomena – some
of them happening in universes separate from our own, or involving hidden information or a clockwork universe in
which the experimenter has no free will – as the processes that explain fully the Schrödinger’s cat thought experiment.

Another baffling consequence of the Schrödinger equation is “entanglement”. The equation says that interactions
between quantum particles (or waves) put them in an entangled state, where some of the information describing one
particle can only be found in the other. For Schrödinger, entanglement was “the characteristic trait of quantum
mechanics, the one that enforces its entire departure from classical lines of thought”.
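
The simplest textbook illustration of such a state (not an example drawn from Schrödinger’s own papers) is a pair of two-valued particles prepared as:

```latex
% A maximally entangled ("Bell") state of two qubits: neither particle has a
% definite value on its own, yet the two measurement outcomes always agree
\lvert\Phi^{+}\rangle = \frac{1}{\sqrt{2}}\bigl(\lvert 00\rangle + \lvert 11\rangle\bigr)
```

Measuring the first particle gives 0 or 1 with equal probability; whichever result appears, the second particle is then certain to give the same one – the information describing either particle is, as the equation shows, carried only by the pair.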

Entanglement creates a situation where a measurement on one particle can instantaneously alter the properties of the
other, no matter the physical distance between them. This seems to be in conflict with Einstein’s special theory of
relativity, which says that no information can travel faster than light through the physical universe. Resolving the
problem requires that some aspect of the entanglement correlation between the particles lies outside the physical
space and time we inhabit. However, technologists have not allowed this to stop them putting entanglement to work
in applications such as quantum cryptography. Messages encrypted using quantum physics are protected from
eavesdroppers by the delicate nature of entanglement; an eavesdropper cannot avoid detection because
eavesdropping counts as a form of measurement that changes the quantum state. Commercial quantum cryptography
systems are now in use around the world, and have been employed to protect data transfers of Swiss election results,
military communications and various financial transactions.

Computing based on Schrödinger’s equation has similar technological potential: it is the basis for “quantum
computing”, which uses superpositions of numbers to perform many calculations at once. Multinational companies
such as IBM, Google and Microsoft, along with many smaller academic and industrial efforts, are pushing ever closer to
this revolution in information processing.
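
A rough illustration of what “superpositions of numbers” means – a minimal classical simulation in Python with numpy, a sketch and not a description of how the IBM, Google or Microsoft machines actually work – is a two-qubit register driven into a superposition of the four numbers 0 to 3, from which each measurement returns just one value:

```python
import numpy as np

# Hadamard gate: turns a definite qubit state into an equal superposition
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

# Two-qubit register, initialized to |00> (the number 0)
state = np.zeros(4)
state[0] = 1.0

# Apply a Hadamard to each qubit; the register now holds an equal
# superposition of |00>, |01>, |10> and |11> -- the numbers 0, 1, 2 and 3
state = np.kron(H, H) @ state

# Born rule: outcome k is observed with probability |amplitude_k|^2
probs = np.abs(state) ** 2                    # [0.25, 0.25, 0.25, 0.25]
print(np.random.choice(4, size=10, p=probs))  # ten measurements, one number each
```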

Erwin Schrödinger himself seems to have been a self-centred man, with little consideration for others. In his
“Autobiographical Sketches”, he admits that he only ever had one close friend, Franz Frimmel. He also expresses
regret at not caring properly for his dying father: “I do not know whether my father had adequate medical treatment,
but what I do know is that I should have looked after him better”. The lesson was not learned by the time his mother
was in financial difficulties and unable to afford her rent. “Mother had to leave, where to I do not know”, he writes.

His relationships with women were largely self-serving. His long-suffering wife Anny complained when he took up with
a mistress who was brought into the family home, but she concluded that Schrödinger was worth the consternation he
caused her. “It would be easier to live with a canary than a racehorse, but I prefer the racehorse”, Anny once confided
to one of her husband’s colleagues. Though they remained married, they ended up barely talking to one another.
Schrödinger had a string of affairs and, in later life, admitted to what we would now call grooming of a minor. He took
on a fourteen-year-old tutee when he was thirty-nine, subjected her to a “fair amount of petting and fondling”, then
worked to develop the relationship for three more years before consummating the affair.

Schrödinger’s time in Vienna came to an end with the rise of Hitler’s regime. He left in September 1938, leaving his
Nobel medal and all other valuables behind, and eventually settled in Dublin. Here he worked on ways to marry
quantum theory with Einstein’s general theory of relativity. His efforts to promote his own ideas in this area caused a
rift with Einstein, who was working on similar goals, but using a different approach.

Both men’s attempts to formulate a single unified theory describing the entire universe were largely fruitless.
Schrödinger, though, found a more successful coda to his research career by rekindling his interest in biology. Here, his
contributions have been clear and significant, with Francis Crick and James Watson independently citing this foray into
the life sciences as a direct influence on their success in the hunt for the roots of heredity.

Schrödinger made his ideas public in a series of lectures given at Trinity College that were later published (in 1944) as a
book with the provocative title What is Life?: The physical aspect of the living cell. In these pages is Schrödinger’s
suggestion that inherited characteristics must stem from a molecule with a structure that does not repeat and
contains a “code-script”. This information, he said, will determine “the entire pattern of the individual’s future
development and of its functioning in the mature state”.

According to James Watson, reading this book was a “major factor” in Francis Crick leaving physics and developing an
interest in biology. For Watson’s part, Schrödinger’s contribution “very elegantly propounded the belief that genes
were the key components of living cells and that, to understand what life is, we must know how genes act”. With the
science that has resulted from this pair’s discovery of the structure of DNA, it is hard to argue that Schrödinger’s contribution to
biology has been any less than his contribution to physics. It also fulfilled his stated belief about the purpose of
scientific inquiry. “Who are we?” he asked in his essay Science and Humanism (1951). “The answer to this question is
not only one of the tasks but the task of science.”

Michael Brooks is the author of The Quantum Astrologer’s Handbook

The relentless honesty of Ludwig Wittgenstein

IAN GROUND

A new series from the TLS, appraising the works and legacies of the great thinkers and philosophers

If you ask philosophers – those in the English-speaking analytic tradition anyway – who is the most important
philosopher of the twentieth century, they will most likely name Ludwig Wittgenstein. But the chances are that if you
ask them exactly why he was so important, they will be unable to tell you. Moreover, in their own philosophical
practice it will be rare, certainly these days, that they mention him or his work. Indeed, they may very fluently
introduce positions, against which Wittgenstein launched powerful arguments: the very arguments which, by general
agreement, make him such an important philosopher. Contemporary philosophers don’t argue with Wittgenstein.
Rather they bypass him. Wittgenstein has a deeply ambivalent status – he has authority, but not influence.

For the more general reader, Wittgenstein’s status in contemporary philosophy will be puzzling. The general view is
that Wittgenstein is surely the very model of a great philosopher. The perception is that he is difficult, obscure and
intense, severe and mystical, charismatic and strange, driven and tragic, with his charisma and difficulty bound up with
his character and his life. Wittgenstein saw philosophy not just as a vocation, but as a way of life he had to lead. This is
perhaps why writers and artists have found him an object of fascination and inspiration. He is the subject of novels,
poetry, plays, painting, music, sculpture and films. In the arts and the culture generally, Wittgenstein seems to be what
a philosopher ought to be.

Born in 1889, Wittgenstein came from an extraordinarily wealthy but tragically dysfunctional Viennese family. He
made friends and enemies with equal alacrity. He travelled widely. As well as regular journeys between England and
Vienna, he visited and lived for periods in Ireland, Norway, Russia, the US and, in the UK, Cambridge, Manchester,
Swansea and Newcastle. At various times, he was an engineer, a sculptor, a photographer, a school teacher, a hospital
technician and, of course, a fellow in philosophy at Cambridge. He knew almost every great figure in the intellectual
culture of the first half of the twentieth century. He gave away his fortune and, several times, gave up philosophy. He
published only one book in his lifetime – the Tractatus Logico-Philosophicus (1921) – and claimed that this work solved
all the (essential) problems of philosophy. But his later work appears to disown much of it. His reputation is based on
the huge collection of manuscripts and notes known as the Nachlass, together with accounts made by others of
lectures he gave. Published in various forms, the central work is the posthumous Philosophical Investigations (1953).
But later edited collections of remarks such as Zettel, On Certainty and Remarks on the Foundations of Mathematics
and others are also of enormous importance.

Consisting of seven propositions, all but the last with multiple sub-propositions, the Tractatus is austerely beautiful but
severe and technically demanding.

One way to approach it is to see the book as the ultimate distillation of a particular historically dominant conception of
ourselves: first and foremost, we are conscious thinkers. Only after are we active, embodied, speaking agents. Before
we communicate, we must first have something to communicate. We must first be capable of true and false thoughts
about the world: to be able to think about things, and combinations of things – what, in the Tractatus, Wittgenstein
calls “states of affairs”. Some of these states of affairs obtain and some do not. The actual world consists of all the
states of affairs – combinations of things – that obtain: the facts. (Hence “The world is the totality of facts, not of
things”.) But we can also represent to ourselves what does not obtain – the merely possible – and, as well as thinking
what is true, we can think falsely.

We can see Wittgenstein’s question in the Tractatus as: how is this possible? What must be the case if we are able to
have such true and false thoughts of the world? What must be the case if the world is, by us, thinkable?

His answer is that the world, language and thought must share a common form of elements and their possible
arrangements. Wittgenstein calls this “logical form”. Elements in our representations of the world, true or false, stand
in the same relationship to each other as the elements that constitute states of affairs. Reality, language and thought
mirror each other. It follows that if we think or say anything meaningful, then what we think or say must be capable of
being true or false. For only then will it picture or represent a possible fact. Otherwise what we say or think will be
senseless. There must also be a mechanism (which the Tractatus describes) for allowing more complex meaningful
thoughts and statements to be generated from more primitive ones.

In articulating his account of how it is that we can think and speak at all, the Tractatus gives expression, sublime and
exact but not wholly original, to a conception of ourselves that was arguably already latent in our intellectual culture.
A conception of ourselves as representing beings – minds – which can represent the world to ourselves, think and say
things that are true or false, and can have reliable means of acquiring truths about the world – which we call science.
This picture of the nature of mind, and hence of ourselves, continues to be the default conception in the cognitive
sciences. Minds are representational engines.

But what is most strikingly original about Wittgenstein’s account in the Tractatus is his drawing out of the implications
– which are to a degree disturbing – of this conception. One implication is for values. If I think or claim that the car is in
the garage, then, built into that claim is the idea that this may be true or false. But when I think that, say, slavery is
morally wrong, I think something that could not be otherwise than true (even if others should disagree). But then,
according to the Tractatus, in ethical thought, I am not representing how the world is one way rather than another. So
strictly speaking, ethical talk will make no sense. Still, according to Wittgenstein, we are ethical beings. The ethical is
real. Teaching us how to live in the light of that thought was, Wittgenstein believed, the true aim of the Tractatus.

This general constraint on what can be meaningfully said also applies to what philosophers have wanted to claim over
the millennia. For philosophers make claims not about what happens to be true, but what must be. But if the account
offered in the Tractatus of how thought is possible is correct, then such claims, not being capable of being false, are
strictly meaningless. We might think of it this way: I can use chess notation to describe the actual position on a chess
board. But I cannot use chess notation to say how chess notation represents any such chess position. That shows itself
in the way the notation works. Of course, I can use English (or any other natural language) to say or teach how the
chess notation works. But when we want to explain not chess notation, or any particular natural language but
language (and thought) itself, that recourse to another medium is not available. There is only the showing left.

With the relentless honesty that characterized all his thinking, Wittgenstein applies this thought to the Tractatus itself.
For the relationship between thought and the world that the Tractatus articulates is not one among all the facts there
are. It is a condition of there being any thinkable facts. Philosophy as envisaged by the Tractatus is therefore a futile
attempt to say what cannot be meaningfully said but which can only show itself. So, philosophy, insofar as it is possible
at all, cannot be a body of doctrines. It must be an activity. It must aim not, like science, at truth and knowledge, but
only at clarity and, with the achievement of that clarity, peace. This is why Wittgenstein claims that the propositions of
the Tractatus are like rungs on a ladder. We use them to climb up to a position where we can see things as they are,
where we can “see the world aright”. But then we throw the ladder away.

In the years that followed – which have been examined and documented in immense detail by scholars – Wittgenstein
came to abandon and replace much of this conception of language and thought while maintaining a great deal of its
spirit. Perhaps it was because Wittgenstein had been able to give such complete expression to the earlier conception
that only he was able to see, so deeply and so clearly, where it came from, how it failed, what should be kept and what
replaced.

This new conception of ourselves – of language and of mind – is articulated in his masterpiece, widely regarded as one
of the two or three greatest works of philosophy in the Western Tradition, the Philosophical Investigations.

The work consists of 693 numbered remarks of varying length (with a second part whose exact relationship to the
main body is a matter of scholarly controversy). In contrast to the Tractatus, the Philosophical Investigations can,
indeed must, be read first hand. It contains almost nothing that might be called technical and mentions only a very few
other philosophers by name. But as Wittgenstein wrote: “It will be easy to read what I will write. What will be hard to
understand is the point of what I say”.

In this work, Wittgenstein thinks and writes with ruthless intellectual honesty. He pulls at every thread in his thought.
To read it is to have the palpable sense of a thinker in the act of philosophical inquiry. And yet, at the same time, we
cannot as readers be merely the passive audience for this drama. To read the Investigations as it should be read is to
participate in a shared, essentially democratic endeavour in which we must find our own place among the myriad
voices that enter, have their say, and exit, call out from off stage, return again in different garb with new parts. We are
invited, and must accept the invitation, to be one of these players. We have to try to read it as honestly as it was written.

As we struggle to follow the twisting lines of thought – the apparently abrupt changes of topic, the multiple voices and
changes in key and colour – we also have to try to pause and answer the hundreds of questions it asks. In fact, as
someone once counted, there are 784 questions asked in the Investigations. Of those only 110 are answered. And of
those answers, seventy are meant to be wrong. And more often than not, we find that the answer we want to give to a
question – that, if we pause for a moment, comes naturally to us – is then anticipated and forms the subject of a next or
near passage or remark.

Sometimes he more or less straightforwardly asks a question, makes an observation and answers it:

Is what we call “obeying a rule” something that it would be possible for only one man to do, and to do only once in his
life? – This is of course a note on the grammar of the expression “to obey a rule”.

It is not possible that there should have been only one occasion on which someone obeyed a rule. It is not possible
that there should have been only one occasion on which a report was made, an order given or understood; and so on.
– To obey a rule, to make a report, to give an order, to play a game of chess, are customs (uses, institutions).

Sometimes, he directly engages with the reader in order to end or start a new track of his investigations:

Make the following experiment: say “It’s cold here” and mean “It’s warm here”. Can you do it? – And what are you
doing as you do it? And is there only one way of doing it?

Elsewhere, anticipating our own first response, he offers further questions as misleading answers to the original
question, and then offers his own, sometimes sharp, put-down:

What gives us so much as the idea that living beings, things, can feel?

Is it that my education has led me to it by drawing my attention to feelings in myself, and now I transfer the idea to
objects outside myself? That I recognize that there is something there (in me) which I can call “pain” without getting
into conflict with the way other people use this word? – I do not transfer my idea to stones, plants, etc.

Couldn’t I imagine having frightful pains and turning to stone while they lasted? Well, how do I know, if I shut my eyes,
whether I have not turned into a stone? And if that has happened, in what sense will the stone have the pains? In
what sense will they be ascribable to the stone? And why need the pain have a bearer at all here?!

And can one say of the stone that it has a soul and that is what has the pain? What has a soul, or pain, to do with a
stone?

Only of what behaves like a human being can one say that it has pains.

There are many other uses of questions in the Investigations (indeed, Wittgenstein once considered writing a work
that consisted entirely of questions). Responding to them, as we read, makes the experience of reading Wittgenstein
peculiarly intimate, and also, as very many have found, including Daniel Dennett, “liberating and exhilarating”. But
having gone through this process it is then very difficult to stand back and say, “Well, then what we learned was such
and such. I can use that idea here in relation to this current debate or issue”. In this respect, reading Wittgenstein is
very like engaging with works of art: it is a process deeply resistant to paraphrase. You have to experience it for
yourself. And it is not just what but how you think that will change.

The Philosophical Investigations discusses the nature of language and mind, and the confusions about both to which
Wittgenstein thought we and our culture are inevitably prone. He seeks to explore the conception of ourselves he had
so completely articulated in the Tractatus: that we are fundamentally thinking, knowing, representing beings. And to
expose this conception as a deeply engrained set of mutually reinforcing illusions and confusions, mistakes and myths.
He attempts this not, or not mostly, by what philosophy traditionally regards as argument. For a picture is not the kind
of thing against which one can argue. Rather his aim is to break the grip of the pictures of mind and meaning that
“hold us captive”. Thought experiments, reminders of perfectly ordinary facts of life or ways of speaking, striking
juxtapositions, elaborate lists of examples and a host of disputing voices are all brought into play. All the time, he is
criss-crossing the same landscape in different directions, offering sketches, partial and incomplete, of what he finds
and trying to map how apparently distinct positions on the nature of mind and of language are connected together.
Just as in the Tractatus, in the Philosophical Investigations, the task of philosophy is not to advance claims or theories,
but to be a never-ending activity of seeking clarity about the ways that we think. One difference from the earlier work
is that the Philosophical Investigations gives us not a single ladder to climb. Instead it shows us the paths up a series of
hills and promontories, from which we may gain different overviews of the landscape and, with luck, see the light
gradually dawn.

A guiding theme is Wittgenstein’s attempt to wean us from the conception of intrinsically representational, intrinsically
meaningful, psychological states or processes and their non-psychological analogue in the form of meaning rules.

Central to this conception are two pictures or collections of pictures. One is a way of conceiving of the inner and the
outer: our subjective inner lives and our outer behaviour in a world of others. We think of our inner lives as being like
an internal space in which there exist various things, states and processes: thoughts, emotions, sensations. What we
do is merely the outward sign of this inner reality: behaviour.

The other picture or set of pictures is a way of conceiving of how language works. We think that language is primarily a
matter of naming things. And that all the other diverse uses to which we put language – detailed at length through the
text – are trivial compared to the primal, foundational act of naming things.

Wittgenstein shows how these two sets of pictures mutually reinforce each other in myriad ways. One way is this:
because we think language is fundamentally about naming things, we think that psychological concepts must also be
names of things, but of things in an inner space. So we model the reality of the inner on the existence of physical
things with the peculiar property that these mental objects are only visible to and nameable by their owner. But we
are also puzzled about how words can function as names at all. How can they reach out to what they name? Words
are, after all, just arbitrary sounds or squiggles. We think then that it must be something special indeed which enables
words to have meaning. It must be some special set of the psychological states and processes, a picture of which we
already have. Our words mean because we mean. And we can mean because we are in possession of inner, essentially
private psychological states that can intrinsically reach out to the world. Language is really a collection of private, inner
acts of meaning and naming, a collection of private languages that happen, more or less imperfectly, to overlap.

In this way, Wittgenstein seeks to trace the deep connections between our mistaken conceptions of mind and
meaning. In their place, he offers an entirely different vision. He insists that intrinsic meaning, on which
representational capacities depend, only gets going in and through the shared practices and interactions of living,
embodied beings and is only visible in and through the lives and activity of such beings. These activities operate in and
through language – in what Wittgenstein calls “language-games”. In the beginning is not the word at all. But the deed.
A consequence of that position is that we no longer think of the inner versus the outer in the same way. The idea of
public language as rooted in a prior private language is demonstrated to be an illusion. One that fails to recognize that
we are social, communicating beings and that we are so all the way down.

Say that we become puzzled about money. Here is something that people deeply desire, spend and risk their lives
acquiring. People are “worth” so much money and so on. But perhaps we are struck by the fact that coins and notes
are, in themselves, just worthless bits of metal or paper. How can they have value? (Note that we have already slipped,
even at the moment we first become puzzled, into thinking of “value” as a kind of property something has.)

Imagine that someone replies like this: it is true that actual cash is arbitrary – just stuff. What matters is that cash is
backed by something that really does have value. The “promise to pay the bearer on demand” on UK notes. The gold
in the bank is what really has value. The money is just an outward sign of that true value.

But gold is also just a kind of metal. Why should it have value? The same question we asked about the cash can now be
asked about the gold.

Someone else might interject: gold is rare and hard to acquire. That’s why it has value. But lots of things are rare
without being valuable. And in any case, no one actually trades in their money for gold. Banks won’t even let you do
that. Yet we go on treating the money as valuable.

Here of course we will want to say this: actual money (coins and notes) isn’t intrinsically valuable. What matters is only
that it is in fact used in trade and exchanges. The value lies in the use of the money. It’s not that the exchanges use
money because the money has value. Rather the money has value because the exchanges have value. Or rather what
we mean by monetary value is made manifest in and through the activities of exchange and the myriad things we do
with money. And once we see things that way round, it will now seem rather strange to say that money is just
worthless stuff. It looks that way and we became puzzled in the first place only because we tricked ourselves into
separating out the notes and coins from their use in exchange. Our problem was how to explain how certain stuff –
notes and coins – had value. So we started looking for another kind of stuff to carry that value. That is, we had already
committed to a particular view of what an explanation would look like. The solution was to change our view of what
would count as an explanation or indeed whether one was actually needed at all. We solve the problem when we
dissolve the source of our puzzlement.

The analogy is between the values of notes and coins and the meaning of words and sentences. We see that particular
sounds and squiggles in a particular language, say English, are in themselves arbitrary, having no intrinsic connection
to the things they stand for. So we think there must be something standing behind the words which gives them the
real meaning. What could that be? Well, we might suggest ideas, thoughts and intentions. We mean things by our
words. Others understand us because they know what we mean by the sounds or marks we make. The words are
arbitrary but the thoughts are not. Their meaning is laid up in the vaults of the mind.

But just as gold was not the real explanation of the value of money, thoughts are not the explanation of the meanings
of words. It is not that gold or thoughts don’t exist. Of course they do. But if it’s a problem to explain how words have
meaning, it is equally a problem to explain how thoughts have meaning.

We see people using money and words in all their forms, buying and selling, speaking and listening. Nothing is hidden.
Nothing stands behind all the activity. The value – or meaning – lies in the activity. We might note too that, in our
analogy, private money – a currency that one alone could use – would be nonsensical. Similarly, a private language, the
words of which only an individual could understand, is equally senseless. And a philosophical theory of mind and
meaning, which implied the possibility of such a private language, would, for that reason, be mistaken.

This set of exchanges and twists of thought is, or is something like, what is going on in this passage from the
Philosophical Investigations:

How does it come about that this arrow ➔ points? Doesn’t it seem to carry in it something beside itself? – “No, not
the dead line on paper; only the psychical thing, the meaning, can do that.” – That is both true and false. The arrow
points only in the application that a living being makes of it.

This pointing is not a hocus-pocus which can be performed only by the soul.

We want to say: “When we mean something, it’s like going up to someone, it’s not having a dead picture (of any
kind).” We go up to the thing we mean.

“When one means something, it is oneself meaning”; so one is oneself in motion. One is rushing ahead and so cannot
also observe oneself rushing ahead. Indeed not.

Yes: meaning something is like going up to someone.

In this passage, the arrow serves of course as the paradigmatic meaningful sign. Wittgenstein’s opponent wants to
stress the passivity of the sign in itself and thinks that therefore some account must explain how it is that thoughts
(“the psychical thing”) actively reach out to the things to which they refer as if they were going up to someone to
shake his hand. Wittgenstein agrees with this opponent that meaning something is like going up to someone. But, he
suggests, this is not true, as his opponent intends, in a merely metaphorical sense. Rather, our meaning something is
literally like going up to someone. Meaning gets going because we move around and act on a world of other objects
and agents; pragmatic engagements in the world, which logically precede language. It is these practical engagements,
rather than the shared logical form of the Tractatus, that enable meaning. We do not mirror reality. We are enmeshed
in it.

Wittgenstein was hostile to modern philosophy as he found it. He thought it the product of a culture that had come to
model everything that matters about our lives on scientific explanation. In its ever-extending observance of the idea
that knowledge, not wisdom, is our goal, that what matters is information rather than insight, and that we best
address the problems that beset us, not with changes in our heart and spirit but with more data and better theories,
our culture is pretty much exactly as Wittgenstein feared it would become. He sought to uncover the deep
undercurrents of thought that had produced this attitude. He feared it would lead not to a better world but to the
demise of our civilization. That perhaps explains his deep unpopularity today. It is for the same reason that Ludwig
Wittgenstein is the most important philosopher of modern times.

Jean-Paul Sartre and the demands of freedom

GARY COX

A new series from the TLS, appraising the works and legacies of the great thinkers and philosophers

In Existentialism and Humanism, Sartre wrote, “There is no genius other than that which is expressed in works of art”.
True to a central maxim of his existentialist philosophy – “to be is to do” – Sartre built his colossal reputation as a
philosopher, novelist, playwright, screenwriter, biographer, diarist, literary theorist, essayist and journalist out of
sustained hard work. He was gifted but preferred to attribute his achievements to perspiration rather than inspiration.
As he wrote in his autobiography, Words: “Where would be the anguish, the ordeal, the temptation resisted, even the
merit, if I had gifts?” From childhood his ambition was to be the great, dead French writer he became. He wrote for at
least six hours a day for most of his life. “If I go a day without writing, the scar burns me.”

Sartre’s prolific and often drug-fuelled output is now a part of the legend, along with his numerous love affairs (despite
his self-proclaimed ugliness), his wartime adventures and the post-war, hard-left political activism that led him and his
lifelong companion, Simone de Beauvoir, to fraternize with many dictators.

By the standards of most philosophers, Sartre led an exciting life. His adventures, his singular appearance, his
relentless radicalism, his eccentricity, make him an easy figure to caricature, and he was canny when it came to
crafting his image, but for all that there is a serious, systematic and inspiring philosophy behind the melodrama, a
grand theory rooted in the best traditions of Western thought.

Born in Paris in 1905, Sartre was fifteen months old when his father died, leaving him to develop free from the
oppressions of a paternal will. He was raised by his doting mother and her father, whose sizeable library was the
precocious child’s playground. Following a difficult period of exile in La Rochelle when his mother remarried, Sartre
progressed to the prestigious École Normale Supérieure where he met de Beauvoir and (on his second attempt)
graduated with top marks in 1929. The easy fame he had expected as an undergraduate eluded him and in 1931 he
became a provincial school teacher in Le Havre. He continued studying and writing, however, and by 1938 finally made
his name with the publication of his cult existentialist novel, Nausea.

Conscripted at the start of the Second World War, Sartre was taken prisoner by the German advance of 1940. He may
have been released on medical grounds, or he may have escaped; either way, by spring 1941 he was back in Paris where he
founded the resistance movement Socialism and Freedom. All this time, invigorated by the war, he had been writing
his major work, Being and Nothingness: An essay on phenomenological ontology, published in 1943.

Often called “the bible of existentialism”, this dense 650-page book was the extraordinary distillation of everything his
monumental intellect had read, written, considered, experienced and discussed for more than twenty years. Today it is
part of the canon of Western philosophy.

Most philosophical analysis of Sartre’s existentialism has centred, and continues to centre, on Being and
Nothingness. At the heart of Sartre’s philosophy are four closely related phenomena: consciousness, freedom, bad
faith and authenticity. Being and Nothingness deals with the first three and promises a future work “on the ethical
plane”. Authenticity is central to Sartre’s ethics. Sartre never completed a book on ethics but he says enough on
authenticity elsewhere for his position to be clear. Importantly, Sartre’s view of authenticity makes sense only in light
of his view of bad faith, his view of bad faith only in light of his view of freedom and his view of freedom only in light of
his view of consciousness.

Sartre’s question in Being and Nothingness is the same as that of his major influences, Hegel, Husserl and Heidegger:
what is consciousness? What is the nature of a being that has and is a relationship to the world, that is an awareness
or consciousness of the world and which acts upon the world? Sartre’s answer is that the only kind of being that can
exist in this way is one that is, in itself, nothing; a being that is a negation, non-being or nothingness.

Following Husserl, Sartre argues that consciousness is always consciousness of something. Consciousness is not a thing
in its own right but entirely a relationship to the world it is conscious of. This is the theory of intentionality.
Consciousness always intends its object and is never merely a set of brain states. Expectation is expectation of
something, desire is desire for something and so on. Sartre refers to consciousness as being-for-itself in order to
highlight the fact that it does not exist in its own right, as the world (or “being-in-itself”) does, but must constantly
achieve its borrowed being for itself by being consciousness of being-in-itself.

Each of us is a being-for-itself in relation to being-in-itself. Human reality encompasses both the facts of our situation
and the not-yet-realized potential of the things we are not, but which we could be. It incorporates both the facts of
our lives and our past – our “facticity” – and the negating, nihilating force of consciousness by virtue of which we have
“transcendence”. We do not exist simply in our own right as chairs do, but always in relation to – and as the negation
of – our situation. A chair is, but we must constantly create ourselves over time through our actions. We have to
choose who we are each moment by what we choose to do, without ever being able to become a fixed entity. This is
what existentialists call the indeterminacy of the self. The self is not a thing, it is a being unavoidably caught up in a
constant temporal process of becoming.

Time or temporality is important for Sartre’s theory of consciousness. The dimensions of temporality so familiar to us,
past, present, future, are the same as the dimensions of consciousness, to the extent that past and future only arise as
features of the world through and for consciousness. We are never entirely what we were – the past – and not yet
what we will be: the future. As for the present, it is nothing but our presence to the world as a being constantly
moving forward in time. To reach the future is for it already to be past. Hence, tomorrow never comes. Sartre calls the
past past-future and the future future-past.

Closely related to being-for-itself is being-for-others. Certain features of a person’s consciousness are only made real
because of the consciousness of another person – the Other: shame, embarrassment and pride, for example. A person
is his being-for-others whenever there is an Other conscious of him, one who is free to evaluate his actions as he
chooses. The Other is free to look at me, judge me, form opinions of me that I cannot ultimately control. And since
each of us is Other to the Other, interpersonal relations are marked by conflict; each one of us casts everyone else as a
being-for-others in our eyes; we are free to judge and objectify them and they in turn constantly and inexorably judge
and objectify us.

Existentialism is best known as a philosophy of freedom. Sartre argues that freedom is limitless. This is often
misunderstood. He does not mean we are free to jump to the moon, or that we can radically re-invent ourselves from
scratch at any moment – but rather that there is no limit to our obligation to choose who we are through what we do
or not do. This is what he means when he says we are “Condemned to be free”.

Each person, then, is a futurising intention, a temporal flight from his present nothingness towards a future
coincidence with himself that is never achieved. It is in that open future – which defines him and at which he aims –
that a person is free. As essentially free, people must be free; any attempt to evade this responsibility by choosing not
to choose constitutes bad faith.

Bad faith is basically using freedom to deny freedom. It is choosing to say “I have no choice” – to reject the being-for-
itself of human reality, and identify falsely with the in-itself – then treating that choice as though it is not a choice. Bad
faith is sometimes described as self-deception but this is strictly inaccurate; I cannot lie to myself, as I can lie to
someone else, without catching myself in the act. Bad faith is rather self-evasion or self-distraction, the practice of
ignoring the meaning of my actions by desperately focusing on other matters as I move forward in time.

Bad faith is irresponsibility. It is often overlooked that responsibility is integral to Sartre’s theory of freedom, largely
because the claim that we are always free is far more appealing than the claim that we are always responsible. For
Sartre, authenticity is the overcoming of bad faith. Authenticity is taking responsibility for freedom; taking
responsibility for what we do in every situation. Authenticity is affirming all our choices and therefore all our past
without regret.

During the Second World War, Sartre formulated the concept of authentic-being-in-situation after encountering a
comrade who declared he was not a soldier but a civilian in disguise. It is true that his comrade was not a mere soldier-
thing, but he was nonetheless a soldier because he was acting as a soldier. His comrade was in bad faith, refusing to
confront the reality of his situation and the meaning of his choices and actions in that situation. Sartre chose the war,
decided that his situation was not merely happening to him. He chose his past, he claimed – even his birth. By his own
lights, he adopted the authentic attitude that his entire life had been leading to that time and place. He took full
possession of his situation without regret.

Post-war, Sartre developed his existentialism in an increasingly political direction. He placed his existentialist theory of
the individual at the heart of the Marxist theory of the historically defined collective. He also became increasingly
interested in biography, and his last major work, before his eyesight failed in 1973, was an exhaustive existentialist
psychoanalytic biography of Flaubert, The Family Idiot. He died in Paris in 1980 and over 50,000 people lined the
streets at his funeral. His remains lie in Montparnasse Cemetery alongside those of de Beauvoir.

Gary Cox is the author of a number of books on Sartre, existentialism and general philosophy, including How to Be an
Existentialist and Existentialism and Excess: The life and times of Jean-Paul Sartre

Jean-Jacques Rousseau and enforced freedom

DEREK MATRAVERS

A new series from the TLS, appraising the works and legacies of the great thinkers and philosophers

Jean-Jacques Rousseau (1712–78) is, perhaps more than any other philosopher, a contradictory figure. He is a
predecessor of liberalism and a theorist of fascism; a champion of the Enlightenment and its most severe critic; a
Classicist critic of Romanticism and vice versa; and an advocate of humane, child-centred education, despite giving up
his own five children to an orphanage and almost certain death. His reputation these days rests primarily on his
political philosophy (in particular, On the Social Contract), his autobiography (The Confessions), and a part novel, part
philosophical treatise, and part syllabus for progressive education (Émile).

Rousseau is very quotable – never more so than at the beginning of Book One, Chapter One of the Social Contract:
“Man is born free, and everywhere he is in chains”. This appears to capture the view with which he is most famously
associated: that man is born naturally good only to be corrupted by society. Another quotable opening sentence, this
time from Émile, seems to support this: “God makes all things good; man meddles with them and they become evil”.
This phrase is misleading on two counts. First, Rousseau is clear that the situation of humankind in its pre-societal
state is not to be envied. Second, he did not think it our inevitable fate to be corrupted by society; indeed, the point of
the Social Contract is to provide a blueprint for a society in which people are able to flourish.

It was fairly standard in the seventeenth and eighteenth centuries to compare humankind before societies formed (the
“state of nature”) to our condition in society. This was not an attempt to write history from the armchair, but rather a
thought experiment: a comparison of how things were then with how they are now, to shine a light on the advantages of
states. Rousseau had a weakness for the rhetorical flourish – and his powers of eloquence sometimes served to
highlight the notion that leaving the state of nature had been a catastrophe.

The first person who, having enclosed a plot of land, took it into his head to say this is mine and found people simple
enough to believe him, was the true founder of civil society. What crimes, wars, murders, what miseries and horrors
would the human race have been spared, had someone pulled up the stakes and filled in the ditch and cried out to his
fellow men: “Do not listen to this imposter. You are lost if you forget that the fruits of the earth belong to all and the
earth to no one!”

Nonetheless, further flourishes pull in a different direction:

The passage from the state of nature to the civil state produces quite a remarkable change in man, for it substitutes
justice for instinct in his behaviour and gives his actions a moral quality that they previously lacked. Only then, when
the voice of duty replaces physical impulse and the right replaces appetite, does man, who had hitherto taken only
himself into account, find himself forced to act upon other principles and to consult his reason before listening to his
inclinations. Although in this state he deprives himself of several advantages belonging to him in the state of nature,
he regains such great ones. His faculties are exercised and ennobled, his entire soul is elevated to such a height
that, if the abuse of this new condition did not often lower his status to beneath the level he left, he ought constantly
to bless the happy moment that pulled him away from it forever and which transformed him from a stupid, limited
animal into an intelligent being and a man.

Rousseau had identified the fundamental flaw in the state of nature argument; we are not comparing like with like.
The change from the state of nature to the civil state transforms us altogether – it changes us psychologically, and
hence morally and politically, from “stupid limited creatures” to those governed by justice, morality, duty, right and
reason. Furthermore (a point stressed more recently by Bernard Williams) there is no route back – attempts to turn
the clock back to an earlier politics, or even to a pre-politics, by romantics of both the Left and Right are doomed not
only to failure but to catastrophe.

Given that we are stuck in civil society, are we doomed to misery and horror? In the Social Contract he attempts to
show how we can avoid such a consequence. The problem is this: how can we organize society such that each
individual is “as free as before”, and yet is always free from the will of others? The solution lies in Rousseau’s difficult
notion of “the general will”. Each of us has a particular will: what we, as individuals, would like to do. Groups of people
have “the will of all”: something that the group wants to do that takes into account what individual members of the
group want to do. The general will is something different. It gets its content and its legitimacy from all of us, and thus
applies to each of us: “it must derive from all in order to be applied to all”. Yet it is not clear how Rousseau attempts to
square the circle. He sometimes writes as if the general will were a sort of cancelling out of the idiosyncratic elements
of various particular wills. However, that is not going to leave us “as free as before”; we are going to have to give up
whatever does not fit into the consensus. At other times, Rousseau writes as if the general will were a Platonic
abstraction of what is really good for us. But this is equally problematic. Is there one way for a society to be that will
provide what is best for each and every person – a way that leaves each and every person “as free as before”? Clearly
not – at least, on any reasonable interpretation of “free”.

Despite the difficulties, the promise Rousseau holds out is extremely attractive. Civil society would consist of moral
equals, who were free of any relationships of dominance and coercion. But there is simply no coherent case for
Rousseau that would deliver everything he promises. Indeed, he himself worries that he could be “accused of
contradiction” or that, although all his ideas are connected, “he could not expound them all at once”. However, one
can sort of see a way through. The change from the state of nature to the civil state changes us utterly into “a moral
and collective body” rather than simply a collection of individuals. Our wills will change to reflect this; we will be
focused on the corporate good rather than pursuing our own advantage. Furthermore, this will be what we, as
individuals, really want; hence, obeying the general will will leave us “as free as before”.

Not every commentator on Rousseau has been drawn into this vision. Writing in 1946, Bertrand Russell said of
Rousseau’s philosophy “Its first fruits in practice were the reign of Robespierre; the dictatorships of Russia and
Germany (especially the latter) are the outcome of Rousseau’s teaching”. Isaiah Berlin, in “Two Concepts of Liberty”,
also cited Rousseau as a source of the modern political abuse of positive liberty. The problem these critics find lies in
Rousseau’s view that what we actually want might not coincide with what we ought to want – and that there is a way
of finding out the latter. For this creates a problem: what to do with those who do not want what we know they ought
to? Looking back over the past 250 years, it is easy to find Rousseau’s solution chilling: “Thus, in order for the social
compact to avoid being an empty formula, it tacitly involves the commitment – which alone can give force to the
others – that whoever refuses to obey the general will will be forced to do so by the entire body. This means merely
that he will be forced to be free”.

What survives of Rousseau? He is historically interesting as a forerunner of figures as diverse as Kant (the only work of
art Kant owned was a picture of Rousseau) and Marx (although oddly Marx barely mentions him). He could be acutely
perceptive – although he could also be preposterously blinkered. He is excellent on the psychological damage done to
individuals by society – and it is not too much of a stretch to think of this as the damage done to individuals by
capitalism. He brings to the surface, even if he does not always solve, pressing problems in political philosophy –
including how exactly a civil society gets off the ground. Above all, his writing is invigorating because he rails
ceaselessly against tyranny and ugliness – he would have hated what we have allowed ourselves to become: beings
that are tyrannized by social and economic forces outside our control and who have laid waste to the environment. His
prose can possess a great lyric beauty – particularly the first half of The Confessions – and he resides in that long
tradition of disconcerting writers and thinkers, from Wordsworth to the Beats, who remind us of just how far we fail to
measure up to what a flourishing human life could be.

Arthur Schopenhauer: the first European Buddhist

JULIAN YOUNG

A new series from the TLS, appraising the works and legacies of the great thinkers and philosophers

With its stunning advances in science and technology, the nineteenth century was a century of optimism. Hegel’s
presentation of the history of the West as a Bildungsroman, a story of the ever-increasing realization of “reason” in
human affairs, captured the spirit of the times. Schopenhauer, however – the only major philosopher to declare
himself a pessimist – regarded Hegel’s story as a heartless fiction. Progress, he held, is a delusion: life was, is, and
always will be, suffering: “the ceaseless efforts to banish suffering achieve nothing but a change in its form”. Far, then,
from being the creation of a benevolent God, or of his surrogate, Hegelian “reason”, the world is something that
“ought not to exist”.

Born in Danzig (Gdansk) in 1788, Arthur Schopenhauer was brought up in Hamburg, the son of a cosmopolitan
businessman and a literary mother. Independently wealthy, he never held a paid academic post, and had indeed
nothing but scorn for those who live “from rather than for philosophy”, namely, “the professors of philosophy”.
Independence of means, Schopenhauer insisted, is a prerequisite of independence of thought. Accompanied by a
succession of poodles – he never married – he spent the last twenty-seven years of his life in Frankfurt. On the wall of
his study he had a portrait of Kant and, on his desk, a statue of the Buddha. For pleasure, he read The Times of
London, played the flute, and attended the Frankfurt opera. Unknown until his final decade, he died in 1860, the most
famous philosopher in Europe.

Schopenhauer wrote only one work of systematic philosophy, The World as Will and Representation, which he
published in 1818. It is divided into four “books”. In 1844 he produced a second edition consisting of the 1818 volume
plus a second volume comprising four “supplements” to the four books of the first. This doubled the overall length of
the work to 1,000 pages. Given the opacity of most German philosophical writing, the “English clarity” of the work
(Schopenhauer was educated for a time in Wimbledon), its wealth of concrete examples and its wit make reading it a
unique pleasure.

The starting point for all nineteenth-century German philosophers is the towering figure of Kant. The first sentence of
Book One of Schopenhauer’s 1818 volume (“the main work”) is: “The World is my representation”. This is intended as
a summation of Kant’s “transcendental idealism” according to which the world of space and time is not the “thing in
itself” but rather mere “appearance”. From a metaphysical point of view, the natural world is, as Schopenhauer puts it,
merely a “dream”.

Since transcendental idealism relegates the quotidian world to the realm of appearance, its truth – a given for
Schopenhauer and his contemporaries – raises the exciting question of how reality really is: how it is “in itself”. Kant’s
frustrating answer is that we can never know. Since space, time, causal connectedness and substantiality (thingness)
are the “forms” of the mind that shape all our experience, and since we can never step outside our own minds, reality
in itself can never be known. Together with his fellow “German idealists” Schopenhauer took this claim as a challenge
rather than a dogma. And although, in his maturity, he finally endorses it, he holds that progress can, nonetheless, be
made in digging beneath the manifest surface of things. Although philosophy cannot access the deepest truth about
reality, he finally accepts, it can at least provide a deeper account than that provided either by common sense or by
natural science.

According to this account, as Book Two of the main work tells us, the world which appears “as representation” is to be
understood, at a deeper level, “as will”. This is something disclosed to us, in the first instance, by the consciousness we
have of our own bodily actions. In external perception we are aware of, say, the appearance of an apple followed by
the appearance of a hand reaching towards it. Were this to be our only mode of consciousness, the connection
between the first and second perception would be utterly mysterious. But of course it is not our only mode. The
sequence of events is intelligible to us because inner experience reveals that the reason the second perception follows
the first is the desire to eat. Introspection tells us that what generates our actions is will – feelings, emotions and
desires that culminate in decisions, “acts of will”. Will explains human behaviour and the behaviour of the animals as
well. Even on the so-called inorganic level we find will at work: in, for instance, the conflict between centripetal and
centrifugal forces we find something similar to the conflict between one human will and another.

Schopenhauer’s discovery that the underlying “essence” of life is will is not a happy one. For, as the second of the
Buddha’s “Four Noble Truths” tells us, to will is to suffer. What follows, as the first of the “Truths” tells us, is that life is
suffering, from which Schopenhauer concludes that “it would be better for us not to exist”. He offers two main
arguments in support of the claim that to will is (mostly) to suffer, the first of which I shall call the “competition
argument” and the second the “stress-or-boredom argument”.

The world in which the will – first and foremost the “will to life” – must seek to satisfy itself, the competition argument
observes, is a world of struggle, of “war, all against all” in which only the victor survives. On pain of extinction, the
hawk must feed on the sparrow and the sparrow on the worm. The will to life in one individual has no option but to
destroy the will to life in another. Fifty years before Darwin, Schopenhauer observes that nature’s economy is
conserved through overpopulation: it produces enough antelopes to perpetuate the species but also a surplus to feed
the lions. It follows that fear, pain and death are not isolated malfunctions of a generally benevolent order, but are
inseparable from the means by which the natural ecosystem preserves itself.

It is true that with respect to the human species, civilization has somewhat ameliorated the red-in-tooth-and-claw
savagery of nature. Yet, in essence, human society, too, is an arena of competition. If one political party gains power
another loses it, if one individual gains wealth another is cast into poverty. As the Romans knew, homo homini lupus,
man is a wolf to man: “the chief source of the most serious evils affecting man is man”.

With his “stress-or-boredom” argument, Schopenhauer turns from social life to individual psychology. To live, we
know, is to will. Now either one’s will is satisfied or it is not. If it is unsatisfied one suffers. If the will to eat is
unsatisfied one suffers the pain of hunger; if the libidinal will is unsatisfied one suffers the pain of sexual frustration. If,
on the other hand, the will is satisfied then – after, at best, a moment of fleeting pleasure or joy – we are overcome by
a “fearful emptiness and boredom”. This is particularly visible in the case of sex: as the Romans again knew, post
coitum omne animal triste est: everyone suffers from post-coital tristesse. Hence, life “swings like a pendulum”
between two forms of suffering, lack and boredom.

Book Three of the main work offers a detailed and comprehensive philosophy of art. Its importance for
Schopenhauer’s overall argument lies in its view of art as a brief intimation of the “salvation” that is the topic of Book
Four. Life is suffering. Everyday human consciousness is permeated by both present suffering and anxiety about future
suffering. But in aesthetic consciousness we are, as we indeed say, “taken out of ourselves”. Captivated by the play of
moonlight on gently rippling waves or by a great piece of music, we forget our ordinary will-full selves and hence the
pain and anxiety inseparable from ordinary consciousness. For a moment we achieve that “bliss and peace of mind
always sought but always escaping us on the path of willing”. Briefly, we inhabit the “painless state prized by Epicurus
as the highest good and the state of the gods”. And from this experience we can infer “how blessed must be the life of a
man in whom the will is silenced, not for a brief moment, as in enjoyment of the beautiful, but for ever”.

But of course, since to live is to will, the will can never be entirely silenced in the “life of a man”. While the ascetic and
the thinker may have some success in transferring themselves from the vita activa to the vita contemplativa, as long as
one is alive one can never entirely escape the will. Only in death can the will be silenced “for ever”. And so, Book Four
tells us, only in death can we achieve final release, “salvation”.

But why should we regard death as salvation? Is it not absolute extinction, an abyss of nothingness to which one might
well prefer, for all its pain, life as a human being? One antidote to fear of death is transcendental idealism. Death is
something that happens to the self that exists within the “dream” of natural life. But since the dreamer of a dream
must be outside the dream, idealism assures us of the “indestructibility of our inner nature by death”. Depending on
circumstances, however, indestructibility could turn out to be a curse rather than a blessing. Why should we regard it
as the latter?

One of Schopenhauer’s criticisms of Kant is that he often speaks of “things in themselves”. Such pluralistic talk, says
Schopenhauer, is entirely unwarranted because it is only space and time that provide us with a principium
individuationis: only because we can identify two entities as inhabiting different regions of space-time can we speak of
them as two, as distinct individuals. But according to transcendental idealism, space and time pertain merely to
“appearances”, so it follows from Kant’s own position that reality “in itself” is “beyond plurality”.

Willing, however, requires plurality. At the very least, it requires a distinction between the subject of willing and its
object. Hence to be beyond plurality is to be beyond willing, and so to be released from the anxiety inherent in all will-
full consciousness. In the realm of the non-plural, one inhabits permanently Epicurus’ “highest good”, his “state of the
gods”. This is intuitively grasped by the mystics. The “pantheistic” sense of the gathering of all things into a divine unity
is the theme of all mystical experience. So, for example, Meister Eckhart’s disciple cries out in her ecstasy, “Sir, rejoice
with me for I have become God”. That the mystics come from all times, cultures and religious backgrounds means
that their reports cannot be dismissed as delusional. And if we accept their veracity, we are assured that death really is
salvation.

Schopenhauer’s influence on later nineteenth-century and early twentieth-century artists has been greater than that
of any other philosopher: Tolstoy, Turgenev, Zola, Maupassant, Proust, Hardy, Conrad, Mann, Joyce and Beckett all
admired and were influenced by his work. Subservient to the Christian doctrine of a wholly powerful, benevolent
world-creator, the Western philosophical tradition has been compelled to conclude that we live in the best of all
possible worlds. In Schopenhauer, the artists found a philosopher who, for the first time, revealed how far this was
from the truth. The artist who engaged most deeply with Schopenhauer was Richard Wagner (himself a philosopher of
genuine ability). Originally a socialist-anarchist who narrowly escaped execution for his role in the 1848 Revolution,
Wagner discovered Schopenhauer in the middle of writing the Ring cycle. The result was a work that begins as an
argument in favour of utopian anarchism, and ends by advocating, as Wagner wrote to a friend, “the final negation of
the desire for life”. This, he wrote, is “the only salvation possible . . . freedom from all dreams is the only final
salvation”. Wagner’s ardent disciple, the youthful Friedrich Nietzsche, dedicated his first book, The Birth of Tragedy, to
Wagner and wrote it “in Schopenhauer’s spirit and to his honour”. The mature Nietzsche’s turn against Schopenhauer
and towards “life-affirmation” terminated his friendship with Wagner.

Schopenhauer was, I believe, the first European Buddhist (the first translations of the Hindu and Buddhist texts began
to appear as he was writing the main work). To live, he tells us, is to will, and to will is to participate in the anxious,
exhausting and endless Darwinian struggle in which only the fittest survive. The pleasures of achieving a goal are either
fleeting or non-existent. And once achieved, we must rush on to the next goal in order to escape the ever-present
threat of boredom. Life is a treadmill; the “wheel of Ixion” never stands still. But this, Schopenhauer tells us, is a game
we do not have to play. We can withdraw from the life of willing into a life of contemplation – “mindfulness”, in
current jargon – a withdrawal which, for the enlightened, will complete itself in easeful death. At its deepest level, says
Schopenhauer, his philosophy, like Socrates’, is a “preparation for death”.
