
Enclitics and other fascinations of a cunning linguist

I think the summer before my junior year in college was when I finally realized, with a clarity like the mountain
and the desert air I was soon to be breathing in such quantities, how much I loved to learn. That was the summer
I read Gödel, Escher, Bach and a couple of linguistics books, among many others.

My mind was a wildfire, and I burned through dozens of books.

Gödel, Escher, Bach was written by Douglas Hofstadter, a cognitive scientist who addressed the
question of the origins and limits of thought in an attempt to understand and explain artificial intelligence.
Perhaps the question seems outmoded today in light of the proliferation of computers, but perhaps it’s even more
apropos today than it was in the early 1980s. Computing power has grown at an astonishing, exponential rate
for three decades. The question remains: are these things actually intelligent? Is
what they are doing really thought? What is “thought” anyway? And at what point does emotion evolve out of
what the computers are doing (if at all)? Or is emotion uniquely a form of human, or at least animal, thinking? Is
thinking mechanical? Or could it be called “spiritual?” Is it at least biological?

Kurt Gödel was a mathematician who proved what are now called the “incompleteness theorems.” In short, they
state that no sufficiently powerful “system” can be both complete and consistent. Only a severely limited system can be free of
paradox, and no system can define its own limits. Limits are always defined with reference to something outside
of the system. He proved it mathematically, too. And if this reverberates among the scientists who read this, it
should, for it is eerily similar to the Heisenberg Uncertainty Principle, which established that it is impossible
to know both the momentum and the position of a particle at the same time. Gödel’s proof arrived just a few years
after the Uncertainty Principle.

“Oh, really?” you ask, stifling a yawn. Yes, really. This theorem was actually revolutionary, since it condemned
us to ignorance about the most important questions regarding our existence. It should have ended all non-
theological discussions about the “meaning of life.” But of course it didn’t. We can’t know the meaning of life
because meaning implies an “other” who or which gives life that meaning, and we who are alive are trapped within life.
The incompleteness theorem tells us that we can never escape. We cannot even pierce the veil with our thoughts. Dead
men indeed tell no tales.

Enter M. C. Escher, an artist whose optical illusions are familiar to everyone. What is it that is so compelling about
these optical illusions? Why are they so powerful? Well, they are beautifully executed, of course, with the sharp,
precise lines and intensity of contrast typical of Ansel Adams’ photography or Frank Lloyd Wright’s architecture.
But both of those artists reached beyond, too. Escher’s works are so amazingly compelling because they are optical
illusions.

Let’s take a step back and look at them. They are flat, two-dimensional drawings. That means they have width
and length, as every drawing must, but they lack height. They, like all drawings, are confined to two dimensions
on the page. And yet they strive ferociously to burst into a third dimension. Up until the twentieth century,
almost all “serious” drawings or paintings strove to simulate three-dimensional objects. Artists used elaborate
color theory and drawing techniques (foreshortening most notably) to create the optical illusion of depth in
painting and drawing. Piet Mondrian was one of the first simply to give up that struggle and celebrate two-
dimensionality. That was radical because he ceased trying to draw images from real, three-dimensional life. It
was “abstract.”

Escher’s works, on the other hand, strive more self-consciously for three-dimensionality than anyone else’s. The
struggle to transcend dimensionality is at the heart of his work, and you can see the images reaching into the
third dimension only to be turned back by the page. Into themselves. You can follow the images around and
around, and each time they try to reach outside the page they are rebuffed by paradox.

J. S. Bach’s works, at least the ones of interest to Hofstadter, attempted the same feat musically. A canon is a
melody laid over itself in overlapping, staggered copies, often shifted in pitch; a fugue develops a theme the same
way, with more freedom and elaboration. The piece Hofstadter loved best is the “endlessly rising canon” from the
Musical Offering, which modulates upward through key after key only to land, impossibly, back where it began.
Like the figures in Escher’s prints, the notes of a canon or fugue try to escape the limitations of the scale, only
to be turned back, folded back onto each other. Our ears are not trained to hear the struggle in this, but the
paradox of starting over without ever seeming to go back is central to Bach’s works. Perhaps the effect is so
beautiful because the struggle to transcend our boundaries is so fundamental to the human condition, and we
unwittingly relate to the efforts of the music.
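
For the programmer-minded reader, here is a toy sketch in Python of that structure. The theme, delays, and transpositions are invented, and nothing here is real music theory; it just shows overlapping, pitch-shifted copies of a single theme wrapping around a seven-note scale, the way Escher’s staircases wrap around the page.

```python
# A toy canon: one theme, echoed by voices that enter late and shifted upward.
# Pitches are scale degrees mod 7, so every climb eventually wraps back to
# where it began. (Illustrative only; theme, delays, and transpositions are
# invented, and this is not real music theory.)

theme = [0, 2, 4, 5, 4, 2]  # a little melodic figure, as scale degrees

def voice(theme, delay, transpose):
    """The same theme, entering `delay` beats late and `transpose` steps higher."""
    return [None] * delay + [(p + transpose) % 7 for p in theme]

# Four staggered, rising copies of the one theme.
voices = [voice(theme, delay=2 * i, transpose=2 * i) for i in range(4)]

for beat in range(max(len(v) for v in voices)):
    sounding = [v[beat] for v in voices if beat < len(v) and v[beat] is not None]
    print(f"beat {beat:2}: {sounding}")  # overlapping copies, circling the scale
```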

Bach was a “baroque” composer. “Baroque” comes from a Portuguese word for a misshapen pearl. Unlike a pearl, whose smooth lustre
derives largely from the perfection of its completeness, Bach’s music tried to reach beyond its mathematical
limits.

What is “thought” and where does it come from? These were the questions that Hofstadter addressed so
brilliantly. He was obsessed with the concept of organization. What is the difference between an ant, for
example, as it thoughtlessly zips hither and thither in search of food, and an ant colony, which seems an
altogether more intelligent entity? Ants are driven entirely by momentary chemical impulses, or whatever
insectoid impulses they may have, whereas ant colonies seem to plan, and to act strategically.

The brain is an electrical system made up of a very large number of neurons (brain cells) which periodically
“spark.” “Sparking” means that a neuron emits an electrical impulse. Neurons do this seemingly at random, as
certain chemicals build up and form an electrical charge that must be discharged; their impulses, when combined,
can also trigger other (nonrandom) sparking. A single firing neuron may, or may not, set off a chain reaction, and
scientists have discovered that touching parts of the brain with electrical stimulators can trigger all sorts of
“mental” reactions (like memories or sensations). What is the difference between a brain cell (neuron)
mindlessly sparking electrical impulses at random, and a “mind?”

Computers consist of silicon “chips” which contain (vast numbers of) electrical switches. An electrical switch is
either “on,” completing the electrical circuit, or it is “off.” At its most basic, mechanical level, a computer is a
mind-bogglingly complex system of electrical circuits turning (only) either on or off. And that is all. Computer
code is simply a way to organize the billions of interrelated circuits and relate each on/off switch to its electrical
consequences. At bottom, code is nothing more than a long string of ones and zeros, e.g., “0110100110.” These
merely signify the electrical sequence of events occurring at the computer chip level. Like one neuron sparking
(sending out an electrical impulse) and triggering another.
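
To make that concrete, here is a minimal sketch in Python, purely illustrative, of how bare on/off signals compose into something that behaves like a “sparking” unit. The weights and thresholds are numbers I chose for the demonstration; the unit itself is the classic McCulloch-Pitts neuron, not a model of any real chip or brain.

```python
# A toy "sparking" unit: it sums weighted on/off inputs and fires if a
# threshold is crossed. This is the classic McCulloch-Pitts neuron, with
# demo weights and thresholds; it models no real chip or brain.

def fires(inputs, weights, threshold):
    """Return 1 (a "spark") if the weighted sum of on/off inputs reaches threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# With the right weights and thresholds, such units behave as logic gates...
AND = lambda a, b: fires([a, b], [1, 1], 2)
OR = lambda a, b: fires([a, b], [1, 1], 1)
NOT = lambda a: fires([a], [-1], 0)

# ...and gates compose into circuits. One unit triggering another is the whole game.
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))
```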

How does a series of on/off switches, however complicated, turn into artificial intelligence? For indeed our
computers are nothing more than that. And does the human brain truly consist of nothing more than neurons
firing: on/off? Is the internet the same thing as a bunch of computers relating to each other? Or does it become a
different sort of creature altogether, as an ant colony is different from an ant?

Or change the question slightly. If neurons in sufficient quantity and organization become mind, and electrical
circuits switching on or off in sufficient numbers give us computing power, might it also be true that all of
creation, in its organization and interactions, becomes “god?” Or to speak metaphorically, if you pile enough
Escher prints on top of each other, do they really eventually transcend two-dimensionality? Can they reach
beyond their limits if enough of them keep trying?

We mustn’t forget fractals or quantum mechanics.


For a period in the mid to late 1980s fractals were all the rage. They look like repeating patterns, but they aren’t:
a fractal echoes itself at every scale without ever exactly repeating. Is life “organized” into repeating, resolving
patterns? Or is it “chaotic?” Well, does “pi” ever “round out?” No, it does not. Pi never resolves or repeats, but
instead it spins out infinitely. How weird. And yet how true to our most basic observations, too: a round peg never
truly fits into a square hole. Thought never quite encompasses reality.
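
And it takes astonishingly little machinery. Here is a toy Python sketch of the rule behind the Mandelbrot set, the most famous fractal of that decade; the resolution and cutoffs are arbitrary choices of mine.

```python
# The rule behind the Mandelbrot set: iterate z -> z*z + c and see whether z
# escapes. One line of arithmetic, repeated; the boundary it draws never
# repeats at any scale. (Resolution and limits here are arbitrary choices.)

def escape_time(c, max_iter=30):
    """Count the steps until z -> z*z + c blows up (|z| > 2), starting from z = 0."""
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter  # still bounded: treat the point as "inside" the set

# Crude ASCII rendering: '#' marks points that never escape.
for im in range(12, -13, -2):
    print("".join(
        "#" if escape_time(complex(re / 20, im / 10)) == 30 else " "
        for re in range(-40, 21)
    ))
```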

Einstein hated quantum mechanics. His theory of relativity had, he thought, connected and explained time,
matter and energy. We’ve all heard of “E = mc².” But his theory went very much further than that, and years
before Gödel’s proof it rationalized the actions and interactions of matter, energy and time all the way down to
the particle level. We lived, Einstein believed, in an ordered universe.

Quantum mechanics shattered that vision. I leave it to the reader’s imagination to see the connection to fractals
or pi. According to quantum mechanics, the world operates on “probability.” You can set up experiments to show
that subatomic particles only “sort of” go particular places; if you track one down directly, you change the
experiment. Fire an electron toward a box. Does it go in? Well, yes. And no. Until you look, it is partly in and
partly out, and the only way to look is to force an answer. If you look instead to the results (if it goes in, it will
activate certain markers), things get really freaky. The markers show that it goes in, or stays out, according to
probability. For some purposes it’s in there, and for others it isn’t.

Einstein could not stand it. He insisted that “God does not play dice with the universe” and dedicated the rest of
his life to a so-far vain search for a “unified theory.” But the evidence at this point, overwhelmingly, is that at
the particle level quantum mechanics explains things that cannot be accounted for by the theory of relativity. And
at the experiential level, too. Because what is free will if not a transcendence of the law of cause and effect?
Einstein, a great believer in strict causality, could never reconcile himself to the idea that, in some sense, “free will”
operates all the way down to the particle level. That is, there are gaps in the chain of cause and effect where free
will could operate. Even things refuse to adhere strictly to a set of laws; they can surprise you even if you know
everything there is to know about them. On the other hand, while you may never know what an individual will
do, you can usually figure out what a group will do given enough time. Groups operate according to probability.
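
That last distinction, unknowable individuals and lawful crowds, is easy to demonstrate. Here is a toy Python sketch of the quantum “Born rule,” with made-up amplitudes: an outcome’s probability is the square of its amplitude. No single trial can be predicted; ten thousand together can.

```python
# Unknowable individuals, lawful crowds: a toy Born-rule sketch with made-up
# amplitudes. An outcome's probability is the square of its amplitude.

import random

random.seed(0)
a, b = 0.6, 0.8        # amplitudes for "in" and "out"; 0.36 + 0.64 = 1
p_in = a * a           # Born rule: probability = amplitude squared = 0.36

trials = ["in" if random.random() < p_in else "out" for _ in range(10_000)]
print(trials[:10])                       # any single trial: no telling which
print(trials.count("in") / len(trials))  # the crowd: ~0.36, like clockwork
```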

Neurons fire randomly, but brains operate in an organized fashion.

Linguistics fits into this discussion more naturally than might at first be obvious. Hofstadter himself loved
language and used it with great sophistication in his book. Beyond that, though, there is still this question of how
the brain works and how neurons within the brain do their thing.

To reiterate, neurons fire both spontaneously, for chemical reasons, and in response to their neighbors, for
electrical ones. Here’s how it works. The neurons are close to each other, almost touching. They are being fed
appropriate nutrients, let us hope, and this causes some spontaneous liveliness. When a neuron fires an electric
charge it zaps its neighbor. This often creates an electrical imbalance necessitating another discharge. Thus chain
reactions often occur. But for various reasons they don’t always.

Now it gets a little more esoteric. For some reason, when neurons fire often along the same channels, they line
up in a way that makes it more likely that a smaller charge will trigger a chain reaction. And remember that a
“chain reaction of neuron firings” is... a thought. Neurons “form habits,” you might say. This has led,
incidentally, to some amazing research relating to Alzheimer’s disease, intelligence, and omega-3 oils, among other things.
But in any event, neurons that play together stay together, it turns out. You might say, as many who are interested
in this question do, that neuron chain reactions and their consequences become “habits.” Eventually the reactions
follow so readily that we may say they have become “hard-wired.”
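
Here is a cartoon of the idea in Python, nothing like real neuroscience, with invented numbers throughout. It is essentially Hebb’s rule: every time a link in the chain fires, the link grows stronger, so the whole chain fires more readily the next time.

```python
# A cartoon of "neurons that play together stay together": each time a link
# in a chain fires, its strength grows, so the chain fires more readily next
# time. All numbers are invented; this is Hebb's rule, not neuroscience.

import random

# synapse "strengths" between three hypothetical neurons A -> B -> C
weights = {("A", "B"): 0.2, ("B", "C"): 0.2}

def propagate(chain, weights, learning_rate=0.1):
    """Fire the first neuron, see how far the chain reaction gets,
    and strengthen every link that actually fired (Hebb's rule)."""
    fired = [chain[0]]
    for pre, post in zip(chain, chain[1:]):
        if random.random() < weights[(pre, post)]:  # stronger link, likelier firing
            fired.append(post)
            weights[(pre, post)] += learning_rate * (1 - weights[(pre, post)])
        else:
            break  # the chain reaction fizzles here
    return fired

random.seed(1)
for trial in range(200):
    propagate(["A", "B", "C"], weights)
print(weights)  # the well-worn path now fires far more readily: a "habit"
```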

Note the linguistic irony. Most of the discussion has involved the way artificial intelligence has drawn its images
and language from brain research and “natural” intelligence. Here is an example of brain research and discussion
borrowing a concept and term from artificial intelligence. Although “hardwiring” in artificial intelligence does
not refer to habituated responses, but rather to the placement of a physical conducting agent (“wire”) in the
system to mandate certain electrical behavior, neuron pathways operate in much the same manner. Computer
research has contributed tremendously to understanding the brain.

Hard-wired or habitual, neuron chain reactions become more likely the more often they occur. This can be useful
in the development of good habits, harmful in the development of bad ones, and just weird in the case of
arbitrary associations that become hard-wired. (Think Pavlov’s dogs, or imprinting by baby birds.) But in any
event, the hard-wired chain reactions begin to look less like real choice than mechanistic cause and effect. And
note how intimate it can be. Masquerading as the freest of choices, hard-wired behavior can arise in any
decision-making arena at all. On the other hand, quantum mechanics assures us that even the most seemingly
predetermined events harbor uncertainty.

If you’re going to change some behaviors, you may have to get out the ol’ power tools and go to town. Hence the
popularity, and typical ultimate failure, of electroconvulsive “therapy” (E.C.T.), also known colloquially as
“shock therapy.” E.C.T. blasts a person’s brain with massive quantities of electricity, disarranging the
established neuro-channels and disrupting the hard-wired behaviors. Because a huge amount of random
electricity has blasted through all the neurons, every individual cell has been depolarized and disorganized,
triggering massive chemical and electrical reactions.

Consider an ant colony and remember that the colony is different from any one, or even all, of its component
ants. The colony is the organization of the ants. If you stir up the mound and remove the dominant queen ant,
you have “killed” the colony. True, all the ants are still alive, but you have destroyed the colony by totally
disorganizing it.

Given time the ants will rearrange themselves and become a new colony. And this colony may have features far
different from the previous one’s, even though all of the ants are still there and still functioning in the same
way. The new colony may be far more efficient and “happier.” Or the reverse.

To adapt an analogy from Hofstadter, to the extent a “person” is the product of his or
her neural patterns and organizations, E.C.T. kills the person. And if you have seen someone who has been
through the experience you will not doubt the power and validity of this observation.

But why doesn’t E.C.T. at least work in the way it was intended to work? Why do people who have undergone
the disruption and devastation of E.C.T. so often revert to unhappiness and depression after all? Why can’t they
at least turn into happy ant colonies?

Because no man is an island, that’s why. If you have ever been overweight and embarked upon a successful diet,
you will have a perfectly good idea of how this works. Your boyfriend, who was so tolerant of your previous
condition and then so supportive of your choice to diet, starts talking about how “thin” or “emaciated” you look
after you lose fifteen of a planned fifty pounds. Your friends act weird. Your mother invites you over for dinner
and tries to force-feed you fried chicken and rich desserts. Individual people have expectations. As a society we
have hard-wired thought patterns. Subtle forces from everywhere will act to bring you back in line.

And it’s the same way with the set of behaviors that reveal and encode “depression” in individuals. As weird and
perverted as it may seem, vast social pressures are brought to bear to force the depressed person back into depression.
Each person who helps the shattered E.C.T. patient reassimilate her life helps make sure that the part he or she
knows is returned to its previous condition. Collectively they restore her to paralyzed depression.

Language reflects a society’s hard-wiring. It may start with mere repetition, as people hear certain expressions
and repeat them. Yet it also exerts a normative force just as established neural patterns or helpful friends do.

In traditional English grammar, collective terms are considered “singular,” and that singular is male. Take a group of
twenty women preparing for an excursion. To be grammatically correct, you would say, “everyone packed his
bags for the trip.” This is proper English even if every single individual making up the “everyone” is female.
How do all those women turn into a single male entity, grammatically speaking? See how evocative Hofstadter’s
ant colony analogy is?

French is marginally more realistic; if the reader knows that all the individuals are female, then the collective is
grammatically female as well. But a single “y” chromosome still changes everything.

Remember all the fuss about “politically correct” language?

Presumably there are limits even in English grammar. Consider the sentence, “Everyone is expected to douche
before he engages in sexual activity.” Or, “Man is an adaptive species–the females sometimes rear the children
and sometimes work outside the home.”

Who is in charge of this stuff, anyway?

How does language develop?

Almost all language is metaphor. It is a relation, let us say, between the on/off circuits in our brain and the world
of events we experience around us. Therefore words are a sort of square peg we are trying to shove into round
holes.

As I said above, collectives in English are grammatically masculine. This “sentiment” was famously (or perhaps
“notoriously” is a better word) put by Blackstone, the English jurist, centuries ago. He said that, “in marriage
husband and wife are one, and the husband is that one.” He was referring to a legal concept known as
“coverture.” During the period of a marriage a woman’s legal person was “covered” up by her husband’s (and
almost entirely disappeared). This concept was adopted by American law and survived in important senses well
into the twentieth century.

But consider the “rule of thumb.” This has come to mean a helpful guide, or a benign reference to things that
should or should not be done. A popular account, though etymologists dispute it, traces the phrase to one of the
few rights of women that were not “covered up” by marriage, a right of women against their husbands: no
husband could beat his wife with a stick thicker than his thumb. Violate the rule of thumb and you’re guilty of
wife-beating; stay within it and it’s just healthy “discipline.”

Oliver Wendell Holmes said that, regarding American and English law, “a page of history is worth a volume of
logic.” The same is equally true of language, and particularly English.

It kills me that I cannot remember the name of the book and author that first brought the wonders of linguistics
and the history of language to my fascinated attention. In its way that book brought me the same radical insights
that Hofstadter’s book did.

I had read books about quantum mechanics and relativity, had wrestled with Aristotle, Kierkegaard and Kant,
astrophysics and computer theory. It took Gödel, Escher, Bach to fuse them all together and detonate them in my
mind. Likewise, I had been obsessed with language, had studied several languages and their literatures, and had
read and written vast quantities of English. This book, something like “The Development of Modern
English” or some similarly innocuous title, brought it all into a sort of comprehensive system in my mind.
Suddenly words were not merely symbols, but scars and trophies. I never again thought of language in terms of
mere self-expression. Rather, language is the encoded history of a people. The voice of an ant colony, as it
were.

Do you know why, for example, we say “the strait and the narrow,” or many other phrases which seem
redundant? These days, most people hear “strait” and assume “straight,” and that phrase has gotten a bit lost in
the woods, so to speak. This is perfectly customary for words, which often change for purely phonetic reasons.
But the actual phrase is “strait and narrow” (from the King James Bible’s “strait is the gate, and narrow is the
way”), and the words are synonyms. They mean the same thing, in other words, with one distinction. “Strait” is
a Latinate word coming to us by way of French, and “narrow” is of Anglo-Saxon derivation. English is littered
with such dual expressions, one word of French, the other of Anglo-Saxon origin. Why? Because of the Norman
invasion of 1066 A.D.

The Normans were French, and they invaded England and conquered it beginning in 1066. As it
happened, the Normans were pretty much “live and let live” sorts of people, and while they crushed the armies
and installed French-style government, they left the English alone in most other respects. But the court language
was now French, and everybody who was anybody sprinkled a little French into his speech to impress his
friends. And just to make sure he was understood, he used the Anglo-Saxon word too. This is why you “swear
and affirm” certain statements at law, and why you are “honest and truthful” at home.

Why does a Southerner speak with a different accent than a New Englander? And why do Caribbean and
Australian speakers have such charmingly weird accents? Why do the Québécois speak such vastly different
French from the Parisians?

Language mutates just like genes do. Bear in mind that evolution, or “natural selection,” is not the result of some
sort of self-help for genes, a reaching for excellence. Evolution occurs because of errors in the transmission of
genetic information. Often an error manifests as a form of degeneration which hurts the chances for reproduction
(and the trait therefore tends not to be reproduced), but sometimes the “error” represents a substantial adaptation
to the environment. This provides an advantage that translates into reproduction, and the trait is “selected.” It is
hard-wired into the genetic code. When the environment changes, different traits are selected, for the simple reason
that different traits now confer competitive advantage. How the fish must have mocked the first amphibians! But
who got the last laugh?
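
The mechanism fits in a few lines. Here is a Python cartoon, with an invented word and an invented error rate, nothing like real population genetics: copy a word generation after generation, occasionally miscopying a letter, and watch it drift.

```python
# Drift by copying error: each generation copies the word, occasionally
# miscopying a letter. No one reaches for excellence; the changes are just
# errors that happen to survive. (Word and error rate are invented.)

import random
import string

random.seed(42)

def copy_with_errors(word, error_rate=0.02):
    """Copy a word, replacing each letter with a random one at the error rate."""
    return "".join(
        random.choice(string.ascii_lowercase) if random.random() < error_rate else ch
        for ch in word
    )

word = "narrow"
for generation in range(501):
    if generation % 100 == 0:
        print(f"generation {generation}: {word}")
    word = copy_with_errors(word)
```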

Language is really the same way. It evolves over time either because the most important people have not reached
a consensus on pronunciation (“You say ‘tomayto’ while I say ‘tomahto’; you say ‘potayto’ while I say
‘potahto’”), and so it changes according to which of us has more prestige and influence, or else because of
simple mistakes. Samuel Johnson compiled the first great English dictionary in the eighteenth century, giving an
objective and durable authority on English for the first time, and this slowed the development of the language
considerably. Compare Beowulf to The Canterbury Tales to Benjamin Franklin’s Autobiography for a graphic
demonstration of how important a development dictionaries were.

But even with dictionaries around, language still evolves, and as with genetic evolution, the results are largely
dependent upon the environment. When a people migrate, they change their surroundings radically and
commence an entirely different evolutionary path for their language. All the differences in English and French
accents I mentioned above are simply the result of the King’s English (or French) being transplanted to a new
set of circumstances, and of what kinds of grammatical and pronunciation “errors” the changed circumstances
made more common and acceptable. Why do so many people adopt an exaggerated Southern accent when they
pretend to be stupid or ignorant? Because the South lost the Civil War 150 years ago.

Scars and trophies.



An “enclitic” is a word. Or a type of word, you might say. Enclitics are words that lose their particular
emphasis, but not their integrity, so to speak, when linked to another word: “thee” in “prithee” and “not” in
“cannot” are two examples. If a word cannot begin a sentence, it “leans” on another word that can. Some argue
that the whole word “however” is enclitic, but certainly the “ever” part of it is.

When I first saw the word “enclitic” I thought it was related to “clitoris.” I fashioned elaborate linguistic theories
as to why clitorises could be considered enclitic, metaphorically speaking. I thought of various feminist
explanations and excoriations as to why men might declare a clitoris enclitic. Who was it who said so famously
that “feminism is the cure for all the neuroses Freud identified”?

I finally disabused myself of my linguistic fallacy. What a boner! “Enclitic” comes from a Greek word meaning
“incline,” as in “to lean on.” “Clitoris,” on the other hand, has entirely different Greek roots, coming from the
word kleiein, which means “to shut up.” That’s really too rich, isn’t it? And take a look at Webster’s definition:
“a small organ at the upper part of the vulva, homologous to the penis in the male.” I’ll save you the bet: the
definition of “penis” does not mention the clitoris. So perhaps my essential insight was not so far afield. I’m sure
most women would agree the clitoris and its history deserve more study. How else to become a cunning linguist?


Do you know what a “pathetic fallacy” is? Is it like a really, really big mistake? Or a really stupid or sad one? As
in, “what a pathetic mistake to make, any idiot would know better!”? Carl Sagan wrote a book called “The
Demon-Haunted World.” It’s a wonderful exposition on the way nonscientific thought has expressed itself
through the ages, often of course in witch burnings and the like. But it’s more fundamental than that. Our minds
cry out for understanding, often for human (or quasi-human) feelings or motives that might explain why
something has happened in the world. The pathetic fallacy is to attribute human thought or feeling to a natural
object. As in “the cruel sea” or “pitiless storm” or “the darling buds of May.” It’s a figure of speech, in other
words, coming from “pathos,” the Greek word for suffering or feeling.

But if you changed Sagan’s phrase just slightly, into “the Daemon-Haunted World,” then you have the world I
have inhabited ever since the summer before my junior year in college and that every imaginative person
inhabits to some extent. The world is haunted by daemons–daemons of science and language, love and memory.
I don’t say these daemons give anything an objective meaning or that they intervene in the affairs of people. But
they haunt us just the same. Everywhere you look is a door to some fabulous mystery, and there’s some daemon
shucking and jiving in front of it. Our challenge and delight is to open that door.
