
Literature

From Wikipedia, the free encyclopedia


This article is about (usually written) works. For the card game, see Literature (card game).

[Image: Old book bindings at the Merton College library.]

Literature is the art of written works. Literally translated, the word means "acquaintance with letters"
(from Latin littera letter). In Western culture the most basic written literary types include fiction and non-
fiction.

Contents

• 1 Definitions
• 2 History
o 2.1 Poetry
o 2.2 Prose
 2.2.1 Essays
 2.2.2 Fiction
 2.2.3 Other prose literature
o 2.3 Drama
o 2.4 Oral literature
o 2.5 Other narrative forms
• 3 Genres of literature
• 4 Literary techniques
• 5 Literary criticism
• 6 Legal status
o 6.1 UK
• 7 See also
o 7.1 Lists
o 7.2 Related topics
• 8 Notes

• 9 External links
Definitions


The word "literature" has different meanings depending on who is using it. It could be applied broadly to
mean any symbolic record, encompassing everything from images and sculptures to letters. In a narrower
sense the term could mean only text composed of letters, or other examples of symbolic written
language (Egyptian hieroglyphs, for example). An even narrower interpretation requires that the text have a
physical form, such as paper or some other portable medium, to the exclusion of inscriptions or digital
media. The Muslim scholar and philosopher Imam Ja'far al-Sadiq (702-765 AD) defined literature as
follows: "Literature is the garment which one puts on what he says or writes so that it may appear more
attractive."[1] He added that literature is a slice of life that has been given direction and meaning, an artistic
interpretation of the world according to the percipient's point of view. Frequently, the texts that make up
literature cross over these boundaries. The Russian Formalist Roman Jakobson defined literature as
"organized violence committed on ordinary speech", highlighting literature's deviation from the day-to-
day and conversational structure of words. Illustrated stories, hypertexts, cave paintings and inscribed
monuments have all at one time or another pushed the boundaries of "literature."

People may perceive a difference between "literature" and some popular forms of written work. The terms
"literary fiction" and "literary merit" often serve to distinguish between individual works. For example,
almost all literate people perceive the works of Charles Dickens as "literature," whereas some critics
look down on the works of Jeffrey Archer as unworthy of inclusion under the general heading of
"English literature." Critics may exclude works from the classification "literature" on the grounds, for
example, of a poor standard of grammar and syntax, of an unbelievable or disjointed story-line, or of
inconsistent or unconvincing characters. Genre fiction (for example: romance, crime, or science fiction)
may also be excluded from consideration as "literature."
History
One of the earliest known literary works is the Sumerian Epic of Gilgamesh, an epic poem dated around
2700 B.C., which deals with themes of heroism, friendship, loss, and the quest for eternal life. Different
historical periods have emphasized various characteristics of literature. Early works often had an overt or
covert religious or didactic purpose. Moralizing or prescriptive literature stems from such sources. The
exotic nature of romance flourished from the Middle Ages onwards, whereas the Age of Reason
manufactured nationalistic epics and philosophical tracts. Romanticism emphasized the popular folk
literature and emotive involvement, but gave way in the 19th-century West to a phase of realism and
naturalism, investigations into what is real. The 20th century brought demands for symbolism or
psychological insight in the delineation and development of character.

Poetry

A poem is defined as a composition written in verse (although verse has been equally used for epic and
dramatic fiction). Poems rely heavily on imagery, precise word choice, and metaphor; they may take the
form of measures consisting of patterns of stresses (metric feet) or of patterns of different-length syllables
(as in classical prosody); and they may or may not utilize rhyme. One cannot readily characterize poetry
precisely. Typically though, poetry as a form of literature makes some significant use of the formal
properties of the words it uses – the properties attached to the written or spoken form of the words, rather
than to their meaning. Metre depends on syllables and on rhythms of speech; rhyme and alliteration
depend on words.

Poetry perhaps pre-dates other forms of literature: early known examples include the Sumerian Epic of
Gilgamesh (dated from around 2700 B.C.), parts of the Bible, the surviving works of Homer (the Iliad and
the Odyssey), and the Indian epics Ramayana and Mahabharata. In cultures based primarily on oral
traditions the formal characteristics of poetry often have a mnemonic function, and important texts
(legal, genealogical, or moral, for example) may appear first in verse form.

Some poetry uses specific forms: the haiku, the limerick, or the sonnet, for example. A traditional haiku
written in Japanese must have something to do with nature, contain seventeen onji (syllables), distributed
over three lines in groups of five, seven, and five, and should also have a kigo, a specific word indicating
a season. A limerick has five lines, with a rhyme scheme of AABBA, and line lengths of 3,3,2,2,3
stressed syllables. It traditionally has a less reverent attitude towards nature. Poetry not adhering to a
formal poetic structure is called "free verse".
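The fixed patterns described above lend themselves to a simple check. The sketch below assumes per-line syllable counts are supplied by the reader rather than computed (automatic syllable counting is unreliable); it is an illustration of the rules, not a tool poets actually use:

```python
# Validate a poem's shape against the fixed forms described above.
# Syllable counts must be supplied; this sketch only checks a given sequence.

HAIKU_PATTERN = (5, 7, 5)  # three lines of five, seven, and five syllables

def matches_form(syllables_per_line, pattern=HAIKU_PATTERN):
    """True if the per-line syllable counts match the form's pattern."""
    return tuple(syllables_per_line) == tuple(pattern)

def is_limerick_rhyme(rhyme_labels):
    """A limerick rhymes AABBA: lines 1, 2, 5 share one rhyme; 3 and 4 another."""
    if len(rhyme_labels) != 5:
        return False
    a1, a2, b1, b2, a3 = rhyme_labels
    return a1 == a2 == a3 and b1 == b2 and a1 != b1

print(matches_form([5, 7, 5]))                        # True
print(is_limerick_rhyme(["A", "A", "B", "B", "A"]))   # True
```

A traditional haiku would additionally need the kigo (season word), which no syllable count can capture.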

Language and tradition dictate some poetic norms: Persian poetry always rhymes, Greek poetry rarely
rhymes, Italian or French poetry often does, English and German can go either way (although modern
non-rhyming poetry often, perhaps unfairly, has a more "serious" aura). Perhaps the most paradigmatic
style of English poetry, blank verse, as exemplified in works by Shakespeare and by Milton, consists of
unrhymed iambic pentameters. Some languages prefer longer lines; some shorter ones. Some of these
conventions result from the ease of fitting a specific language's vocabulary and grammar into certain
structures, rather than into others; for example, some languages contain more rhyming words than others,
or typically have longer words. Other structural conventions come about as the result of historical
accidents, where many speakers of a language associate good poetry with a verse form preferred by a
particular skilled or popular poet.

Works for theatre (see below) traditionally took verse form. This has now become rare outside opera and
musicals, although many would argue that the language of drama remains intrinsically poetic.
In recent years, digital poetry has arisen that takes advantage of the artistic, publishing, and synthetic
qualities of digital media.

Prose

Prose consists of writing that does not adhere to any particular formal structures (other than simple
grammar); "non-poetic" writing, perhaps. The term sometimes appears pejoratively, but prosaic writing
simply says something without necessarily trying to say it in a beautiful way, or using beautiful words.
Prose writing can of course take beautiful form; but less by virtue of the formal features of words
(rhymes, alliteration, metre) than by style, placement, or inclusion of graphics. But one need not
mark the distinction precisely, and perhaps cannot do so. One area of overlap is "prose poetry", which
attempts to convey using only prose, the aesthetic richness typical of poetry.

Essays

An essay consists of a discussion of a topic from an author's personal point of view, exemplified by works
by Francis Bacon or by Charles Lamb.

'Essay' in English derives from the French 'essai', meaning 'attempt'. Thus one can find open-ended,
provocative and/or inconclusive essays. The term "essays" first applied to the self-reflective musings of
Michel de Montaigne, and even today he has a reputation as the father of this literary form.

Genres related to the essay may include:

• the memoir, telling the story of an author's life from the author's personal point of view
• the epistle: usually a formal, didactic, or elegant letter.

Fiction

Narrative fiction (narrative prose) generally favours prose for the writing of novels, short stories, graphic
novels, and the like. Singular examples of these exist throughout history, but they did not develop into
systematic and discrete literary forms until relatively recent centuries. Length often serves to categorize
works of prose fiction. Although limits remain somewhat arbitrary, modern publishing conventions
dictate the following:

• A mini saga is a short story of exactly 50 words.
• Flash fiction is generally defined as a piece of prose under a thousand words.
• A short story comprises prose writing of between 1,000 and 20,000 words (but typically more than
5,000 words), which may or may not have a narrative arc.
• A story containing between 20,000 and 50,000 words falls into the novella category.
• A work of fiction containing more than 50,000 words falls squarely into the realm of the novel.
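The conventional thresholds above can be sketched as a simple lookup. The cut-offs below are the approximate figures listed, not a formal standard, and real publishing practice blurs every boundary:

```python
# Classify a work of prose fiction by word count, using the rough
# publishing conventions listed above (boundaries are approximate).

def classify_fiction(word_count):
    if word_count == 50:
        return "mini saga"       # exactly 50 words
    if word_count < 1000:
        return "flash fiction"   # under a thousand words
    if word_count <= 20000:
        return "short story"     # 1,000–20,000 words
    if word_count <= 50000:
        return "novella"         # 20,000–50,000 words
    return "novel"               # more than 50,000 words

print(classify_fiction(50))      # mini saga
print(classify_fiction(7500))    # short story
print(classify_fiction(90000))   # novel
```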

A novel consists simply of a long story written in prose, yet the form developed comparatively recently.
Icelandic prose sagas dating from about the 11th century bridge the gap between traditional national verse
epics and the modern psychological novel. In mainland Europe, the Spaniard Cervantes wrote perhaps the
first influential novel: Don Quixote, the first part of which was published in 1605 and the second in 1615.
Earlier collections of tales, such as the One Thousand and One Nights, Boccaccio's Decameron and
Chaucer's The Canterbury Tales, have comparable forms and would classify as novels if written today.
Other works written in classical Asian and Arabic literature resemble even more strongly the novel as we
now think of it – for example, works such as the Japanese Tale of Genji by Lady Murasaki, the Arabic
Hayy ibn Yaqdhan by Ibn Tufail, the Arabic Theologus Autodidactus by Ibn al-Nafis, and the Chinese
Romance of the Three Kingdoms by Luo Guanzhong.
Early novels in Europe did not, at the time, count as significant literature, perhaps because "mere" prose
writing seemed easy and unimportant. It has become clear, however, that prose writing can provide
aesthetic pleasure without adhering to poetic forms. Additionally, the freedom authors gain in not having
to concern themselves with verse structure often translates into a more complex plot or into one richer in
precise detail than one typically finds even in narrative poetry. This freedom also allows an author to
experiment with many different literary and presentation styles – including poetry – in the scope of a
single novel.

See Ian Watt's The Rise of the Novel.

Other prose literature

Philosophy, history, journalism, and legal and scientific writings traditionally ranked as literature. They
offer some of the oldest prose writings in existence; novels and prose stories earned the name "fiction" to
distinguish them from factual writing, or nonfiction, which writers historically have crafted in prose.

The "literary" nature of science writing has become less pronounced over the last two centuries, as
advances and specialization have made new scientific research inaccessible to most audiences; science
now appears mostly in journals. Scientific works of Euclid, Aristotle, Copernicus, and Newton still
possess great value; but since the science in them has largely become outdated, they no longer serve for
scientific instruction, yet they remain too technical to sit well in most programmes of literary study.
Outside of "history of science" programmes students rarely read such works. Many books "popularizing"
science might still deserve the title "literature"; history will tell.

Philosophy, too, has become an increasingly academic discipline. Its practitioners lament this situation
more than scientists lament theirs; nonetheless most new philosophical work appears in academic
journals. Major philosophers through history – Plato, Aristotle, Augustine, Descartes, Nietzsche – have
become as canonical as any writers. Some recent philosophical works are argued to merit the title
"literature", such as some of the works of Simon Blackburn; but much philosophy does not, and some
areas, such as logic, have become extremely technical, to a degree similar to that of mathematics.

A great deal of historical writing can still rank as literature, particularly the genre known as creative
nonfiction. So can a great deal of journalism, such as literary journalism. However these areas have
become extremely large, and often have a primarily utilitarian purpose: to record data or convey
immediate information. As a result the writing in these fields often lacks literary quality, although in its
better moments it attains that quality. Major "literary" historians include Herodotus, Thucydides
and Procopius, all of whom count as canonical literary figures.

Law offers a less clear case. Some writings of Plato and Aristotle, or even the early parts of the Bible,
might count as legal literature. The law tables of Hammurabi of Babylon might count. Roman civil law as
codified in the Corpus Juris Civilis during the reign of Justinian I of the Byzantine Empire has a
reputation as significant literature. The founding documents of many countries, including the United
States Constitution, can count as literature; however legal writing now rarely exhibits literary merit.

Game design scripts are never seen by the player of a game; they exist only for the developers and/or
publishers, to help them understand, visualize, and maintain consistency while collaborating on a game,
so the audience for these pieces is usually very small. Still, many game scripts contain immersive stories
and detailed worlds, making them a hidden literary genre.

Most of these fields, then, through specialization or proliferation, no longer generally constitute
"literature" in the sense under discussion. They may sometimes count as "literary literature"; more often
they produce what one might call "technical literature" or "professional literature".
Drama

A play or drama offers another classical literary form that has continued to evolve over the years. It
generally comprises chiefly dialogue between characters, and usually aims at dramatic / theatrical
performance (see theatre) rather than at reading. During the eighteenth and nineteenth centuries, opera
developed as a combination of poetry, drama, and music. Nearly all drama took verse form until
comparatively recently. The plays of Shakespeare are considered drama; Romeo and Juliet, for example, is
a classic romantic drama generally accepted as literature.

Greek drama exemplifies the earliest form of drama of which we have substantial knowledge. Tragedy, as
a dramatic genre, developed as a performance associated with religious and civic festivals, typically
enacting or developing upon well-known historical or mythological themes. Tragedies generally presented
very serious themes. With the advent of newer technologies, scripts written for non-stage media have
been added to this form. War of the Worlds (radio) in 1938 saw the advent of literature written for radio
broadcast, and many works of Drama have been adapted for film or television. Conversely, television,
film, and radio literature have been adapted to printed or electronic media.

Oral literature

The term oral literature refers not to written but to oral traditions, which include different types of epic,
poetry and drama, folktales, ballads, legends, jokes, and other genres of folklore. It exists in every society,
whether literate or not. It is generally studied by folklorists, or by scholars committed to cultural studies
and ethnopoetics, including linguists, anthropologists, and even sociologists.

Other narrative forms

• Electronic literature is a literary genre consisting of works which originate in digital environments.
• Films, videos and broadcast soap operas have carved out a niche which often parallels the
functionality of prose fiction.
• Graphic novels and comic books present stories told in a combination of sequential artwork,
dialogue and text.

Genres of literature
A literary genre refers to the traditional divisions of literature of various kinds according to a particular
criterion of writing. See the list of literary genres.

List of literary genres

• Autobiography, Memoir, Spiritual autobiography
• Biography
• Diaries and Journals
• Electronic literature
• Erotic literature
• Slave narrative
• Fiction
o Adventure novel
o Children's literature
o Comic novel
o Crime fiction
 Detective fiction
o Fable, Fairy tale, Folklore
o Fantasy (for more details see Fantasy subgenres; fantasy literature)
o Gothic fiction (initially synonymous with horror)
o Historical fiction
o Horror
o Medical novel
o Mystery fiction
o Philosophical novel
o Political fiction
o Romance novel
 Historical romance
o Saga, Family Saga
o Satire
o Science fiction (for more details see Science fiction genre)
o Thriller
 Conspiracy fiction
 Legal thriller
 Psychological thriller
 Spy fiction/Political thriller
o Tragedy

Literary techniques
Main article: Literary technique

A literary technique or literary device may be used by works of literature in order to produce a specific
effect on the reader. Literary technique is distinguished from literary genre as military tactics are from
military strategy. Thus, though David Copperfield employs satire at certain moments, it belongs to the
genre of comic novel, not that of satire. By contrast, Bleak House employs satire so consistently as to
belong to the genre of satirical novel. In this way, use of a technique can lead to the development of a new
genre, as was the case with one of the first modern novels, Pamela by Samuel Richardson, which by using
the epistolary technique strengthened the tradition of the epistolary novel, a genre which had been
practiced for some time already but without the same acclaim.

Literary criticism
Also see: Literary criticism, Literary history, Literary theory

Literary criticism implies a critique and evaluation of a piece of literature and in some cases is used to
improve a work in progress or classical piece. There are many types of literary criticism and each can be
used to critique a piece in a different way or critique a different aspect of a piece.

Legal status
UK

Literary works have been protected by copyright law from unauthorised reproduction since at least 1710.[2]
Literary works are defined by copyright law to mean any work, other than a dramatic or musical work,
which is written, spoken or sung, and accordingly includes (a) a table or compilation (other than a
database), (b) a computer program, (c) preparatory design material for a computer program, and (d) a
database.

It should be noted that literary works are not limited to works of literature, but include all works
expressed in print or writing (other than dramatic or musical works).[3]

See also
Lists

• List of basic literature topics
• List of authors
• List of books
• List of literary awards
• List of literary terms
• List of prizes, medals, and awards for literature
• List of women writers
• List of writers

Related topics

• Asemic Writing
• Children's literature
• Cultural movement (for literary movements)
• English studies
• Ergodic literature
• Hinman Collator
• History of literature (antiquity – 1800)
• History of modern literature (1800 – )
• Literature basic topics
• Literary criticism
• Literature cycle
• Literary magazine
• Modern Language Association
• Orature
• Postcolonial literature
• Rabbinic literature
• Rhetorical modes
• Scientific literature
• Vernacular literature
• World literature

Notes
Essay
Contents
• 1 Etymology
• 2 The essay as a pedagogical tool
o 2.1 The five-paragraph essay
o 2.2 Academic essays
 2.2.1 Descriptive
 2.2.2 Narrative
 2.2.3 Exemplification
 2.2.4 Comparison and Contrast
 2.2.5 Cause and Effect
 2.2.6 Classification and division
 2.2.7 Definition
 2.2.8 Dialectic
 2.2.9 Other Logical Structures
• 3 Non-literary essays
o 3.1 Visual Arts
o 3.2 Music
o 3.3 Film
o 3.4 Photography
• 4 See also
• 5 References
• 6 Bibliography
• 7 External links

United States Declaration of Independence

United States Declaration of Independence

1823 facsimile of the engrossed copy

Created: June–July 1776
Ratified: July 4, 1776
Location: Engrossed copy: National Archives; original: lost; rough draft: Library of Congress
Authors: Thomas Jefferson et al.
Signers: 56 delegates to the Continental Congress
Purpose: To announce and explain separation from Britain[1]


The United States Declaration of Independence is a statement adopted by the Continental Congress on
July 4, 1776, announcing that the thirteen American colonies then at war with Great Britain were no
longer a part of the British Empire. Written primarily by Thomas Jefferson, the Declaration is a formal
explanation of why Congress had voted on July 2 to declare independence from Great Britain, more than a
year after the outbreak of the American Revolutionary War. The birthday of the United States of America
—Independence Day—is celebrated on July 4, the day the wording of the Declaration was approved by
Congress.

After approving the wording on July 4, Congress issued the Declaration of Independence in several forms.
It was initially published as a printed broadside that was widely distributed and read to the public. The
most famous version of the Declaration, a signed copy that is usually regarded as the Declaration of
Independence, is on display at the National Archives in Washington, D.C. Contrary to popular mythology,
Congress did not sign this document on July 4, 1776; it was created after July 19 and was signed by most
Congressional delegates on August 2.

Philosophically, the Declaration stressed two Lockean themes: individual rights and the right of
revolution. These ideas of the Declaration continued to be widely held by Americans, and had an
influence internationally, in particular on the French Revolution. Abraham Lincoln,
beginning in 1854 as he spoke out against slavery and the Kansas-Nebraska Act,[2] provided a
reinterpretation[3] of the Declaration that stressed that the unalienable rights of “Life, Liberty and the
pursuit of Happiness” were not limited to the white race.[4] "Lincoln and those who shared his conviction"
created a document with “continuing usefulness” with a “capacity to convince and inspire living
Americans.”[5] The invocation by Lincoln in his Gettysburg Address of the Declaration of Independence
defines for many Americans how they interpret[6] Jefferson's famous preamble:

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with
certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.

Contents
• 1 Background
o 1.1 Parliamentary sovereignty
o 1.2 Congress convenes
• 2 Towards independence
o 2.1 Revising instructions
o 2.2 Lee's resolution and the final push
• 3 Draft and adoption
• 4 Text
• 5 Influences
• 6 Signers
o 6.1 Date of signing
o 6.2 List of signers
o 6.3 Signer details
• 7 Publication and effect
• 8 History of the documents
o 8.1 Drafts and Fair Copy
o 8.2 Broadsides
o 8.3 Engrossed copy
o 8.4 Publication outside North America
• 9 Legacy
o 9.1 From the Founding through 1850
o 9.2 Abraham Lincoln and the Civil War era
o 9.3 Subsequent legacy
• 10 See also
• 11 Notes
• 12 References

• 13 External links

Background

Thomas Jefferson, the principal author of the Declaration, argued that Parliament was a foreign legislature
that was unconstitutionally trying to extend its sovereignty into the colonies.

Parliamentary sovereignty
By the time the Declaration of Independence was adopted in July 1776, the Thirteen Colonies and Great
Britain had been at war for more than a year. Relations between the colonies and the parent country had
been deteriorating since the end of the Seven Years' War in 1763. The war had plunged the British
government deep into debt, and so Parliament enacted a series of measures to increase tax revenue from
the colonies. Parliament believed that these acts, such as the Stamp Act of 1765 and the Townshend Acts
of 1767, were a legitimate means of having the colonies pay their fair share of the costs to keep the
colonies in the British Empire.[7]

Many colonists, however, had developed a different conception of the empire. Because the colonies were
not directly represented in Parliament, they argued that Parliament had no right to levy taxes upon them, a
view expressed by the slogan "No taxation without representation". After the Townshend Acts, some
essayists began to question whether Parliament had any legitimate jurisdiction in the colonies at all.[8] By
1774, American writers such as Samuel Adams, James Wilson, and Thomas Jefferson were arguing that
Parliament was the legislature of Great Britain only, and that the colonies, which had their own
legislatures, were connected to the rest of the empire only through their allegiance to the Crown.[9]
Parliament, by contrast, contended that the colonists received "virtual representation."

Congress convenes

The issue of parliamentary sovereignty in the colonies became a crisis after Parliament passed the
Coercive Acts in 1774 to punish the Province of Massachusetts for the Boston Tea Party. Many colonists
saw the Coercive Acts as a violation of the British Constitution and a threat to the liberties of all of British
America. In September 1774, the First Continental Congress convened in Philadelphia to coordinate a
response. Congress organized a boycott of British goods and petitioned the king for repeal of the acts.
These measures were unsuccessful because King George III and his ministers were determined to force
the issue. As the king wrote to Prime Minister Lord North in November 1774, "blows must decide
whether they [the colonies] are to be subject to this country or independent".[10]

Even after fighting in the American Revolutionary War began at Lexington and Concord in April 1775,
most colonists still hoped for reconciliation with Great Britain.[11] When the Second Continental Congress
convened at the Pennsylvania State House in Philadelphia in May 1775, some delegates hoped for
eventual independence, but no one yet advocated declaring it.[12] Although many colonists no longer
believed that Parliament had any sovereignty over them, they still professed loyalty to King George,
who they hoped would intercede on their behalf. They were to be disappointed: in late 1775, the king
rejected Congress's second petition, issued a Proclamation of Rebellion, and announced before Parliament
on October 26 that he was even considering "friendly offers of foreign assistance" to suppress the
rebellion.[13] A pro-American minority in Parliament warned that the government was driving the colonists
towards independence.[14]

Towards independence
In January 1776, just as it became clear in the colonies that the king was not inclined to act as a
conciliator, Thomas Paine's pamphlet Common Sense was published.[15] Paine, who had only recently
arrived in the colonies from England, argued in favor of colonial independence, advocating republicanism
as an alternative to monarchy and hereditary rule.[16] Common Sense introduced no new ideas,[17] and
probably had little direct effect on Congress's thinking about independence; its importance was in
stimulating public debate on a topic that few had previously dared to openly discuss.[18] Public support for
separation from Great Britain steadily increased after the publication of Paine's enormously popular
pamphlet.[19]
The Assembly Room in Philadelphia's Independence Hall, where the Second Continental Congress
adopted the Declaration of Independence.

Although some colonists still held out hope for reconciliation, developments in early 1776 further
strengthened public support for independence. In February 1776, colonists learned of Parliament's passage
of the Prohibitory Act, which established a blockade of American ports and declared American ships to be
enemy vessels. John Adams, a strong supporter of independence, believed that Parliament had effectively
declared American independence before Congress had been able to. Adams labeled the Prohibitory Act
the "Act of Independency", calling it "a compleat Dismemberment of the British Empire".[20] Support for
declaring independence grew even more when it was confirmed that King George had hired German
mercenaries to use against his American subjects.[21]

Despite this growing popular support for independence, Congress lacked the clear authority to declare it.
Delegates had been elected to Congress by thirteen different governments—which included extralegal
conventions, ad hoc committees, and elected assemblies—and were bound by the instructions given to
them. Regardless of their personal opinions, delegates could not vote to declare independence unless their
instructions permitted such an action.[22] Several colonies, in fact, expressly prohibited their delegates
from taking any steps towards separation from Great Britain, while other delegations had instructions that
were ambiguous on the issue.[23] As public sentiment for separation from Great Britain grew, advocates of
independence sought to have the Congressional instructions revised. For Congress to declare
independence, a majority of delegations would need authorization to vote for independence, and at least
one colonial government would need to specifically instruct its delegation to propose a declaration of
independence in Congress. Between April and July 1776, a "complex political war"[24] was waged in order
to bring this about.[25]

Revising instructions

In the campaign to revise Congressional instructions, many Americans formally expressed their support
for separation from Great Britain in what were effectively state and local declarations of independence.
Historian Pauline Maier identified more than ninety such declarations that were issued throughout the
Thirteen Colonies from April to July 1776.[26] These "declarations" took a variety of forms.[27] Some were
formal, written instructions for Congressional delegations, such as the Halifax Resolves of April 12, with
which North Carolina became the first colony to explicitly authorize its delegates to vote for
independence.[28] Others were legislative acts that officially ended British rule in individual colonies, such
as on May 4, when the Rhode Island legislature became the first to declare its independence from Great
Britain.[29] Many "declarations" were resolutions adopted at town or county meetings that offered support
for independence. A few came in the form of jury instructions, such as the statement issued on April 23,
1776, by Chief Justice William Henry Drayton of South Carolina: "the law of the land authorizes me to
declare...that George the Third, King of Great Britain...has no authority over us, and we owe no
obedience to him."[30] Most of these declarations are now obscure, having been overshadowed by the
declaration approved by Congress on July 4.[31]

Some colonies held back from endorsing independence. Resistance was centered in the middle colonies of
New York, New Jersey, Maryland, Pennsylvania, and Delaware.[32] Advocates of independence saw
Pennsylvania as the key: if that colony could be converted to the pro-independence cause, it was believed
that the others would follow.[33] On May 1, however, opponents of independence retained control of the
Pennsylvania Assembly in a special election that had focused on the question of independence.[34] In
response, on May 10 Congress passed a resolution, which had been introduced by John Adams, calling on
colonies without a "government sufficient to the exigencies of their affairs" to adopt new governments.[35]
The resolution passed unanimously, and was even supported by Pennsylvania's John Dickinson, the leader
of the anti-independence faction in Congress, who believed that it did not apply to his colony.[36]

This Day the Congress has passed
the most important Resolution, that
ever was taken in America.
—John Adams, May 15, 1776[37]

As was the custom, Congress appointed a committee to draft a preamble that would explain the purpose of
the resolution. John Adams wrote the preamble, which stated that because King George had rejected
reconciliation and was even hiring foreign mercenaries to use against the colonies, "it is necessary that the
exercise of every kind of authority under the said crown should be totally suppressed".[38] Everyone
understood that Adams's preamble was meant to encourage the overthrow of the governments of
Pennsylvania and Maryland, which were still under proprietary governance.[39] Congress passed the
preamble on May 15 after several days of debate, but four of the middle colonies voted against it, and the
Maryland delegation walked out in protest.[40] Adams regarded his May 15 preamble as effectively an
American declaration of independence, although he knew that a formal declaration would still have to be
made.[41]

Lee's resolution and the final push

On the same day that Congress passed Adams's radical preamble, the Virginia Convention set the stage
for a formal Congressional declaration of independence. On May 15, the Convention passed a resolve
instructing Virginia's congressional delegation "to propose to that respectable body to declare the United
Colonies free and independent States, absolved from all allegiance to, or dependence upon, the Crown or
Parliament of Great Britain".[42] In accordance with those instructions, Richard Henry Lee of Virginia
presented a three-part resolution to Congress on June 7. The motion, which was seconded by John Adams,
called on Congress to declare independence, form foreign alliances, and prepare a plan of colonial
confederation. The part of the resolution relating to declaring independence read:

Resolved, that these United Colonies are, and of right ought to be, free and independent States, that they are
absolved from all allegiance to the British Crown, and that all political connection between them and the State of
Great Britain is, and ought to be, totally dissolved.[43]

The resolution met with resistance in the ensuing debate. Moderate delegates, while conceding that
reconciliation with Great Britain was no longer possible, argued that a resolution of independence was
premature. Further discussion of Lee's resolution was therefore postponed for three weeks.[44] In the
meantime, while support for independence was consolidated, Congress decided that a committee should
prepare a document announcing and explaining independence in the event that the resolution of
independence was approved.

Draft and adoption
On June 11, 1776, Congress appointed a "Committee of Five", consisting of John Adams of
Massachusetts, Benjamin Franklin of Pennsylvania, Thomas Jefferson of Virginia, Robert R. Livingston
of New York, and Roger Sherman of Connecticut, to draft a declaration. Because the committee left no
minutes, there is some uncertainty about how the drafting process proceeded—accounts written many
years later by Jefferson and Adams, although frequently cited, are contradictory and not entirely reliable.[45]
What is certain is that the committee, after discussing the general outline that the document should
follow, decided that Jefferson would write the first draft.[46] Considering Congress's busy schedule,
Jefferson probably had limited time for writing over the next 17 days, and likely wrote the draft quickly.[47]
He then consulted the others, made some changes, and produced another copy incorporating these
alterations. The committee presented this copy to the Congress on June 28, 1776. The title of the
document was "A Declaration by the Representatives of the United States of America, in General
Congress assembled."[48] Congress ordered that the draft "lie on the table".[49]

John Trumbull's famous painting is often identified as a depiction of the signing of the Declaration, but it
actually shows the drafting committee presenting its work to the Congress.[50]

On Monday, July 1, having tabled the draft of the declaration, Congress resolved itself into a committee
of the whole and resumed debate on Lee's resolution of independence.[51] John Dickinson made one last
effort to delay the decision, arguing that Congress should not declare independence without first securing
a foreign alliance and finalizing the Articles of Confederation.[52] John Adams gave a speech in reply to
Dickinson, restating the case for an immediate declaration.

After a long day of speeches, a vote was taken. As always, each colony cast a single vote and the
delegation for each colony—numbering two to seven members—voted amongst themselves to determine
the colony's vote. Pennsylvania and South Carolina voted against declaring independence. The New York
delegation, lacking permission to vote for independence, abstained. Delaware cast no vote because the
delegation was split between Thomas McKean (who voted yes) and George Read (who voted no). The
remaining nine delegations voted in favor of independence, which meant that the resolution had been
approved by the committee of the whole. The next step was for the resolution to be voted upon by the
Congress itself. Edward Rutledge of South Carolina, who was opposed to Lee's resolution but desirous of
unanimity, moved that the vote be postponed until the following day.[53]

On July 2, South Carolina reversed its position and voted for independence. In the Pennsylvania
delegation, Dickinson and Robert Morris abstained, allowing the delegation to vote three-to-two in favor
of independence. The tie in the Delaware delegation was broken by the timely arrival of Caesar Rodney,
who voted for independence. The New York delegation abstained once again, since they were still not
authorized to vote for independence, although they would be allowed to do so by the New York
Provincial Congress a week later.[54] The resolution of independence had been adopted with twelve
affirmative votes and one abstention. With this, the colonies had officially severed political ties with
Great Britain.[55] In a now-famous letter written to his wife on the following day, John Adams predicted
that July 2 would become a great American holiday.[56]

After voting in favor of the resolution of independence, Congress turned its attention to the committee's
draft of the declaration. Over several days of debate, Congress made a few changes in wording and
deleted nearly a fourth of the text, most notably a passage critical of the slave trade, changes that Jefferson
resented. On July 4, 1776, the wording of the Declaration of Independence was approved and sent to the
printer for publication.
Text


The first sentence of the Declaration asserts as a matter of Natural Law the ability of a people to assume
political independence, and acknowledges that the grounds for such independence must be reasonable,
and therefore explicable, and ought to be explained.

When in the Course of human events, it becomes necessary for one people to dissolve the political bands which
have connected them with another, and to assume among the powers of the earth, the separate and equal station to
which the Laws of Nature and of Nature's God entitle them, a decent respect to the opinions of mankind requires
that they should declare the causes which impel them to the separation.

The next section, the famous preamble, includes the ideas and ideals that were principles of the
Declaration. It is also an assertion of what is known as the "right of revolution": that is, people have
certain rights, and when a government violates these rights, the people have the right to "alter or abolish"
that government.[57]

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with
certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness. That to secure these
rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, That
whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to
abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in
such form, as to them shall seem most likely to effect their Safety and Happiness. Prudence, indeed, will dictate that
Governments long established should not be changed for light and transient causes; and accordingly all experience
hath shewn, that mankind are more disposed to suffer, while evils are sufferable, than to right themselves by
abolishing the forms to which they are accustomed. But when a long train of abuses and usurpations, pursuing
invariably the same Object evinces a design to reduce them under absolute Despotism, it is their right, it is their
duty, to throw off such Government, and to provide new Guards for their future security.

The next section is a list of charges against King George which aim to demonstrate that he has violated
the colonists' rights and is therefore unfit to be their ruler:

Such has been the patient sufferance of these Colonies; and such is now the necessity which constrains them to alter
their former Systems of Government. The history of the present King of Great Britain is a history of repeated
injuries and usurpations, all having in direct object the establishment of an absolute Tyranny over these States. To
prove this, let Facts be submitted to a candid world.

He has refused his Assent to Laws, the most wholesome and necessary for the public good.
He has forbidden his Governors to pass Laws of immediate and pressing importance, unless suspended in
their operation till his Assent should be obtained; and when so suspended, he has utterly neglected to attend
to them.
He has refused to pass other Laws for the accommodation of large districts of people, unless those people
would relinquish the right of Representation in the Legislature, a right inestimable to them and formidable
to tyrants only.
He has called together legislative bodies at places unusual, uncomfortable, and distant from the depository
of their public Records, for the sole purpose of fatiguing them into compliance with his measures.
He has dissolved Representative Houses repeatedly, for opposing with manly firmness his invasions on the
rights of the people.
He has refused for a long time, after such dissolutions, to cause others to be elected; whereby the
Legislative powers, incapable of Annihilation, have returned to the People at large for their exercise; the
State remaining in the mean time exposed to all the dangers of invasion from without, and convulsions
within.
He has endeavoured to prevent the population of these States; for that purpose obstructing the Laws for
Naturalization of Foreigners; refusing to pass others to encourage their migrations hither, and raising the
conditions of new Appropriations of Lands.
He has obstructed the Administration of Justice, by refusing his Assent to Laws for establishing Judiciary
powers.
He has made Judges dependent on his Will alone, for the tenure of their offices, and the amount and
payment of their salaries.
He has erected a multitude of New Offices, and sent hither swarms of Officers to harrass our people, and
eat out their substance.
He has kept among us, in times of peace, Standing Armies without the Consent of our legislatures.
He has affected to render the Military independent of and superior to the Civil power.
He has combined with others to subject us to a jurisdiction foreign to our constitution, and unacknowledged
by our laws; giving his Assent to their Acts of pretended Legislation:
For Quartering large bodies of armed troops among us:
For protecting them, by a mock Trial, from punishment for any Murders which they should commit on the
Inhabitants of these States:
For cutting off our Trade with all parts of the world:
For imposing Taxes on us without our Consent:
For depriving us in many cases, of the benefits of Trial by Jury:
For transporting us beyond Seas to be tried for pretended offences:
For abolishing the free System of English Laws in a neighbouring Province, establishing therein an
Arbitrary government, and enlarging its Boundaries so as to render it at once an example and fit instrument
for introducing the same absolute rule into these Colonies:
For taking away our Charters, abolishing our most valuable Laws, and altering fundamentally the Forms of
our Governments:
For suspending our own Legislatures, and declaring themselves invested with power to legislate for us in
all cases whatsoever.
He has abdicated Government here, by declaring us out of his Protection and waging War against us.
He has plundered our seas, ravaged our Coasts, burnt our towns, and destroyed the lives of our people.
He is at this time transporting large Armies of foreign Mercenaries to compleat the works of death,
desolation and tyranny, already begun with circumstances of Cruelty & perfidy scarcely paralleled in the
most barbarous ages, and totally unworthy the Head of a civilized nation.
He has constrained our fellow Citizens taken Captive on the high Seas to bear Arms against their Country,
to become the executioners of their friends and Brethren, or to fall themselves by their Hands.
He has excited domestic insurrections amongst us, and has endeavoured to bring on the inhabitants of our
frontiers, the merciless Indian Savages, whose known rule of warfare, is an undistinguished destruction of
all ages, sexes and conditions.
In every stage of these Oppressions We have Petitioned for Redress in the most humble terms: Our repeated
Petitions have been answered only by repeated injury. A Prince whose character is thus marked by every act which
may define a Tyrant, is unfit to be the ruler of a free people.

Many Americans still felt a kinship with the people of Great Britain, and had appealed in vain to the
prominent among them, as well as to Parliament, to convince the King to relax his more objectionable
policies toward the colonies.[58] The next section represents disappointment that these attempts had been
unsuccessful.

Nor have We been wanting in attentions to our Brittish [sic] brethren. We have warned them from time to time of
attempts by their legislature to extend an unwarrantable jurisdiction over us. We have reminded them of the
circumstances of our emigration and settlement here. We have appealed to their native justice and magnanimity,
and we have conjured them by the ties of our common kindred to disavow these usurpations, which, would
inevitably interrupt our connections and correspondence. They too have been deaf to the voice of justice and of
consanguinity. We must, therefore, acquiesce in the necessity, which denounces our Separation, and hold them, as
we hold the rest of mankind, Enemies in War, in Peace Friends.

In the final section, the signers assert that there exist conditions under which people must change their
government, that the British have produced such conditions, and by necessity the colonies must throw off
political ties with the British Crown and become independent states. The conclusion incorporates
language from the resolution of independence that had been passed on July 2.

We, therefore, the Representatives of the united States of America, in General Congress, Assembled, appealing to
the Supreme Judge of the world for the rectitude of our intentions, do, in the Name, and by Authority of the good
People of these Colonies, solemnly publish and declare, That these United Colonies are, and of Right ought to be
Free and Independent States; that they are Absolved from all Allegiance to the British Crown, and that all political
connection between them and the State of Great Britain, is and ought to be totally dissolved; and that as Free and
Independent States, they have full Power to levy War, conclude Peace, contract Alliances, establish Commerce, and
to do all other Acts and Things which Independent States may of right do. And for the support of this Declaration,
with a firm reliance on the protection of divine Providence, we mutually pledge to each other our Lives, our
Fortunes and our sacred Honor.

Influences

Thomas Jefferson considered English philosopher John Locke (1632–1704) to be one of "the three
greatest men that have ever lived".[59]

Historians have often sought to identify the sources that most influenced the words of the Declaration of
Independence. By Jefferson's own admission, the Declaration contained no original ideas, but was instead
a statement of sentiments widely shared by supporters of the American Revolution. As he explained in
1825:

Neither aiming at originality of principle or sentiment, nor yet copied from any particular and previous writing, it
was intended to be an expression of the American mind, and to give to that expression the proper tone and spirit
called for by the occasion.[60]

Jefferson's most immediate sources were two documents written in June 1776: his own draft of the
preamble of the Constitution of Virginia, and George Mason's draft of the Virginia Declaration of Rights.
Ideas and phrases from both of these documents appear in the Declaration of Independence.[61] They were
in turn directly influenced by the 1689 English Declaration of Rights, which formally ended the reign of
King James II.[62] During the American Revolution, Jefferson and other Americans looked to the English
Declaration of Rights as a model of how to end the reign of an unjust king.[63]

English political theorist John Locke is usually cited as a primary influence on the Declaration. As
historian Carl L. Becker wrote in 1922, "Most Americans had absorbed Locke's works as a kind of
political gospel; and the Declaration, in its form, in its phraseology, follows closely certain sentences in
Locke's second treatise on government."[64] The extent of Locke's influence on the American Revolution
was questioned by some subsequent scholars, however, who emphasized the influence of republicanism
rather than Locke's classical liberalism.[65] Historian Garry Wills argued that Jefferson was influenced by
the Scottish Enlightenment, particularly Francis Hutcheson, rather than Locke,[66] an interpretation that has
been strongly criticized.[67] The Scottish Declaration of Arbroath (1320) and the Dutch Act of Abjuration
(1581) have also been offered as models for Jefferson's Declaration, but these arguments have been
disputed.[68]

Signers

The signed, engrossed copy of the Declaration, now badly faded, is on display at the National Archives in
Washington, DC.

Date of signing

One of the most enduring myths about the Declaration of Independence is that it was signed by Congress
on July 4, 1776.[69] The misconception became established so quickly that, before a decade had passed,
even Thomas Jefferson, Benjamin Franklin, and John Adams believed it.[70] While it is possible that
Congress signed a document on July 4 that has since been lost, historians do not think that this is likely.[71]

The myth may have originated with the Journals of Congress, the official public record of the Continental
Congress. When the proceedings for 1776 were first published in 1777, the entry for July 4, 1776, stated
that the Declaration was "engrossed and signed" on that date, after which followed a list of signers.[72] In
1796, signer Thomas McKean disputed the claim that the Declaration had been signed on July 4, pointing
out that some of the signers had not yet been elected to Congress on that day.[73] Jefferson and Adams
remained unconvinced, however, and cited the published Journal as evidence that they had signed on July
4. McKean's version of the story gained support when the Secret Journals of Congress were published in
1821, but uncertainty remained.[74] In 1884, historian Mellen Chamberlain demonstrated that the entry in
the published Journal was erroneous, and that the famous signed version of the Declaration had been
created after July 4.[75] Historian John Hazelton confirmed in 1906 that many of the signers had not been
present in Congress on July 4, and that the signers had never actually been together as a group.[76]

The actual signing of the Declaration took place after the New York delegation had been given permission
to support independence, which allowed the Declaration to be proclaimed as the unanimous decision of
the thirteen states. On July 19, 1776, Congress ordered a copy of the Declaration to be engrossed
(carefully handwritten) on parchment for the delegates to sign. The engrossed copy, which was probably
produced by Timothy Matlack, clerk to the secretary of Congress, was given the new title of "The unanimous declaration of
the thirteen United States of America".[77] Most of the delegates who signed did so on August 2, 1776,
although some eventual signers were not present and added their names later.
List of signers

Fifty-six delegates eventually signed the Declaration:

President of Congress
1. John Hancock (Massachusetts)

New Hampshire
2. Josiah Bartlett
3. William Whipple
4. Matthew Thornton

Massachusetts
5. Samuel Adams
6. John Adams
7. Robert Treat Paine
8. Elbridge Gerry

Rhode Island
9. Stephen Hopkins
10. William Ellery

Connecticut
11. Roger Sherman
12. Samuel Huntington
13. William Williams
14. Oliver Wolcott

New York
15. William Floyd
16. Philip Livingston
17. Francis Lewis
18. Lewis Morris

New Jersey
19. Richard Stockton
20. John Witherspoon
21. Francis Hopkinson
22. John Hart
23. Abraham Clark

Pennsylvania
24. Robert Morris
25. Benjamin Rush
26. Benjamin Franklin
27. John Morton
28. George Clymer
29. James Smith
30. George Taylor
31. James Wilson
32. George Ross

Delaware
33. George Read
34. Caesar Rodney
35. Thomas McKean

Maryland
36. Samuel Chase
37. William Paca
38. Thomas Stone
39. Charles Carroll of Carrollton

Virginia
40. George Wythe
41. Richard Henry Lee
42. Thomas Jefferson
43. Benjamin Harrison
44. Thomas Nelson, Jr.
45. Francis Lightfoot Lee
46. Carter Braxton

North Carolina
47. William Hooper
48. Joseph Hewes
49. John Penn

South Carolina
50. Edward Rutledge
51. Thomas Heyward, Jr.
52. Thomas Lynch, Jr.
53. Arthur Middleton

Georgia
54. Button Gwinnett
55. Lyman Hall
56. George Walton

Signer details

Of the approximately fifty delegates who are thought to have been present in Congress during the voting
on independence in early July 1776,[78] eight never signed the Declaration: John Alsop, George Clinton,
John Dickinson, Charles Humphreys, Robert R. Livingston, John Rogers, Thomas Willing, and Henry
Wisner.[79] Clinton, Livingston, and Wisner were attending to duties away from Congress when the
signing took place. Willing and Humphreys, who voted against the resolution of independence, were
replaced in the Pennsylvania delegation before the August 2 signing. Rogers had voted for the resolution
of independence but was no longer a delegate on August 2. Alsop, who favored reconciliation with Great
Britain, resigned rather than add his name to the document.[80] Dickinson refused to sign, believing the
Declaration premature, but remained in Congress. Although George Read had voted against the resolution
of independence, he signed the Declaration.

The most famous signature on the engrossed copy is that of John Hancock, who, as President of Congress,
presumably signed first.[81] Hancock's large, flamboyant signature became iconic, and "John Hancock"
emerged in the United States as an informal synonym for "signature".[82] Two future presidents, Thomas
Jefferson and John Adams, were among the signatories. Edward Rutledge (age 26) was the youngest
signer, and Benjamin Franklin (age 70) was the oldest signer.

John Hancock's now-iconic signature on the Declaration is nearly 5 inches (13 cm) long.[83]

Some delegates, such as Samuel Chase and Charles Carroll of Carrollton, were away on business when
the Declaration was debated, but were back in Congress for the signing on August 2. Other delegates were
present when the Declaration was adopted, but were away on August 2 and added their names later,
including Elbridge Gerry, Lewis Morris, Oliver Wolcott, and Thomas McKean. Richard Henry Lee and
George Wythe were in Virginia during July and August, but returned to Congress and signed the
Declaration probably in September and October, respectively.[84]

As new delegates joined the Congress, they were also allowed to sign. Seven men signed the Declaration
who did not become delegates until after July 4: Matthew Thornton, William Williams, Benjamin Rush,
George Clymer, James Smith, George Taylor, and George Ross.[85] Because of a lack of space, Thornton
was unable to place his signature on the top right of the signing area with the other New Hampshire
delegates, and had to place his signature at the end of the document, on the lower right.[86]

The first published version of the Declaration, the Dunlap broadside, was printed before Congress had
signed the Declaration. The public did not learn who had signed the engrossed copy until January 18,
1777, when the Congress ordered that an "authenticated copy", including the names of the signers, be sent
to each of the thirteen states.[87] This copy, the Goddard Broadside, was the first to list the signers.[88]

Various legends about the signing of the Declaration emerged years later, when the document had become
an important national symbol. In one famous story, John Hancock supposedly said that Congress, having
signed the Declaration, must now "all hang together", and Benjamin Franklin replied: "Yes, we must
indeed all hang together, or most assuredly we shall all hang separately." The quote did not appear in print
until more than fifty years after Franklin's death.[89]

Publication and effect
The Dunlap broadside was the first published version of the Declaration.

After Congress approved the final wording of the Declaration on July 4, a handwritten copy was sent a
few blocks away to the printing shop of John Dunlap. Through the night, between 150 and 200 copies
were made, now known as "Dunlap broadsides". Before long, the Declaration was read to audiences and
reprinted in newspapers across the thirteen states. The first official public reading of the document was by
John Nixon in the yard of Independence Hall on July 8; public readings also took place on that day in
Trenton, New Jersey, and Easton, Pennsylvania.

President of Congress John Hancock sent a copy of the Dunlap broadside to General George Washington,
instructing him to have it proclaimed "at the Head of the Army in the way you shall think it most proper".[90]
Washington had the Declaration read to his troops in New York City on July 9, with the British forces
not far away. Washington and Congress hoped the Declaration would inspire the soldiers, and encourage
others to join the army.[91] After hearing the Declaration, crowds in many cities tore down and destroyed
signs or statues representing royalty. An equestrian statue of King George in New York City was pulled
down and the lead used to make musket balls.[92]

History of the documents
Although the document signed by Congress and enshrined in the National Archives is usually regarded as
the Declaration of Independence, historian Julian P. Boyd, editor of Jefferson's papers, argued that the
Declaration of Independence, like Magna Carta, is not a single document. The version signed by Congress
is, according to Boyd, "only the most notable of several copies legitimately entitled to be designated as
official texts".[93] By Boyd's count there were five "official" versions of the Declaration, in addition to
unofficial drafts and copies.

Drafts and Fair Copy

Jefferson preserved a four-page draft that late in life he called the "original Rough draught".[94] Early
students of the Declaration believed that this draft, known to historians as the Rough Draft, was written
by Jefferson alone and then presented to the Committee of Five. Scholars now believe that the Rough
Draft was not actually an "original Rough draught", but was instead a revised version completed by
Jefferson after consultation with the Committee.[95] How many drafts Jefferson wrote prior to this one, and how much of
the text was contributed by other committee members, is unknown. In 1947, Boyd discovered a fragment
in Jefferson's handwriting that predates the Rough Draft. Known as the Composition Draft, this fragment
is the earliest known version of the Declaration.[96]

Jefferson showed the Rough Draft to Adams and Franklin, and perhaps other committee members,[97] who
made a few more changes. Franklin, for example, may have been responsible for changing Jefferson's
original phrase "We hold these truths to be sacred and undeniable" to "We hold these truths to be self-
evident".[98] Jefferson incorporated these changes into a copy that was submitted to Congress in the name
of the Committee. Jefferson kept the Rough Draft and made additional notes on it as Congress revised the
text. He also made several copies of the Rough Draft without the changes made by Congress, which he
sent to friends, including Richard Henry Lee and George Wythe, after July 4. At some point in the
process, Adams also wrote out a copy.[99]

The copy that was submitted to Congress by the Committee on June 28 is known as the Fair Copy.
Presumably, the Fair Copy was marked up by secretary Charles Thomson while Congress debated and
revised the text.[100] This document was the one that Congress approved on July 4, making it the first
"official" copy of the Declaration. The Fair Copy was sent to be printed under the title "A Declaration by
the Representatives of the UNITED STATES OF AMERICA, in General Congress assembled". The Fair
Copy has been lost, and was perhaps destroyed in the printing process.[101] If a document was signed on
July 4, it would have been the Fair Copy, and would likely have been signed only by John Hancock,
president of Congress, and secretary Charles Thomson.[102]

Broadsides

The Goddard Broadside, the first printed version of the Declaration of Independence to include the names
of the signatories.

The Declaration was first published as a broadside printed the night of July 4 by John Dunlap of
Philadelphia. John Hancock's eventually famous signature was not on this document; his name appeared
in type under "Signed by Order and in Behalf of the Congress", with Thomson listed as a witness. It is
unknown exactly how many Dunlap broadsides were originally printed, but the number is estimated at
about 200, of which 25 are known to survive. One broadside was pasted into Congress's journal, making it
what Boyd called the "second official version" of the Declaration.[103] Boyd considered the engrossed copy
to be the third official version, and the Goddard Broadside to be the fourth.
Engrossed copy

The copy of the Declaration that was signed by Congress is known as the engrossed or parchment copy.
Throughout the Revolutionary War, the engrossed copy was moved with the Continental Congress,[104]
which relocated several times to avoid the British army. In 1789, after creation of a new government
under the United States Constitution, the engrossed Declaration was transferred to the custody of the
secretary of state.[104] The document was evacuated to Virginia when the British attacked Washington,
D.C. during the War of 1812.[104]

National Bureau of Standards preserving the engrossed version of the Declaration of Independence in
1951.

After the War of 1812, the symbolic stature of the Declaration steadily increased even as the engrossed
copy was noticeably fading. In 1820, Secretary of State John Quincy Adams commissioned printer
William J. Stone to create an engraving essentially identical to the engrossed copy.[104] Boyd called this
copy the "fifth official version" of the Declaration. Stone's engraving was made using a wet-ink transfer
process, where the surface of the document was moistened, and some of the original ink transferred to the
surface of a copper plate, which was then etched so that copies could be run off the plate on a press. When
Stone finished his engraving in 1823, Congress ordered 200 copies to be printed on parchment.[104]
Because of poor conservation of the engrossed copy through the 19th century, Stone's engraving, rather
than the original, has become the basis of most modern reproductions.[105]

From 1841 to 1876, the engrossed copy was publicly exhibited at the Patent Office building in
Washington, D.C. Exposed to sunlight and variable temperature and humidity, the document faded badly.
In 1876, it was sent to Independence Hall in Philadelphia for exhibit during the Centennial Exposition,
which was held in honor of the Declaration's 100th anniversary, and then returned to Washington the next
year.[104] In 1892, preparations were made for the engrossed copy to be exhibited at the World's
Columbian Exposition in Chicago, but the poor condition of the document led to the cancellation of those
plans and the removal of the document from public exhibition.[104] The document was sealed between two
plates of glass and placed in storage. For nearly thirty years, it was exhibited only on rare occasions at the
discretion of the secretary of state.[106]

The Rotunda for the Charters of Freedom in the National Archives building.

In 1921, custody of the Declaration, along with the United States Constitution, was transferred from the
State Department to the Library of Congress. Funds were appropriated to preserve the documents in a
public exhibit that opened in 1924. After the Japanese attack on Pearl Harbor in 1941, the documents
were moved for safekeeping to the United States Bullion Depository at Fort Knox in Kentucky, where
they were kept until 1944.[107]

For many years, officials at the National Archives believed that they, rather than the Library of Congress,
should have custody of the Declaration and the Constitution. The transfer finally took place in 1952, and
the documents, along with the Bill of Rights, are now on permanent display at the National Archives in
the "Rotunda for the Charters of Freedom". Although encased in helium, by the early 1980s the
documents were threatened by further deterioration. In 2001, using the latest in preservation technology,
conservators treated the documents and re-encased them in encasements made of titanium and aluminum,
filled with inert argon gas.[108] They were put on display again with the opening of the remodeled National
Archives Rotunda in 2003.

Publication outside North America

The Declaration of Independence was first published in full outside North America by the Belfast
Newsletter on the 23rd of August, 1776.[109] A copy of the document was being transported to London via
ship when bad weather forced the vessel to port at Derry. The document was then carried on horseback to
Belfast for the continuation of its voyage to England, whereupon a copy was made for the Belfast
newspaper.[110][111]

Legacy

From the Founding through 1850

Historian Pauline Maier wrote of the legacy of the Declaration of Independence from 1800 on, “The
Declaration was at first forgotten almost entirely, then recalled and celebrated by Jeffersonian
Republicans, and later elevated into something akin to holy writ, which made it a prize worth capturing on
behalf of one cause after another.” Its meaning changed from a justification for revolution in 1776 to a
“moral standard by which day-to-day policies and practices of the nation could be judged.”[112]

In the first fifteen years after its adoption, including the debates over the ratification of the Constitution,
the Declaration was rarely mentioned in the period’s political writings. It was not until the 1790s, as the
Federalists and Jeffersonian Republicans began the bitter debates of the First Party System, that Republicans
praised a Declaration created by Jefferson alone while Federalists argued that it was a collective creation
based on the instructions from the Continental Congress.[113]

The abolitionist movement combined its own interpretation of the Declaration of Independence with
its religious views. Historian Bertram Wyatt-Brown wrote:

The abolitionist movement was primarily religious in its origins, its leadership, its language, and its methods of
reaching the people. While the ideas of a secular Enlightenment played a major role, too, abolitionists tended to
interpret the Declaration of Independence as a theological as well as a political document. They stressed the
spiritual as much as the civil damage done to the slave and the nation. Antislavery sentiment, of course, found its
political expression in the Free Soil, and later the Republican, parties.[114]

Abolitionist leaders Benjamin Lundy and William Lloyd Garrison both adopted the “twin rocks” of “the
Bible and the Declaration of Independence” as the basis for their philosophies. Garrison wrote, “as long as
there remains a single copy of the Declaration of Independence, or of the Bible, in our land, we will not
despair.”[115] Garrison and most other abolitionists like Lewis Tappan saw their role outside the electoral
process with “the broader moral education of the citizenry to be the movement’s most urgent political
task.”[116]

Abraham Lincoln and the Civil War era

In the political arena, Abraham Lincoln, beginning in 1854 as he spoke out against slavery and the
Kansas-Nebraska Act,[117] provided a reinterpretation of the Declaration that stressed that the unalienable
rights of “Life, Liberty and the pursuit of Happiness” were not limited to the white race.[118] In his October
1854 Peoria speech, Lincoln said:

Nearly eighty years ago we began by declaring that all men are created equal; but now from that beginning we have
run down to the other declaration, that for some men to enslave others is a 'sacred right of self-government.' ... Our
republican robe is soiled and trailed in the dust. Let us repurify it. ... Let us re-adopt the Declaration of
Independence, and with it, the practices, and policy, which harmonize with it. ... If we do this, we shall not only
have saved the Union: but we shall have saved it, as to make, and keep it, forever worthy of the saving.[119]

Lincoln accused southerners and Democrats of showing a willingness to "reject, and scout, and spit upon"
the Founders and creating their own reinterpretation of the Declaration in order to exclude blacks.[120]

As the Civil War approached, some Southerners frequently invoked the right of revolution to justify
secession, comparing their grievances to those suffered by the colonists under British rule. Northerners
rejected this line of thought. The New York Times wrote that while the Declaration of Independence was
based on “Natural Rights against Established Institutions”, the Confederate cause was a counterrevolution
“reversing the wheels of progress ... to hurl everything backward into deepest darkness ... despotism and
oppression.”[121]

Southern leaders such as Confederate President Jefferson Davis and the leading publisher James B. D.
DeBow likewise denied that they were revolutionaries. Davis called it “an abuse of language” to equate
secession and revolution; the South had left the Union in order “to save ourselves from a revolution.” The
Republicans and abolitionists were seen as the real revolutionaries because of their intent to attack the
institution of slavery.[122]

In his 1863 Gettysburg Address, Lincoln, referring to the Declaration of Independence, noted: "Four score
and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and
dedicated to the proposition that all men are created equal." Historian Douglas L. Wilson wrote:

But with the victory at Gettysburg, coming almost exactly on the Fourth of July, Lincoln saw something like the
blind hand of fate and determined to look for an opportunity to reinvoke the spirit and emotional response of
Jefferson’s own inspiring words.

Having crafted and condensed his message and adapted it to an occasion ideally suited to a receptive hearing,
Lincoln had maximized his chances for success. Once it gained wide readership, the Gettysburg Address would
gradually become ingrained in the national consciousness. Neither an argument nor an analysis nor a new credo, it
was instead a moving tribute incorporated into an alluring affirmation of the nation’s ideals. “This was the perfect
medium for changing the way most Americans thought about the nation’s founding act,” Garry Wills has written.
“Lincoln does not argue law or history, as Daniel Webster did. He makes history.”[123]

Subsequent legacy

The Declaration has also been influential outside of the United States.[124][vague]

In fiction, the adoption of the Declaration of Independence was dramatized in the 1969 Tony Award-
winning musical play 1776, and the 1972 movie of the same name, as well as in the 2008 television
miniseries John Adams. The engrossed copy of the Declaration is central to the 2004 Hollywood film
National Treasure, in which the main character steals the document because he believes it has secret clues
to a treasure hidden by some of the Founding Fathers of the United States. The Declaration figures
prominently in The Probability Broach, wherein the point of divergence rests in the addition of a single
word to the document, causing it to state that governments "derive their just power from the unanimous
consent of the governed."

See also
• Declaration of Independence
• History of the United States
• Articles of Confederation
• United States Constitution
• United States Bill of Rights

Notes

Freedom (philosophy)
For other uses, see Freedom.
The Statue of Liberty, a popular icon of freedom.

Freedom, or the idea of being free, is a broad concept that has been given numerous interpretations by
philosophies and schools of thought. The protection of interpersonal freedom can be the object of a social
and political investigation, while the metaphysical foundation of inner freedom is a philosophical and
psychological question.

Contents
[hide]
• 1 Etymology
• 2 Forms
• 3 Interpretation
o 3.1 Innate state
o 3.2 Positive and negative freedom
o 3.3 Inner autonomy
• 4 The ontology of freedom
• 5 See also
• 6 References
• 7 Bibliography

• 8 External links

Etymology

Ama-gi, an early human symbol representing freedom in Sumerian cuneiform

The ama-gi, a Sumerian cuneiform word, is the earliest known written symbol representing the idea of
freedom. The English word "freedom" comes from an Indo-European root meaning "to love." Cognates
include the Old High German word for "peace" and the English word "afraid," which derives from a
Vulgar Latin word for breaking the peace.

Forms

Liberty Leading the People, a personification of Liberty.

• Outer or political freedom, or personal liberty, is the absence of outward restraints, for example
with respect to speech, thought, religious practice, and the press; freedom to modify
one's outward circumstances. (See Freedom (political))

• Inner freedom, i.e. the state of being an inwardly autonomous individual capable of exerting free
will or freedom of choice within a given set of outward circumstances.

Interpretation
Innate state
Gandhi promoted political and spiritual freedom through nonviolence.

In philosophy, freedom often ties in with the question of free will. The French philosopher Jean-Jacques
Rousseau asserted that the condition of freedom was inherent to humanity, an inevitable facet of the
possession of a soul and sapience, with the implication that all social interactions subsequent to birth
imply a loss of freedom, voluntarily or involuntarily. He famously declared, "Man is born free, but
everywhere he is in chains." Libertarian philosophers have argued that all human beings are always free
— Jean-Paul Sartre, for instance, famously claimed that humans are "condemned to be free" — because
they always have a choice. Even an external authority can only threaten punishment after an action, not
physically prevent a person from carrying out an action. At the other end of the spectrum, determinism
claims that the future is inevitably determined by prior causes and freedom is an illusion.

Positive and negative freedom

The philosopher Isaiah Berlin drew an important distinction between "freedom from" (negative freedom)
and "freedom to" (positive freedom). For example, freedom from oppression and freedom to develop one's
potential. Both these types of freedom are in fact reflected in the Universal Declaration of Human Rights.

Freedom as the absence of restraint means being free of subjugation, forced submission, or forceful
inequality.[citation needed] Achieving this form of freedom depends on the interplay between an individual (or
group) and their environment: a person in jail, or even one limited by a lack of resources, is free within
their power and environment, but not free to defy reality.
Natural laws restrict this form of freedom; for instance, no one is free to fly (though we may or may not
be free to attempt to do so). Isaiah Berlin appears to call this kind of freedom "negative freedom" — an
absence of obstacles put in the way of action (especially by other people). He distinguishes this from
"positive freedom", which refers to one's power to make choices leading to action.

Inner autonomy
Kierkegaard insists that awareness of one's freedom leads to existential anxiety.

Freedom can also signify inner autonomy, or mastery over one's inner condition. This has several possible
significances:[1]

• the ability to act in accordance with the dictates of reason;
• the ability to act in accordance with one's own true self or values;
• the ability to act in accordance with universal values (such as the True and the Good); and
• the ability to act independently of both the dictates of reason and the urges of desires, i.e.
arbitrarily (autonomously).

Spiritually oriented philosophers in particular have considered freedom to be a positive achievement of
human will rather than an inherent state granted at birth. Rudolf Steiner developed a philosophy of
freedom based upon the development of situationally-sensitive ethical intuitions: "acting in freedom is
acting out of a pure love of the deed as one intuits the moral concept implicit in the deed".[2] Similarly, E.
F. Schumacher held that freedom is an inner condition, and that a human being cannot "have" freedom,
but "can make it his aim to become free".[3] In this sense, freedom may also encompass the peaceful
acceptance of reality. The theological question of freedom generally focuses on reconciling the experience
or reality of inner freedom with the omnipotence of the divine. Freedom has also been used as a rallying cry
for revolution or rebellion.

In Hans Sachs' play Diogenes, the Greek philosopher says to Alexander the Great, whom he believes to be
unfree: "You are my servants' servant". The philosopher states that he has conquered fear, lust, and anger
- and is thus inwardly free - while Alexander still serves these masters - and despite his outward power
has failed to achieve freedom; having conquered the world without, he has not mastered the world within.
The self-mastery Sachs refers to here is dependent upon no one and nothing other than ourselves.

Notable 20th century individuals who have exemplified this form of freedom include Nelson Mandela,
Rabbi Leo Baeck, Gandhi, Lech Wałęsa and Václav Havel.

The ontology of freedom

Freedom appears to be in conflict with scientific determinism. One solution to this is dualistic, suggesting
that if everything material is subject to deterministic causality, then for freedom to exist, it must be of a
fundamentally different nature than material existence.

If, on the other hand, freedom does not exist, then our subjective experience of freedom - and thus our
responsibility for our own actions - is an illusion. Thus, determinism can lead to the claim that nobody is
responsible for anything, and materialism may also put into question concepts of ethics and guilt.

See also
• Freedom (political)
• Anarchism
• Golden Freedom
• Liberty
• Anarchy
• Christian libertarianism
• Parametric determinism
• List of indices of freedom
• Leo Strauss
• Inner peace
• Self-ownership
• Philosophy of Freedom

References
1. ^ Wolf, Susan, Freedom Within Reason
2. ^ Robert McDermott, The Essential Steiner, ISBN 00606553450, p. 43
3. ^ E. F. Schumacher, Guide for the Perplexed, ISBN 0060906111, pp. 29f

Bibliography
• Aristotle, The Nicomachean Ethics, Book III.
• Augustine (Saint), On Free Will.
• Hobbes, Thomas, Of Liberty and Necessity.
• Hume, David, An Enquiry Concerning Human Understanding.
• Mill, John Stuart, On Liberty.
• Plato, The Republic.
• Schiller, Friedrich, Letters upon the Aesthetic Education of Man. ISBN 1-4191-3003-X
• Wolf, Susan, Freedom Within Reason, Oxford: 1990.
• Berlin, Isaiah, Four Essays on Liberty. London: Oxford University Press, 1969.

External links
Wikiquote has a collection of quotations related to: freedom

• Sovereignty and Freedom
• Non-Freedom - an article about the concept of non-freedom (in German), Ich denke, dass ich frei
bin, in Sic et Non
• Free Will article from Catholic Encyclopedia
Philosophy of religion

Philosophy of religion is a branch of philosophy that is concerned with the philosophical study of
religion, including arguments over the nature and existence of God, religious language, miracles, prayer,
the problem of evil, and the relationship between religion and other value-systems such as science and
ethics, among others.[citation needed]
It is sometimes distinguished from "religious philosophy", the philosophical thinking that is inspired and
directed by religion, such as Christian philosophy and Islamic philosophy. Instead, philosophy of religion
is the philosophical thinking about religion, which can be carried out dispassionately by a believer and
non-believer alike.[1]

Contents
[hide]

• 1 Philosophy of religion as a part of metaphysics
• 2 Questions asked in philosophy of religion
• 3 What is God?
o 3.1 Monotheistic definitions
o 3.2 Polytheistic definitions
o 3.3 Pantheistic definitions
• 4 Rationality of belief
o 4.1 Positions
o 4.2 Natural theology
• 5 Major philosophers of religion
• 6 See also
• 7 References
• 8 Further reading

• 9 External links

Philosophy of religion as a part of metaphysics

Aristotle

Philosophy of religion has classically been regarded as a part of metaphysics. In Aristotle's Metaphysics,
he described first causes as one of the subjects of his investigation. For Aristotle, the first cause was the
unmoved mover, which has been read as God, particularly when Aristotle's work became prevalent again
in the Medieval West. This first cause argument later came to be called natural theology by rationalist
philosophers of the seventeenth and eighteenth centuries. In Metaphysics, Aristotle also states that the
word that comes closest to describing the meaning of the word God is 'Understanding.'[citation needed] Today,
philosophers have adopted the term philosophy of religion for the subject, and typically it is regarded as a
separate field of specialization, though it is also still treated by some, particularly Catholic philosophers,
as a part of metaphysics.

To understand the historical relationship between metaphysics and philosophy of religion, remember that
the traditional objects of religious discussion have been very special sorts of entities (such as gods, angels,
supernatural forces, and the like) and events, abilities, or processes (the creation of the universe, the
ability to do or know anything, interaction between humans and gods, and so forth). Metaphysicians (and
ontologists in particular) are characteristically interested in understanding what it is for something to
exist--what it is for something to be an entity, event, ability, process, and so forth. Because many
members of religious traditions believe in things that exist in profoundly different ways from more
everyday things, objects of religious belief both raise special philosophical problems and, as extreme or
limiting cases, invite us to clarify central metaphysical concepts.

However, the philosophy of religion has concerned itself with more than just metaphysical questions. In
fact the subject has long involved important questions in areas such as epistemology, philosophy of
language, philosophical logic, and moral philosophy. See also world view.

Questions asked in philosophy of religion

Kierkegaard

One way to understand the tasks at hand for philosophers of religion is to contrast them with theologians.
Theologians sometimes consider the existence of God as axiomatic, or self-evident. Most theological
treatises seek to justify or support religious claims by two primary epistemic means: rationalization or
intuitive metaphors. A philosopher of religion examines and critiques the epistemological, logical,
aesthetic and ethical foundations inherent in the claims of a religion. Whereas a theologian could
elaborate metaphysically on the nature of God either rationally or experientially, a philosopher of religion
is more interested in asking what may be knowable and opinable with regards to religions' claims.

A philosopher of religion does not ask "What is God?", for such is a complex question in that it assumes
the existence of God and that God has a knowable nature. Instead, a philosopher of religion asks whether
there are sound reasons to think that God does or does not exist.[citation needed]

Still, there are other questions studied in the philosophy of religion. For example: What, if anything,
would give us good reason to believe that a miracle has occurred? What is the relationship between faith
and reason? What is the relationship between morality and religion? What is the status of religious
language? Does petitionary prayer (sometimes still called impetratory prayer) make sense? Are salvo-
lobotomies (lobotomies performed to keep a believer from sinning) moral actions?

What is God?

The question "What is God?" is sometimes also phrased as "What is the meaning of the word God?" Most
philosophers expect some sort of definition as an answer to this question, but they are not content simply
to describe the way the word is used: they want to know the essence of what it means to be God. Western
philosophers typically concern themselves with the God of monotheistic religions (see the nature of God
in Western theology), but discussions also concern themselves with other conceptions of the divine.[original research?]

Indeed, before attempting a definition of a term it is essential to know what sense of the term is to be
defined. In this case, this is particularly important because there are a number of widely different senses
of the word 'God.' So before we try to answer the question "What is God?" by giving a definition, first we
must get clear on which conception of God we are trying to define. Since this article is on "philosophy of
religion" it is important to keep to the canon of this area of philosophy. For whatever reasons, the
Western, monotheistic conception of God (discussed below) has been the primary source of investigation
in philosophy of religion. (One likely reason as to why the Western conception of God is dominant in the
canon of philosophy of religion is that philosophy of religion is primarily an area of analytic philosophy,
which is primarily Western.) Among those people who believe in supernatural beings, some believe there
is just one God (monotheism; see also monotheistic religion), while others, such as Hindus, believe in
many different deities (polytheism; see also polytheistic religion) while maintaining that all are
manifestations of one God. Hindus also have a widely followed monistic philosophy that can be said to be
neither monotheistic nor polytheistic (see Advaita Vedanta). Since Buddhism tends to deal less with
metaphysics and more with ontological (see Ontology) questions, Buddhists generally do not believe in
the existence of a creator God similar to that of the Abrahamic religions, but direct attention to a state
called Nirvana (See also Mu).

Within these two broad categories (monotheism and polytheism) there is a wide variety of possible
beliefs, although there are relatively few popular ways of believing. For example, among the monotheists
there have been those who believe that the one God is like a watchmaker who wound up the universe and
now does not intervene in the universe at all; this view is deism. By contrast, the view that God continues
to be active in the universe is called theism. (Note that 'theism' is here used as a narrow and rather
technical term, not as a broader term as it is below. For full discussion of these distinct meanings, refer to
the article Theism.)

Monotheistic definitions

Augustine

Monotheism is the view that only one God exists (as opposed to multiple gods). In Western thought, God
is traditionally described as a being that possesses at least three necessary properties: omniscience (all-
knowing), omnipotence (all-powerful), and omnibenevolence (supremely good). In other words, God
knows everything, has the power to do anything, and is perfectly good. Many other properties (e.g.,
omnipresence) have been alleged to be necessary properties of a god; however, these are the three most
uncontroversial and dominant in Christian tradition. By contrast, Monism is the view that all is of one
essence, substance, or energy. Monistic theism, a variant of both monism and monotheism, views
God as both immanent and transcendent. Both are dominant themes in Hinduism.

Even once the word "God" is defined in a monotheistic sense, there are still many difficult questions to be
asked about what this means. For example, what does it mean for something to be created? How can
something be "all-powerful"?

Polytheistic definitions

The distinguishing characteristic of polytheism is its belief in more than one god(dess). There can be as
few as two (such as a classical Western understanding of Zoroastrian dualism) or an innumerably large
number, as in Hinduism (as the Western world perceives it). There are many varieties of polytheism; they
all accept that many gods exist, but differ in their responses to that belief. Henotheists for example,
worship only one of the many gods, either because it is held to be more powerful or worthy of worship
than the others (some pseudo-Christian sects take this view of the Trinity, holding that only God the
Father should be worshipped, Jesus and the Holy Spirit being distinct and lesser gods), or because it is
associated with their own group, culture, state, etc. (ancient Judaism is sometimes interpreted in this way).
The distinction isn't a clear one, of course, as most people consider their own culture superior to others,
and this will also apply to their culture's God. Kathenotheists have similar beliefs, but worship a different
god at different times or places. In the Ayyavazhi tradition, for example, all deities are held to unify into
Ayya Vaikundar in the Kali Yukam in order to destroy the Kaliyan.

Pantheistic definitions

Pantheists assert that God is itself the natural universe. The most famous Western pantheist is Baruch
Spinoza; although the precise characterization of his views is complex, his system is often cited as one of
the most internally consistent philosophical systems.

Panentheism holds that the physical universe is part of God, but that God is more than this. While
pantheism can be summed up by "God is the world and the world is God", panentheism can be summed
up as "The world is in God and God is in the world, but God is more than the world and is not
synonymous with the world". However, this might be a result of a misinterpretation of what is meant by
world in pantheism, as many pantheists use "universe" rather than "world" and point out the utter vastness
of the universe and how much of it (temporal causality, alternate dimensions, superstring theory) remains
unknown to humanity. Expressed in this way, with pantheism encompassing such elements rather than
being limited to this particular planet or to human experience, the theory comes somewhat nearer to the
view of panentheists while still maintaining the distinct characteristics of pantheism.[original research?]

Rationality of belief
Main article: Existence of God

Aquinas

Positions

The second question, "Do we have any good reason to think that God does (or does not) exist?", is equally
important in the philosophy of religion. There are five main positions with regard to the existence of God
that one might take:
1. Theism - the belief in the existence of one or more divinities or deities.
2. Pantheism - the belief that God is both immanent and transcendent; God is one and all is God.
3. Deism - the belief that God does exist, but does not interfere with human life and the laws of the
universe.
4. Agnosticism - the belief that the existence or non-existence of deities is currently unknown or
unknowable, or that the existence of a God or of gods cannot be proven.
5. Atheism - the rejection of belief, or absence of belief, in deities.

It is important to note that some of these positions are not mutually exclusive. For example, agnostic
theists choose to believe God exists while asserting that knowledge of God's existence is inherently
unknowable. Similarly, agnostic atheists lack belief in God or choose to believe God does not exist while
also asserting that knowledge of God's existence is inherently unknowable.

Natural theology

The attempt to provide proofs or arguments for the existence of God is one aspect of what is known as
natural theology or the natural theistic project. This strand of natural theology attempts to justify belief in
God on independent grounds. There is plenty of philosophical literature on faith (especially fideism) and
other subjects generally considered to be outside the realm of natural theology. Perhaps most of
philosophy of religion is predicated on natural theology's assumption that the existence of God can be
justified or warranted on rational grounds. There has been considerable philosophical and theological
debate about the kinds of proofs, justifications and arguments that are appropriate for this discourse.[2]

The philosopher Alvin Plantinga has shifted his focus to justifying belief in God (that is, those who
believe in God, for whatever reasons, are rational in doing so) through reformed epistemology, in the
context of a theory of warrant and proper function.

Other reactions to natural theology include the efforts of Wittgensteinian philosophers of religion, most
notably D. Z. Phillips, who died in 2006. Phillips rejects "natural theology" in favor of a
grammatical approach which investigates the meaning of belief in God, as opposed to attempts which aim
at investigating its truth or falsity. For Phillips, the question of whether God exists confuses the logical
categories which govern theistic language with those that govern other forms of discourse. Specifically,
the Wittgensteinian maintains that the nature of religious belief is conceptually distorted by the natural
theologian who takes the religious confession, "God exists," as a propositional statement (akin to a
scientific claim). According to Phillips, the question of whether or not God exists cannot be "objectively"
answered by philosophy because the categories of truth and falsity, which are necessary to make this sort
of discourse possible, have no application in the religious contexts wherein religious belief has its sense
and meaning. Hence, the job of philosophy, according to this approach, is not to investigate the
"rationality" of belief in God but to elucidate its meaning.

[edit] Major philosophers of religion
• Abhinavagupta
• Marilyn McCord Adams
• Robert Adams
• Adi Shankara
• William Alston
• Anselm of Canterbury
• Thomas Aquinas
• Augustine of Hippo
• Averroes (also known as Ibn Rushd)
• Avicenna (also known as Ibn Sina)
• Anicius Manlius Severinus Boethius
• Giordano Bruno
• Joseph Butler
• Gordon Clark
• Samuel Clarke
• Anne Conway
• William Lane Craig
• René Descartes
• Herman Dooyeweerd
• Mircea Eliade
• Desiderius Erasmus
• Al-Farabi (also known as Alpharabius)
• Siddhartha Gautama
• Al-Ghazali (also known as Algazel)
• René Guénon
• Yehuda Halevi
• Charles Hartshorne
• Ibn al-Haytham (also known as Alhazen)
• Heraclitus
• John Hick
• David Hume
• Peter van Inwagen
• Allama Iqbal
• William James
• Immanuel Kant
• Ibn Khaldun
• Ruhollah Khomeini
• Søren Kierkegaard
• Al-Kindi (also known as Alkindus)
• Nishida Kitaro
• Harold Kushner
• Gottfried Leibniz
• C. S. Lewis
• Knud Ejler Løgstrup
• J. L. Mackie
• Madhvacharya
• Maimonides
• Nicolas Malebranche
• Jean-Luc Marion
• Michael Martin
• Herbert McCabe
• Alister E. McGrath
• Milarepa
• Thomas V. Morris
• Mulla Sadra
• Nagarjuna
• Ibn al-Nafis
• Zakir Naik
• Seyyed Hossein Nasr
• Friedrich Nietzsche
• Nishitani Keiji
• William of Ockham
• Rudolf Otto
• William Paley
• Blaise Pascal
• D. Z. Phillips
• Philo of Alexandria
• Alvin Plantinga
• Plotinus
• Pseudo-Dionysius
• Sarvepalli Radhakrishnan
• Ahmad Rafique
• Ramakrishna
• Ramanuja
• Muhammad ibn Zakarīya Rāzi (also known as Rhazes)
• Bertrand Russell
• F.W.J. Schelling
• Frithjof Schuon
• Duns Scotus
• Ninian Smart
• Abdolkarim Soroush
• Baruch Spinoza
• Melville Y. Stewart
• Shahab al-Din Suhrawardi
• Richard Swinburne
• Paul Tillich
• Denys Turner
• Peter Vardy
• Vasubandhu
• Vivekananda
• Keith Ward
• William Whewell
• Nicholas Wolterstorff
• Dogen Zenji

[edit] See also
• Evolutionary origin of religions
• Evolutionary psychology of religion
• Major world religions
• Natural theology
• Psychology of religion
• Religion
• Theodicy
• Theology
• Theories of religion

[edit] References
Action theory
From Wikipedia, the free encyclopedia

(Redirected from Philosophy of action)
Jump to: navigation, search

Action theory is an area in philosophy concerned with theories about the processes causing intentional
(wilful) human bodily movements of a more or less complex kind. This area of thought has attracted the
strong interest of philosophers ever since Aristotle's Nicomachean Ethics (Third Book). Increasingly,
considerations of action theory have been taken up by scholars in the social sciences. With the advent of
psychology and later neuroscience, many theories of action are now subject to empirical testing.

Basic action theory typically describes action as behaviour caused by an agent in a particular situation.
The agent's desires and beliefs (e.g. my wanting a glass of water and believing the clear liquid in the cup
in front of me is water) lead to bodily behavior (e.g. reaching over for the glass). In the simple theory (see
Donald Davidson), the desire and belief jointly cause the action. Michael Bratman has raised problems for
such a view and argued that we should take the concept of intention as basic and not analyzable into
beliefs and desires.

In some theories a desire plus a belief about the means of satisfying that desire are always what is behind
an action. Agents aim, in acting, to maximize the satisfaction of their desires. Such a theory of prospective
rationality underlies much of economics and other social sciences within the more sophisticated
framework of Rational Choice. However, many theories of action argue that rationality extends far
beyond calculating the best means to achieve one's ends. For instance, a belief that I ought to do X, in
some theories, can directly cause me to do X without my having to want to do X (i.e. have a desire to do
X). Rationality, in such theories, also involves responding correctly to the reasons an agent perceives, not
just acting on his wants.

While action theorists generally employ the language of causality in their theories of what the nature of
action is, the issue of what causal determination comes to has been central to controversies about the
nature of free will.

Conceptual discussions also revolve around a precise definition of action in philosophy. Scholars may
disagree on which bodily movements fall under this category, e.g. whether thinking should be analysed as
action, and how complex actions involving several steps to be taken and diverse intended consequences
are to be summarised or decomposed.

[edit] Scholars of action theory

What is left over if I subtract the fact that my arm goes up from the fact that I raise my arm?
—Ludwig Wittgenstein, Philosophical Investigations §621

• Maria Alvarez
• Robert Audi
• G. E. M. Anscombe
• Aristotle
• Jonathan Bennett
• Michael Bratman
• D.G. Brown
• David Charles
• August Cieszkowski
• Arthur Collins
• Jonathan Dancy
• Donald Davidson
• William H. Dray
• Fred Dretske
• John Martin Fischer
• Harry Frankfurt
• Carl Ginet
• Alvin I. Goldman
• Jürgen Habermas
• Hegel
• Carl Hempel
• Rosalind Hursthouse
• David Hume
• Jennifer Hornsby
• John Hyman
• Hans Joas
• Robert Kane
• Anthony Kenny
• Jaegwon Kim
• Kathleen Lennon
• Timothy O'Connor
• Brian O'Shaughnessy
• John McDowell
• A.I. Melden
• Alfred R. Mele
• Ludwig von Mises
• Carlos J. Moya
• Thomas Nagel
• Paul Pietroski
• Joseph Raz
• Thomas Reid
• David-Hillel Ruben
• Constantine Sandis
• G.F. Schueler
• John Searle
• Scott Sehon
• Wilfrid Sellars
• Kieran Setiya
• Michael Smith
• Ralf Stoecker
• Rowland Stout
• Frederick Stoutland
• Galen Strawson
• Charles Taylor
• Richard Taylor
• Irving Thalberg
• Michael Thompson
• Judith Jarvis Thomson
• Raimo Tuomela
• David Velleman
• Candace Vogler
• R. Jay Wallace
• Gary Watson
• George Wilson
• Georg Henrik von Wright
• Ludwig Wittgenstein
• Max Weber

[edit] See also
• Praxeology

Moon
This article is about Earth's moon. For moons in general, see Natural satellite. For other uses, see Moon
(disambiguation).

The Moon (Latin: Luna) is Earth's only natural satellite and the fifth largest natural satellite in the Solar
System.
The average centre-to-centre distance from the Earth to the Moon is 384,403 km, about thirty times the
diameter of the Earth. The Moon's diameter is 3,474 km,[6] a little more than a quarter that of the Earth.
Thus, the Moon's volume is about 2 percent that of Earth; the pull of gravity at its surface is about 17
percent that of the Earth. The Moon makes a complete orbit around the Earth every 27.3 days (the orbital
period), and the periodic variations in the geometry of the Earth–Moon–Sun system are responsible for
the lunar phases that repeat every 29.5 days (the synodic period).
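The ratios quoted above are easy to verify. A quick sketch (the mean Earth diameter of 12,742 km is an assumed round value, not given in the text):

```python
# Sanity-check of the figures quoted above (illustrative values, not
# authoritative constants).
EARTH_MOON_DISTANCE_KM = 384_403
EARTH_DIAMETER_KM = 12_742   # mean Earth diameter (assumed round value)
MOON_DIAMETER_KM = 3_474

# Distance is "about thirty times the diameter of the Earth".
distance_in_earth_diameters = EARTH_MOON_DISTANCE_KM / EARTH_DIAMETER_KM

# Diameter ratio is "a little more than a quarter"; volume scales with
# the cube of the diameter, giving "about 2 percent".
diameter_ratio = MOON_DIAMETER_KM / EARTH_DIAMETER_KM
volume_ratio = diameter_ratio ** 3

print(round(distance_in_earth_diameters, 1))  # ≈ 30.2
print(round(diameter_ratio, 3))               # ≈ 0.273
print(round(volume_ratio, 3))                 # ≈ 0.020
```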

The Moon is the only celestial body to which humans have travelled and upon which humans have
landed. The first artificial object to escape Earth's gravity and pass near the Moon was the Soviet Union's
Luna 1, the first artificial object to impact the lunar surface was Luna 2, and the first photographs of the
normally occluded far side of the Moon were made by Luna 3, all in 1959. The first spacecraft to perform
a successful lunar soft landing was Luna 9, and the first unmanned vehicle to orbit the Moon was Luna
10, both in 1966.[6] The United States (U.S.) Apollo program achieved the only manned missions to date,
resulting in six landings between 1969 and 1972. Human exploration of the Moon ceased with the
conclusion of the Apollo program, although several countries have either sent or announced plans to send
people and/or robotic spacecraft to the Moon.
Sun
The Sun (Latin: Sol) is the star at the center of the Solar System. The Earth and other matter (including
other planets, asteroids, meteoroids, comets, and dust) orbit the Sun,[9] which by itself accounts for about
99.8% of the Solar System's mass. Energy from the Sun, in the form of sunlight, supports almost all life
on Earth via photosynthesis, and drives the Earth's climate and weather.

The surface of the Sun consists of hydrogen (about 74% of its mass, or 92% of its volume), helium (about
24% of mass, 7% of volume), and trace quantities of other elements, including iron, nickel, oxygen,
silicon, sulfur, magnesium, carbon, neon, calcium, and chromium.[10] The Sun has a spectral class of G2V.
G2 means that it has a surface temperature of approximately 5,780 K (5,500 °C), giving it a white color that
often, because of atmospheric scattering, appears yellow when seen from the surface of the Earth. This is
a subtractive effect, as the preferential scattering of shorter wavelength light removes enough violet and
blue light, leaving a range of frequencies that is perceived by the human eye as yellow. It is this scattering
of light at the blue end of the spectrum that gives the surrounding sky its color. When the Sun is low in
the sky, even more light is scattered so that the Sun appears orange or even red.[11]
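The "subtractive effect" described above follows from Rayleigh scattering, whose intensity scales as the inverse fourth power of wavelength. This scaling law is a standard result assumed here, and the representative wavelengths below are illustrative choices:

```python
# Rough illustration of why short wavelengths scatter out first.
# Rayleigh scattering intensity scales as 1/wavelength**4 (standard
# result; assumed here, not derived in the article).
BLUE_NM = 450   # representative blue wavelength (assumption)
RED_NM = 650    # representative red wavelength (assumption)

blue_vs_red = (RED_NM / BLUE_NM) ** 4  # how much more blue scatters

print(round(blue_vs_red, 1))  # ≈ 4.4: blue light scatters roughly 4x more
```

The same factor explains both halves of the paragraph: scattered blue light colors the sky, while the transmitted remainder shifts the Sun's apparent color toward yellow, and toward red along the longer path near the horizon.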

The Sun's spectrum contains lines of ionized and neutral metals as well as very weak hydrogen lines. The
V (Roman five) in the spectral class indicates that the Sun, like most stars, is a main sequence star. This
means that it generates its energy by nuclear fusion of hydrogen nuclei into helium. There are more than
100 million G2 class stars in our galaxy. Once regarded as a small and relatively insignificant star, the
Sun is now known to be brighter than 85% of the stars in the galaxy, most of which are red dwarfs.[12]

The Sun orbits the center of the Milky Way galaxy at a distance of approximately 26,000 to 27,000 light-
years from the galactic center, moving generally in the direction of Cygnus and completing one revolution
in about 225–250 million years (one Galactic year). Its orbital speed was thought to be 220±20 km/s, but a
new estimate gives 251 km/s[13]. This is equivalent to about one light-year every 1,190 years, and about
one AU every 7 days. These measurements of galactic distance and speed are as accurate as we can get
given our current knowledge, but may change as we learn more.[14] Since our galaxy is moving with
respect to the cosmic microwave background radiation (CMB) in the direction of Hydra with a speed of
550 km/s, the sun's resultant velocity with respect to the CMB is about 370 km/s in the direction of Crater
or Leo.[15]
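The quoted rates follow directly from the 251 km/s figure; a quick check (the light-year, AU, and year lengths are standard approximate constants, assumed here):

```python
# Check the unit conversions quoted above for the Sun's galactic orbit.
# Constants are standard approximate values (assumed, not from the text).
SPEED_KM_S = 251                  # newer orbital-speed estimate
LIGHT_YEAR_KM = 9.4607e12
AU_KM = 1.496e8
SECONDS_PER_YEAR = 3.156e7        # approximately one Julian year
SECONDS_PER_DAY = 86_400

years_per_light_year = LIGHT_YEAR_KM / SPEED_KM_S / SECONDS_PER_YEAR
days_per_au = AU_KM / SPEED_KM_S / SECONDS_PER_DAY

print(round(years_per_light_year))  # ≈ 1194 ("one light-year every 1,190 years")
print(round(days_per_au, 1))        # ≈ 6.9 ("about one AU every 7 days")
```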

The Sun is currently traveling through the Local Interstellar Cloud in the low-density Local Bubble zone
of diffuse high-temperature gas, in the inner rim of the Orion Arm of the Milky Way Galaxy, between the
larger Perseus and Sagittarius arms of the galaxy. Of the 50 nearest stellar systems within 17 light-years
(1.6×10¹⁴ km) from the Earth, the Sun ranks 4th in absolute magnitude as a fourth magnitude star
(M=4.83).
Walking
From Wikipedia, the free encyclopedia

(Redirected from Walk)
Jump to: navigation, search
For other uses, see Walking (disambiguation).
Contents

• 1 Biomechanics
• 2 As a leisure activity
• 3 As transportation
• 4 In robotics
• 5 See also
• 6 Notes
• 7 References

• 8 External links

Walking (also called ambulation) is the main form of animal locomotion on land, distinguished from
running and crawling.[1][2] When carried out in shallow waters, it is usually described as wading and when
performed over a steeply rising object or an obstacle it becomes scrambling or climbing. The word walk is
descended from the Old English wealcan "to roll".
Walking is generally distinguished from running in that only one foot at a time leaves contact with the
ground: for humans and other bipeds running begins when both feet are off the ground with each step.
(This distinction has the status of a formal requirement in competitive walking events, resulting in
disqualification at the Olympic level.) For horses and other quadrupedal species, the running gaits may be
numerous, and walking keeps three feet at a time on the ground.

The average human child achieves independent walking ability around 11 months old.[3]

While not strictly bipedal, several primarily bipedal human gaits (where the long bones of the arms
support at most a small fraction of the body's weight) are generally regarded as variants of walking. These
include:

• Hand walking; an unusual form of locomotion, in which the walker moves primarily using their
hands.
• Walking on crutches, usually executed by alternating between standing on both legs and rocking
forward "on the crutches" (i.e., supported under the armpits by them);
• Walking with one or two walking stick(s) or trekking poles (reducing the load on one or both legs,
or supplementing the body's normal balancing mechanisms by also pushing against the ground
through at least one arm that holds a long object);
• Walking while holding on to a walker, a framework to aid with balance; and
• Scrambling, using the arms (and hands or some other extension to the arms) not just as a backup to
normal balance, but, as when walking on talus, to achieve states of balance that would be
impossible or unstable when supported solely by the legs;

For humans, walking is the main form of transportation without a vehicle or riding animal. An average
walking speed is about 4 to 5 km/h (2 to 3 mph), although this depends heavily on factors such as height,
weight, age and terrain.[4][5] A pedestrian is a person who is walking on a road, sidewalk or path.

[edit] Biomechanics
Human walking is accomplished with a strategy called the double pendulum. During forward motion, the
leg that leaves the ground swings forward from the hip. This sweep is the first pendulum. Then the leg
strikes the ground with the heel and rolls through to the toe in a motion described as an inverted
pendulum. The motion of the two legs is coordinated so that one foot or the other is always in contact
with the ground. The process of walking recovers approximately sixty per cent of the energy used due to
pendulum dynamics and ground reaction force.[6][7]

Walking differs from a running gait in a number of ways. The most obvious is that during walking one leg
always stays on the ground while the other is swinging. In running there is typically a ballistic phase
where the runner is airborne with both feet off the ground (for bipeds).

Another difference concerns the movement of the center of mass of the body. In walking the body 'vaults'
over the leg on the ground, raising the center of mass to its highest point as the leg passes the vertical, and
dropping it to the lowest as the legs are spread apart. Essentially kinetic energy of forward motion is
constantly being traded for a rise in potential energy. This is reversed in running where the center of mass
is at its lowest as the leg is vertical. This is because the impact of landing from the ballistic phase is
absorbed by bending the leg and consequently storing energy in muscles and tendons. In running there is
a conversion between kinetic, potential, and elastic energy.

There is an absolute limit on an individual's speed of walking (without special techniques such as those
employed in speed walking) due to the velocity at which the center of mass rises or falls - if it's greater
than the acceleration due to gravity the person will become airborne as they vault over the leg on the
ground. Typically however, animals switch to a run at a lower speed than this due to energy efficiencies.
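The airborne threshold described above can be made quantitative with the standard inverted-pendulum (Froude number) argument: the centripetal acceleration v²/L of the body vaulting over the stance leg cannot exceed g. The leg length below is an assumed illustrative value:

```python
import math

# Inverted-pendulum limit on walking speed: the body vaulting over the
# stance leg needs centripetal acceleration v**2 / leg_length, which
# gravity can supply only up to g. Beyond v = sqrt(g * leg_length) the
# walker becomes airborne. (Standard Froude-number argument; the leg
# length is an assumed illustrative value.)
G = 9.81            # m/s^2
LEG_LENGTH_M = 0.9  # typical adult leg length (assumption)

v_max = math.sqrt(G * LEG_LENGTH_M)  # m/s
print(round(v_max, 2))               # ≈ 2.97 m/s
print(round(v_max * 3.6, 1))         # ≈ 10.7 km/h
```

Consistent with the text, the walk-to-run transition actually observed in humans occurs well below this ceiling, around 7 to 8 km/h, because running becomes energetically cheaper first.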

[edit] As a leisure activity

Race walking

Many people walk as a hobby, and in our post-industrial age it is often enjoyed as one of the best forms of
exercise.[8]

Fitness walkers and others may use a pedometer to count their steps. The types of walking include
bushwalking, racewalking, weight-walking, hillwalking, volksmarching, Nordic walking and hiking on
long-distance paths. Sometimes people prefer to walk indoors using a treadmill. In some countries
walking as a hobby is known as hiking (the typical North American term), rambling (a somewhat dated
British expression, but remaining in use because it is enshrined in the title of the important Ramblers'
Association), or tramping. Hiking is a subtype of walking, generally used to mean walking in nature areas
on specially designated routes or trails, as opposed to in urban environments; however, hiking can also
refer to any long-distance walk. More obscure terms for walking include "to go by Marrow-bone stage",
"to take one's daily constitutional", "to ride Shank's pony", "to ride Shank's mare", or "to go by Walker's
bus." Among search and rescue responders, those responders who walk (rather than ride, drive, fly, climb,
or sit in a communications trailer) often are known as "ground pounders".[9][10]

The Walking the Way to Health Initiative[1] is the largest volunteer-led walking scheme in the United
Kingdom. Volunteers are trained to lead free Health Walks from community venues such as libraries and
GP surgeries. The scheme has trained over 35,000 volunteers and has over 500 schemes operating across
the UK, with thousands of people walking every week.

Professionals working to increase the number of people who walk usually come from six sectors:
health, transport, environment, schools, sport & recreation, and urban design. A new organization called
Walk England[2] launched a web site on 18 June 2008 to provide these professionals with evidence,
advice and examples of success stories of how to encourage communities to walk more. The site has a
social networking aspect to allow professionals and the public to ask questions, discuss, post news and
events and communicate with others in their area about walking, as well as a 'walk now' option to find out
what walks are available in each region.

The world's largest registration walking event is the International Four Days Nijmegen. The annual Labor
Day walk on Mackinac Bridge draws over sixty thousand participants. The Chesapeake Bay Bridge walk
annually draws over fifty thousand participants. Walks are often organized as charity events with walkers
seeking sponsors to raise money for a specific cause. Charity walks range in length from two-mile (3 km)
or five-km walks to as far as fifty miles (eighty km). The MS Challenge Walk is an example of a fifty-mile
walk which raises money to fight multiple sclerosis. The Oxfam Trailwalker is a one-hundred-km event.
Sheep walking along a road

In Britain, the Ramblers' Association is the biggest organization that looks after the interests of walkers. A
registered charity, it has 139,000 members. Regular, brisk cycling or walking can improve confidence,
stamina, energy, weight control, life expectancy and reduce stress. It can also reduce the risk of coronary
heart disease, strokes, diabetes, high blood pressure, bowel cancer and osteoporosis. Modern scientific
studies have shown that walking, besides its physical benefits, is also beneficial for the mind, improving
memory skills, learning ability, concentration and abstract reasoning, as well as reducing stress and
uplifting one's spirits.

[edit] As transportation
Walking is the most basic and common mode of transportation, is recommended for a healthy lifestyle,
and has numerous environmental benefits.[11] However, people are walking less in the UK: a
Department of Transport report[3] found that between 1995/97 and 2005 the average number of walk trips
per person fell by 16%, from 292 to 245 per year. Many professionals in local authorities and the NHS are
employed to halt this decline by ensuring that the built environment allows people to walk and that there
are walking opportunities available to them.

In Europe Walk21[4] launched an 'International Charter for Walking' to help refocus existing policies,
activities and relationships to create a culture where people choose to walk.

"Walking is convenient, it needs no special equipment, is self-regulating and inherently safe. Walking is as natural
as breathing". John Butcher, Founder Walk21, 1999

There has been a recent focus among urban planners in some communities to create pedestrian-friendly
areas and roads, allowing commuting, shopping and recreation to be done on foot. Some communities are
at least partially car-free, making them particularly supportive of walking and other modes of
transportation. In the United States, the Active Living network is an example of a concerted effort to
develop communities more friendly to walking and other physical activities. Walk England[5] is an
example of a similar movement.

Walking is also considered to be a clear example of a sustainable mode of transport, especially suited for
urban use and/or relatively shorter distances. Non Motorised Transport modes such as walking, but also
cycling, small-wheeled transport (skates, skateboards, push scooters and hand carts) or wheelchair travel
are often key elements of successfully encouraging clean urban transport.[12] A large variety of case
studies and good practices (from European cities and some world-wide examples) that promote and
stimulate walking as a means of transportation in cities can be found at Eltis, Europe's portal for local
transport.[13]

However, some studies indicate that walking is more harmful to the environment than car travel. This is
because more energy is expended in growing and providing the food necessary to regain the calories
burned by walking compared to the energy used in the operation of a car. These studies have been
criticised for using inefficient food sources (i.e. those that use large amounts of energy to produce) such
as milk or meat to skew the results.[14]

On roads with no sidewalks, pedestrians should always walk facing the oncoming traffic, for their own
and other people's safety.

When distances are too great to be convenient, walking can be combined with other modes of
transportation, such as cycling, public transport, car sharing, carpooling, hitchhiking, ride sharing, car
rentals and taxis. These methods may be more efficient or desirable than private car ownership, being a
healthy means of physical exercise.

The development of specific rights of way with appropriate infrastructure can promote increased
participation and enjoyment of walking. Examples of types of investment include malls, and
foreshoreways such as oceanways and riverwalks.

[edit] In robotics
Main article: Robot locomotion

The first successful attempts at walking robots tended to have 6 legs. The number of legs was reduced as
microprocessor technology advanced, and there are now a number of robots that can walk on 2 legs, albeit
not nearly as well as a human being.

[edit] See also
• Footpath
• Hiking
• Hillwalking
• List of long-distance footpaths
• List of U.S. cities with most pedestrian commuters
• Nordic walking
• Outdoor education
• Pedestrian-friendly
• Pedometers
• Power Walking
• Racewalking
• Sidewalk
• Sustainable transport
• Terrestrial locomotion in animals
• Trail
• Walking fish
• Walking in the United Kingdom
• Walking stick
• Flâneur

[edit] Notes
Learning
From Wikipedia, the free encyclopedia

(Redirected from Learn)
Jump to: navigation, search
For the 2004 indie album, see Early Recordings: Chapter 2: Learning.
"Learn" redirects here. For other uses, see Learn (disambiguation).
"Learned" redirects here. For other uses, see Learned (disambiguation).
"Learner" redirects here. For the fictional character, see Dean Learner.

In the fields of neuropsychology, personal development and education, learning is one of the most
important mental functions of humans, animals and artificial cognitive systems. It relies on the acquisition
of different types of knowledge supported by perceived information. It leads to the development of new
capacities, skills, values, understanding, and preferences. Its goal is the increase of individual and group
experience. Learning functions can be performed by different brain learning processes, which depend on
the mental capacities of the learning subject, the type of knowledge which has to be acquired, as well as
on socio-cognitive and environmental circumstances[1].

Learning ranges from simple forms of learning such as habituation and classical conditioning, seen in
many animal species, to more complex activities such as play, seen only in relatively intelligent animals[2]
[3]
and humans. Learning can therefore be either conscious or non-conscious.

For example, for small children, non-conscious learning processes are as natural as breathing. In fact,
there is evidence for behavioral learning prenatally, in which habituation has been observed as early as 32
weeks into gestation, indicating that the central nervous system is sufficiently developed and primed for
learning and memory to occur very early on in development.[4]

From the social perspective, learning should be the goal of teaching and education.

Conscious learning, such as that expected of students, is usually goal-oriented and requires
motivation.

Learning has also been mathematically modeled using a differential equation related to an arbitrarily
defined knowledge indicator with respect to time, and dependent on a number of interacting factors
(constants and variables) such as initial knowledge, motivation, intelligence, knowledge anchorage or
resistance, etc.[5][6] Thus, learning does not occur if the amount of knowledge does not change over
time, and learning is negative if the amount of knowledge decreases over time. Inspection of
the solution to the differential equation also shows the sigmoid and logarithmic decay learning curves, as
well as the knowledge carrying capacity for a given learner.
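As one way to make this concrete, the sketch below uses a logistic equation, a simple choice consistent with the sigmoid curve and knowledge carrying capacity mentioned above; the functional form and parameter values are illustrative assumptions, not the specific models of the cited works:

```python
# Minimal logistic model of a learning curve: dK/dt = r * K * (1 - K/C),
# where K is a knowledge indicator, r bundles factors such as motivation
# and ability, and C is the learner's knowledge carrying capacity.
# (Illustrative assumption; not the cited models.)
def simulate_learning(k0=0.05, rate=0.8, capacity=1.0, dt=0.1, steps=100):
    k = k0
    trajectory = [k]
    for _ in range(steps):
        k += rate * k * (1 - k / capacity) * dt  # explicit Euler step
        trajectory.append(k)
    return trajectory

curve = simulate_learning()
# Sigmoid shape: slow start, fast middle, saturation near the capacity.
assert curve[0] < curve[50] < curve[-1] <= 1.0
```

Setting `rate` to zero reproduces the no-change case (no learning), and a negative `rate` gives a decreasing knowledge indicator, the "negative learning" described above.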

Contents

• 1 Types of learning
o 1.1 Simple non-associative learning
 1.1.1 Habituation
 1.1.2 Sensitization
o 1.2 Associative learning
 1.2.1 Operant conditioning
 1.2.2 Classical conditioning
o 1.3 Imprinting
o 1.4 Observational learning
o 1.5 Play
o 1.6 Multimedia learning
o 1.7 e-Learning and m-Learning
o 1.8 Rote learning
o 1.9 Informal learning
o 1.10 Formal learning
o 1.11 Non-formal learning and combined approaches
o 1.12 Learning as a process you do, not a process that is done to you
• 2 See also
• 3 References

• 4 External links

[edit] Types of learning
[edit] Simple non-associative learning

[edit] Habituation

Main article: Habituation

In psychology, habituation is an example of non-associative learning in which there is a progressive
diminution of behavioral response probability with repetition of a stimulus. It is another form of
integration. An animal first responds to a stimulus, but if it is neither rewarding nor harmful the animal
reduces subsequent responses. One example of this can be seen in small song birds - if a stuffed owl (or
similar predator) is put into the cage, the birds initially react to it as though it were a real predator. Soon
the birds react less, showing habituation. If another stuffed owl is introduced (or the same one removed
and re-introduced), the birds react to it again as though it were a predator, demonstrating that it is only a
very specific stimulus that is habituated to (namely, one particular unmoving owl in one place).
Habituation has been shown in essentially every species of animal, including the large protozoan Stentor
coeruleus.[7]
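The owl example can be caricatured in a few lines of code. In this toy model (entirely illustrative; the decay factor is an assumption), response strength falls geometrically with repetitions of the same stimulus and returns in full for a novel one:

```python
# Toy model of habituation: response strength to a repeated stimulus
# decays, while a novel stimulus evokes the full response again (as with
# the second stuffed owl above). The decay factor is an illustrative
# assumption.
def respond(history, stimulus, decay=0.7):
    """Response strength in (0, 1], weakening with each repetition."""
    repetitions = history.count(stimulus)
    return decay ** repetitions

history = []
responses = []
for stim in ["owl_A"] * 4 + ["owl_B"]:
    responses.append(respond(history, stim))
    history.append(stim)

print([round(r, 2) for r in responses])  # [1.0, 0.7, 0.49, 0.34, 1.0]
```

The final value shows the stimulus specificity described above: habituation to one owl does not transfer to a new one.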

[edit] Sensitization

Main article: Sensitization

Sensitization is an example of non-associative learning in which the progressive amplification of a
response follows repeated administrations of a stimulus (Bell et al., 1995). An everyday example of this
mechanism is the repeated tonic stimulation of peripheral nerves that will occur if a person rubs his arm
continuously. After a while, this stimulation will create a warm sensation that will eventually turn painful.
The pain is the result of the progressively amplified synaptic response of the peripheral nerves warning
the person that the stimulation is harmful. Sensitization is thought to underlie both adaptive as well as
maladaptive learning processes in the organism.

[edit] Associative learning
[edit] Operant conditioning

Main article: Operant conditioning

Operant conditioning is the use of consequences to modify the occurrence and form of behavior. Operant
conditioning is distinguished from Pavlovian conditioning in that operant conditioning deals with the
modification of voluntary behavior. Discrimination learning is a major form of operant conditioning. One
form of it is called Errorless learning.

[edit] Classical conditioning

Main article: Classical conditioning

The typical paradigm for classical conditioning involves repeatedly pairing an unconditioned stimulus
(which unfailingly evokes a particular response) with another previously neutral stimulus (which does not
normally evoke the response). Following conditioning, the response occurs both to the unconditioned
stimulus and to the other, unrelated stimulus (now referred to as the "conditioned stimulus"). The
response to the conditioned stimulus is termed a conditioned response.
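This pairing paradigm is often formalized with the Rescorla-Wagner model, sketched below. The model is standard in the conditioning literature, but its use here and the parameter values are illustrative assumptions, not from the text:

```python
# Rescorla-Wagner sketch of classical conditioning: each CS-US pairing
# moves the conditioned stimulus's associative strength V toward the
# asymptote lam supported by the unconditioned stimulus, by a fraction
# alpha_beta per trial. (Parameter values are illustrative assumptions.)
def condition(trials, v0=0.0, alpha_beta=0.3, lam=1.0):
    v = v0
    strengths = []
    for _ in range(trials):
        v += alpha_beta * (lam - v)  # delta-V = alpha * beta * (lam - V)
        strengths.append(v)
    return strengths

strengths = condition(10)
print(round(strengths[0], 2), round(strengths[-1], 2))  # 0.3 0.97
```

The negatively accelerated growth of V mirrors the empirical acquisition curve: the conditioned response strengthens quickly at first, then levels off as V approaches the asymptote.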

[edit] Imprinting

Main article: Imprinting (psychology)

Imprinting is the term used in psychology and ethology to describe any kind of phase-sensitive learning
(learning occurring at a particular age or a particular life stage) that is rapid and apparently independent of
the consequences of behavior. It was first used to describe situations in which an animal or person learns
the characteristics of some stimulus, which is therefore said to be "imprinted" onto the subject.


[edit] Observational learning

Main article: Observational learning

The most common human learning process is imitation: one's personal repetition of an observed
behaviour, such as a dance. Humans can copy three types of information simultaneously: the
demonstrator's goals, actions, and environmental outcomes (results; see Emulation (observational
learning)). Through copying these types of information, (most) infants will tune into their surrounding
culture.

[edit] Play

Main article: Play (activity)

Play generally describes behavior which has no particular end in itself, but improves performance in
similar situations in the future. This is seen in a wide variety of vertebrates besides humans, but is mostly
limited to mammals and birds. Cats are known to play with a ball of string when young, which gives them
experience with catching prey. Besides inanimate objects, animals may play with other members of their
own species or other animals, such as orcas playing with seals they have caught. Play involves a
significant cost to animals, such as increased vulnerability to predators and the risk of injury and possibly
infection. It also consumes energy, so there must be significant benefits associated with play for it to have
evolved. Play is generally seen in younger animals, suggesting a link with learning. However, it may also
have other benefits not associated directly with learning, for example improving physical fitness.

[edit] Multimedia learning

Multimedia learning is learning in which the learner uses multimedia learning environments (Mayer,
2001). This type of learning relies on dual-coding theory (Paivio, 1971).

[edit] e-Learning and m-Learning

Electronic learning or e-learning is a general term used to refer to Internet-based, networked, computer-
enhanced learning. A specific and increasingly widespread form of e-learning is mobile learning
(m-learning), which uses mobile telecommunication equipment such as cellular phones.

[edit] Rote learning

Main article: Rote learning

Rote learning is a technique which avoids understanding the inner complexities and inferences of the
subject that is being learned and instead focuses on memorizing the material so that it can be recalled by
the learner exactly the way it was read or heard. The major practice involved in rote learning techniques is
learning by repetition, based on the idea that one will be able to quickly recall the meaning of the material
the more it is repeated. Rote learning is used in diverse areas, from mathematics to music to religion.
Although it has been criticized by some schools of thought, rote learning is a necessity in many situations.

[edit] Informal learning

Main article: Informal learning

Informal learning occurs through the experience of day-to-day situations (for example, one learns to
look ahead while walking because of the danger inherent in not paying attention to where one is going). It
is learning from life: during a meal at the table with parents, through play, or while exploring.

[edit] Formal learning

Main article: Education

A depiction of the world's oldest university, the University of Bologna, Italy

Formal learning is learning that takes place within a teacher-student relationship, such as in a school
system.
Non-formal learning is organized learning outside the formal learning system. For example, learning by
coming together with people who have similar interests and exchanging viewpoints, in clubs,
(international) youth organizations, or workshops.

[edit] Non-formal learning and combined approaches

The educational system may use a combination of formal, informal, and non-formal learning methods.
The UN and EU recognize these different forms of learning (cf. links below). In some schools, students
can earn points that count in the formal-learning system if they get work done in informal-learning
circuits. They may be given time to assist with international youth workshops and training courses, on the
condition that they prepare, contribute, share, and can prove this offered valuable new insights, helped
them acquire new skills, and gave them experience in organizing, teaching, etc.

In order to learn a skill, such as solving a Rubik's cube quickly, several factors come into play at once:

• Directions help one learn the patterns of solving a Rubik's cube
• Practicing the moves repeatedly and for extended time helps with "muscle memory" and therefore
speed
• Thinking critically about moves helps find shortcuts, which in turn helps to speed up future
attempts.
• The Rubik's cube's six colors help anchor the solving process in the mind.
• Occasionally revisiting the cube helps prevent negative learning or loss of skill.

[edit] Learning as a process you do, not a process that is done to you

Main article: Sudbury model

Some critics of today's schools, of the concept of learning disabilities, of special education, and of
response to intervention, take the position that every child has a different learning style and pace and that
each child is unique, not only capable of learning but also capable of succeeding.

Sudbury Model democratic schools assert that there are many ways to study and learn. They argue that
learning is a process you do, not a process that is done to you; that is true of everyone.[8] The
experience of Sudbury model democratic schools shows that there are many ways to learn without the
intervention of teaching, that is, without the intervention of a teacher being imperative. In the case of
reading, for instance, in the Sudbury model democratic schools some children learn from being read to,
memorizing the stories and then ultimately reading them. Others learn from cereal boxes, others from
game instructions, others from street signs. Some teach themselves letter sounds, others syllables, others
whole words. Sudbury model democratic schools assert that in their schools no child has ever been
forced, pushed, urged, cajoled, or bribed into learning how to read or write, and they have had no
dyslexia. None of their graduates are real or functional illiterates, and no one who meets their older
students could ever guess the age at which they first learned to read or write.[9] In a similar way, students
learn all the subjects, techniques and skills in these schools.

Critics describe current instructional methods as homogenization and lockstep standardization and
propose alternative approaches, such as the Sudbury Model of Democratic Education schools, in which
children, enjoying personal freedom and thus encouraged to exercise personal responsibility for their
actions, learn at their own pace and style rather than following a compulsory and chronologically-based
curriculum.[10][11][12][13] Proponents of unschooling have also claimed that children raised in this
method learn at their own pace and style, and do not suffer from learning disabilities.
[edit] See also
• Animal cognition
• Developmental Psychology
• History of education
• Intelligence
• Machine learning
• Pedagogy
• Reasoning
• Sequence learning
• Sleep and learning
• Study skills
[edit] References

Memory
In psychology, memory is an organism's mental ability to store, retain and recall information. Traditional
studies of memory began in the fields of philosophy, including techniques of artificially enhancing the
memory. The late nineteenth and early twentieth century put memory within the paradigms of cognitive
psychology. In recent decades, it has become one of the principal pillars of a branch of science called
cognitive neuroscience, an interdisciplinary link between cognitive psychology and neuroscience.

Contents

• 1 Processes
• 2 Classification
o 2.1 Sensory
o 2.2 Short-term
o 2.3 Long Term
• 3 Models
o 3.1 Multi-store (Atkinson-Shiffrin memory model)
o 3.2 Working memory
o 3.3 Levels of processing
• 4 Classification by information type
• 5 Classification by temporal direction
• 6 Physiology
• 7 Disorders
• 8 Memorization
• 9 Improving memory
• 10 Memory tasks
• 11 See also
• 12 Notes
• 13 References

• 14 External links

[edit] Processes
From an information processing perspective there are three main stages in the formation and retrieval of
memory:
• Encoding or registration (receiving, processing and combining of received information)
• Storage (creation of a permanent record of the encoded information)
• Retrieval or recall (calling back the stored information in response to some cue for use in a
process or activity)
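The three stages above can be illustrated with a loose programming analogy (this is an analogy, not a psychological model; the `ToyMemory` class and its method names are invented for the sketch): encoding transforms raw input, storage keeps the encoded record, and retrieval calls it back in response to a cue.

```python
# Toy analogy for the three memory processes: encode, store, retrieve.

class ToyMemory:
    def __init__(self):
        self.store = {}  # storage: permanent record of encoded items

    def encode(self, raw):
        # Encoding: receive, process and combine the incoming information
        # (here, trivially, by normalizing the text).
        return raw.strip().lower()

    def learn(self, cue, raw):
        self.store[cue] = self.encode(raw)

    def retrieve(self, cue):
        # Retrieval: call back the stored information in response to a cue.
        return self.store.get(cue)

m = ToyMemory()
m.learn("capital-fr", "  PARIS ")
print(m.retrieve("capital-fr"))  # -> paris
```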

[edit] Classification
A basic and generally accepted classification of memory is based on the duration of memory retention,
and identifies three distinct types of memory: sensory memory, short term memory and long term
memory.

[edit] Sensory

Sensory memory corresponds approximately to the initial 200 - 500 milliseconds after an item is
perceived. The ability to look at an item, and remember what it looked like with just a second of
observation, or memorization, is an example of sensory memory. With very short presentations,
participants often report that they seem to "see" more than they can actually report. The first experiments
exploring this form of sensory memory were conducted by George Sperling (1960) using the "partial
report paradigm." Subjects were presented with a grid of 12 letters, arranged into three rows of 4. After a
brief presentation, subjects were then played either a high, medium or low tone, cuing them which of the
rows to report. Based on these partial report experiments, Sperling was able to show that the capacity of
sensory memory was approximately 12 items, but that it degraded very quickly (within a few hundred
milliseconds). Because this form of memory degrades so quickly, participants would see the display, but
be unable to report all of the items (12 in the "whole report" procedure) before they decayed. This type of
memory cannot be prolonged via rehearsal.

[edit] Short-term

Short-term memory allows one to recall something from several seconds to as long as a minute without
rehearsal. Its capacity is also very limited: George A. Miller (1956), when working at Bell Laboratories,
conducted experiments showing that the store of short-term memory was 7±2 items (the title of his
famous paper, "The Magical Number Seven, Plus or Minus Two"). Modern estimates of the capacity of short-term memory are
lower, typically on the order of 4-5 items, and we know that memory capacity can be increased through a
process called chunking. For example, if presented with the string:

FBIPHDTWAIBM

people are able to remember only a few items. However, if the same information is presented in the
following way:

FBI PHD TWA IBM

people can remember far more letters. This is because they are able to chunk the information into
meaningful groups of letters. Beyond finding meaning in the abbreviations above, Herbert Simon showed
that the ideal size for chunking letters and numbers, meaningful or not, was three. This may be reflected
in some countries in the tendency to remember phone numbers as several chunks of three numbers with
the final four-number groups generally broken down into two groups of two.
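The chunking example above can be sketched in code: splitting the same 12-letter string into groups of three reduces the number of items to hold in mind from twelve letters to four meaningful chunks.

```python
# Split a string into fixed-size chunks, as in the FBI/PHD/TWA/IBM example.

def chunk(s, size=3):
    """Return the string s split into consecutive groups of `size` characters."""
    return [s[i:i + size] for i in range(0, len(s), size)]

letters = "FBIPHDTWAIBM"
print(chunk(letters))                                  # ['FBI', 'PHD', 'TWA', 'IBM']
print(len(letters), "letters ->", len(chunk(letters)), "chunks")  # 12 letters -> 4 chunks
```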

Short-term memory is believed to rely mostly on an acoustic code for storing information, and to a lesser
extent a visual code. Conrad (1964)[1] found that test subjects had more difficulty recalling collections of
words that were acoustically similar (e.g. dog, hog, fog, bog, log).
However, some individuals have been reported to be able to remember large amounts of information,
quickly, and be able to recall that information in seconds.

[edit] Long Term

Olin Levi Warner, Memory (1896). Library of Congress Thomas Jefferson Building, Washington, D.C.

The storage in sensory memory and short-term memory generally has a strictly limited capacity and
duration, which means that information is available for a certain period of time but is not retained
indefinitely. By contrast, long-term memory can store much larger quantities of information for
potentially unlimited duration (sometimes a whole life span). For example, given a random seven-digit
number, we may remember it for only a few seconds before forgetting, suggesting it was stored in our
short-term memory. On the other hand, we can remember telephone numbers for many years through
repetition; this information is said to be stored in long-term memory. While short-term memory encodes
information acoustically, long-term memory encodes it semantically: Baddeley (1966)[2] discovered that
after 20 minutes, test subjects had the least difficulty recalling a collection of words that had similar
meanings (e.g. big, large, great, huge).

Short-term memory is supported by transient patterns of neuronal communication, dependent on regions
of the frontal lobe (especially dorsolateral prefrontal cortex) and the parietal lobe. Long-term memories,
on the other hand, are maintained by more stable and permanent changes in neural connections widely
spread throughout the brain. The hippocampus is essential to the consolidation of information from
short-term to long-term memory (for learning new information), although it does not seem to store
information itself. Without the hippocampus, new memories cannot be stored into long-term
memory, and the person will have a very short attention span. Rather, the hippocampus may be involved in changing neural connections for a period
of three months or more after the initial learning. One of the primary functions of sleep is improving
consolidation of information, as it can be shown that memory depends on getting sufficient sleep between
training and test, and that the hippocampus replays activity from the current day while sleeping.
[edit] Models
Models of memory provide abstract representations of how memory is believed to work. Below are
several models proposed over the years by various psychologists. There is some controversy as
to whether there are several memory structures; for example, Tarnow (2005) finds it likely that
there is only one memory structure between 6 and 600 seconds.

[edit] Multi-store (Atkinson-Shiffrin memory model)

The multi-store model (also known as the Atkinson-Shiffrin memory model) was first proposed in 1968 by
Atkinson and Shiffrin.

The multi-store model has been criticized for being too simplistic. For instance, long-term memory is
believed to be actually made up of multiple subcomponents, such as episodic and procedural memory. It
also proposes that rehearsal is the only mechanism by which information eventually reaches long-term
storage, but evidence shows us capable of remembering things without rehearsal.

The model also treats each memory store as a single unit, whereas research shows otherwise. For
example, short-term memory can be broken up into different units such as visual information and
acoustic information. The case of patient KF illustrates this. Patient KF was brain damaged and had
problems with his short-term memory. He had problems with things such as spoken numbers, letters and
words and with significant sounds (such as doorbells and cats mewing). Other parts of STM were
unaffected, such as visual memory (pictures).

It also shows the sensory store as a single unit whilst we know that the sensory store is split up into
several different parts such as taste, vision, and hearing.

(See also: Memory consolidation)

[edit] Working memory

The working memory model.

In 1974 Baddeley and Hitch proposed a working memory model which replaced the concept of general
short term memory with specific, active components. In this model, working memory consists of three
basic stores: the central executive, the phonological loop and the visuo-spatial sketchpad. In 2000 this
model was expanded with the multimodal episodic buffer.[3]

The central executive essentially acts as attention. It channels information to the three component
processes: the phonological loop, the visuo-spatial sketchpad, and the episodic buffer.

The phonological loop stores auditory information by silently rehearsing sounds or words in a continuous
loop: the articulatory process. If this rehearsal is blocked, for example by repeating the word "the" over
and over again (articulatory suppression), then a list of short words is no easier to remember than a list of
long words.

The visuo-spatial sketchpad stores visual and spatial information. It is engaged when performing spatial
tasks (such as judging distances) or visual ones (such as counting the windows on a house or imagining
images).
The episodic buffer is dedicated to linking information across domains to form integrated units of visual,
spatial, and verbal information and chronological ordering (e.g., the memory of a story or a movie scene).
The episodic buffer is also assumed to have links to long-term memory and semantic meaning.

The working memory model explains many practical observations, such as why it is easier to do two
different tasks (one verbal and one visual) than two similar tasks (e.g., two visual), and the
aforementioned word-length effect. However, the concept of a central executive as noted here has been
criticized as inadequate and vague.[citation needed]

[edit] Levels of processing

Craik and Lockhart (1972) proposed that it is the method and depth of processing that affects how an
experience is stored in memory, rather than rehearsal.

• Organization - Mandler (1967) gave participants a pack of word cards and asked them to sort
them into any number of piles using any system of categorization they liked. When they were later
asked to recall as many of the words as they could, those who used more categories remembered
more words. This study suggested that the act of organizing information makes it more
memorable.
• Distinctiveness - Eysenck and Eysenck (1980) asked participants to say words in a distinctive
way, e.g. spell the words out loud. Such participants recalled the words better than those who
simply read them off a list.
• Effort - Tyler et al. (1979) had participants solve a series of anagrams, some easy (FAHTER) and
some difficult (HREFAT). The participants recalled the difficult anagrams better, presumably
because they put more effort into them.
• Elaboration - Palmere et al. (1983) gave participants descriptive paragraphs of a fictitious African
nation. There were some short paragraphs and some with extra sentences elaborating the main
idea. Recall was higher for the ideas in the elaborated paragraphs.

[edit] Classification by information type
Anderson (1976)[4] divides long-term memory into declarative (explicit) and procedural (implicit)
memories.

Declarative memory requires conscious recall, in that some conscious process must call back the
information. It is sometimes called explicit memory, since it consists of information that is explicitly
stored and retrieved.

Declarative memory can be further sub-divided into semantic memory, which concerns facts taken
independent of context; and episodic memory, which concerns information specific to a particular
context, such as a time and place. Semantic memory allows the encoding of abstract knowledge about the
world, such as "Paris is the capital of France". Episodic memory, on the other hand, is used for more
personal memories, such as the sensations, emotions, and personal associations of a particular place or
time. Autobiographical memory - memory for particular events within one's own life - is generally viewed
as either equivalent to, or a subset of, episodic memory. Visual memory is the part of memory that
preserves some characteristics of our senses pertaining to visual experience. One is able to store
information that resembles objects, places, animals or people as a kind of mental image. Visual memory
can result in priming, and it is assumed that some kind of perceptual representational system underlies this
phenomenon. [1]
In contrast, procedural memory (or implicit memory) is not based on the conscious recall of information,
but on implicit learning. Procedural memory is primarily employed in learning motor skills and should be
considered a subset of implicit memory. It is revealed when one does better in a given task due only to
repetition - no new explicit memories have been formed, but one is unconsciously accessing aspects of
those previous experiences. Procedural memory involved in motor learning depends on the cerebellum
and basal ganglia.

Topographic memory is the ability to orient oneself in space, to recognize and follow an itinerary, or to
recognize familiar places.[5] Getting lost when traveling alone is an example of the failure of topographic
memory. This is often reported among elderly patients who are evaluated for dementia. The disorder
could be caused by multiple impairments, including difficulties with perception, orientation, and memory.
[6]

[edit] Classification by temporal direction
A further major way to distinguish different memory functions is whether the content to be remembered is
in the past, retrospective memory, or whether the content is to be remembered in the future, prospective
memory. Thus, retrospective memory as a category includes semantic memory and
episodic/autobiographical memory. In contrast, prospective memory is memory for future intentions, or
remembering to remember (Winograd, 1988). Prospective memory can be further broken down into
event- and time-based prospective remembering. Time-based prospective memories are triggered by a
time-cue, such as going to the doctor (action) at 4pm (cue). Event-based prospective memories are
intentions triggered by cues, such as remembering to post a letter (action) after seeing a mailbox (cue).
Cues do not need to be related to the action (as the mailbox example is), and lists, sticky-notes, knotted
handkerchiefs, or string around the finger are all examples of cues that are produced by people as a
strategy to enhance prospective memory.

[edit] Physiology
Overall, the mechanisms of memory are not completely understood.[7] Brain areas such as the
hippocampus, the amygdala, the striatum, or the mammillary bodies are thought to be involved in specific
types of memory. For example, the hippocampus is believed to be involved in spatial learning and
declarative learning, while the amygdala is thought to be involved in emotional memory. Damage to
certain areas in patients and animal models and subsequent memory deficits is a primary source of
information. However, rather than implicating a specific area, it could be that damage to adjacent areas, or
to a pathway traveling through the area is actually responsible for the observed deficit. Further, it is not
sufficient to describe memory, and its counterpart, learning, as solely dependent on specific brain regions.
Learning and memory are attributed to changes in neuronal synapses, thought to be mediated by long-
term potentiation and long-term depression.

Hebb distinguished between short-term and long-term memory. He postulated that any memory that
stayed in short-term storage for a long enough time would be consolidated into a long-term memory.
Later research showed this to be false. Research has shown that direct injections of cortisol or epinephrine
help the storage of recent experiences. This is also true for stimulation of the amygdala. This suggests that
emotional arousal enhances memory through the stimulation of hormones that affect the amygdala.
Excessive or prolonged stress (with prolonged cortisol) may impair memory storage. Patients with
amygdalar damage are no more likely to remember emotionally charged words than nonemotionally
charged ones. The hippocampus is important for explicit memory and for memory consolidation. It
receives input from secondary and tertiary sensory areas of the cortex that have already processed the
information extensively, and sends its output to various parts of the brain. Hippocampal damage may also
cause memory loss and problems with memory storage[8].

[edit] Disorders
Much of the current knowledge of memory has come from studying memory disorders. Loss of memory
is known as amnesia. There are many sorts of amnesia, and by studying their different forms, it has
become possible to observe apparent defects in individual sub-systems of the brain's memory systems,
and thus hypothesize their function in the normally working brain. Other neurological disorders such as
Alzheimer's disease can also affect memory and cognition. Hyperthymesia, or hyperthymesic syndrome,
is a disorder which affects an individual's autobiographical memory, essentially meaning that they cannot
forget small details that otherwise would not be stored.[9] Korsakoff's syndrome, also known as
Korsakoff's psychosis, amnesic-confabulatory syndrome, is an organic brain disease that adversely affects
memory.

While not a disorder, a common temporary failure of word retrieval from memory is the tip-of-the-tongue
phenomenon. Sufferers of nominal aphasia (also called anomia), however, experience the tip-of-the-
tongue phenomenon on an ongoing basis due to damage to the frontal and parietal lobes of the brain.

[edit] Memorization
Memorization is a method of learning that allows an individual to recall information verbatim. Rote
learning is the method most often used. Methods of memorizing things have been the subject of much
discussion over the years with some writers, such as Cosmos Rossellius using visual alphabets. The
spacing effect shows that an individual is more likely to remember a list of items when rehearsal is spaced
over an extended period of time. In contrast to this is cramming which is intensive memorization in a
short period of time. Also relevant is the Zeigarnik effect which states that people remember uncompleted
or interrupted tasks better than completed ones.
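The spacing effect above suggests scheduling rehearsals at increasing intervals rather than cramming. The following sketch is illustrative only: the doubling rule and the starting interval are hypothetical choices for the example, not parameters from the research cited.

```python
# Illustrative spaced-rehearsal schedule: each gap between reviews
# doubles, spreading rehearsal over an extended period (the opposite
# of cramming everything into one session).

def review_days(n_reviews, first_interval=1):
    """Days (from day 0) on which an item is rehearsed, doubling the gap each time."""
    day, interval, days = 0, first_interval, []
    for _ in range(n_reviews):
        days.append(day)
        day += interval
        interval *= 2  # widen the spacing after every successful review
    return days

print(review_days(5))  # [0, 1, 3, 7, 15]
```

Five rehearsals thus cover a 15-day span instead of a single day, which is the pattern the spacing effect favors.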

In March 2007 German researchers found they could use odors to re-activate new memories in the brains
of people while they slept and the volunteers remembered better later.[10]

Tony Noice, an actor, director, teacher and cognitive researcher, and his psychologist wife Helga, have
studied how actors remember lines and found that their techniques can be useful to non-actors as well.[11]

At the Center for Cognitive Science at Ohio State University, researchers have found that memory
accuracy of adults is hurt by the fact that they know more than children and tend to apply this knowledge
when learning new information. The findings appeared in the August 2004 edition of the journal
Psychological Science.

Interference can hamper memorization and retrieval. Retroactive interference occurs when learning new
information causes one to forget old information; proactive interference occurs when previously learned
information makes it harder to learn similar new information. [12]

Emotion can have a powerful impact on memory. Numerous studies have shown that the most vivid
autobiographical memories tend to be of emotional events, which are likely to be recalled more often and
with more clarity and detail than neutral events. [13]

[edit] Improving memory
The best way to improve memory seems to be to increase the supply of oxygen to the brain, which may be
accomplished with aerobic exercise; walking for three hours each week suffices, as does swimming or
bicycle riding. One study found that eating frequently, such as five small meals a day, promotes a healthy
memory by preventing dips in blood glucose, the primary energy source for the brain. [14]

The International Longevity Center released in 2001 a report[15] which includes, in pages 14-16,
recommendations for keeping the mind functioning well into advanced age. Some of the
recommendations are to stay intellectually active through learning, training or reading, to keep physically
active so as to promote blood circulation to the brain, to socialize, to reduce stress, to keep sleep time
regular, to avoid depression or emotional instability, and to observe good nutrition.

[edit] Memory tasks
• Paired associate learning - when one learns to associate one specific word with another. For
example, when given a word such as "safe" one must learn to say another specific word, such as
"green". This is stimulus-response learning.[16]
• Free recall - during this task a subject is asked to study a list of words and then, sometime
later, to recall or write down as many words as they can remember.[17]
• Recognition- subjects are asked to remember a list of words or pictures, after which point they are
asked to identify the previously presented words or pictures from among a list of alternatives that
were not presented in the original list.[18]

[edit] See also
• Autobiographical memory
• Cellular memory
• Cultural memory
• Eidetic memory
• Emotion and memory
• Episodic memory
• False memory syndrome
• Forgetting curve
• Genetic memory
• Involuntary memory
• List of memory biases
• Memory and aging
• Memory inhibition
• Memory-prediction framework
• Method of loci
• Mnemonic
• Muscle memory
• Politics of memory
• Synaptic plasticity

[edit] Notes
Suffering
From Wikipedia, the free encyclopedia

This article is about suffering or pain in the broadest sense. For physical pain, see Pain. For other uses,
see The Suffering.

Suffering, or pain,[1] is an individual's basic affective experience of unpleasantness and aversion
associated with harm or threat of harm. Suffering may be qualified as physical,[2] or mental.[3] It may come
in all degrees of intensity, from mild to intolerable. Factors of duration and frequency of occurrence
usually compound that of intensity. In addition to such factors, people's attitudes toward suffering may
take into account how much it is, in their opinion, avoidable or unavoidable, useful or useless, deserved or
undeserved.

All sentient beings suffer during their lives, in diverse manners, and often dramatically. As a result, many
fields of human activity are concerned, from their own points of view, with some aspects of suffering.
These aspects may include its nature and processes, its origin and causes, its meaning and significance, its
related personal, social, and cultural behaviors, its remedies, management, and uses.

Contents

• 1 Terminology
• 2 Philosophy
• 3 Religion
• 4 Arts and literature
• 5 Social sciences
• 6 Biology, neurology, psychology
• 7 Health care
• 8 Relief and prevention in society
• 9 Uses
• 10 See also
• 11 Selected bibliography

• 12 Notes and references

[edit] Terminology
The word suffering is sometimes used in the narrow sense of physical pain, but more often it refers to
mental or emotional pain, or more often yet to pain in the broad sense, i.e. to any unpleasant feeling,
emotion or sensation.

The word pain usually refers to physical pain, but it is also a common synonym of suffering.

The words pain and suffering are often used both together in different ways. For instance, they may be
used as interchangeable synonyms. Or they may be used in 'contradistinction' to one another, as in "pain is
inevitable, suffering is optional", or "pain is physical, suffering is mental". Or they may be used to define
each other, as in "pain is physical suffering", or "suffering is severe physical or mental pain".

Qualifiers, such as mental, emotional, psychological, and spiritual, are often used for referring to certain
types of pain or suffering. In particular, 'mental pain (or suffering)' may be used in relationship with
'physical pain (or suffering)' for distinguishing between two wide categories of pain or suffering. A first
caveat concerning such a distinction is that it uses 'physical pain' in a sense that normally includes not
only the 'typical sensory experience' of 'physical pain' but also other unpleasant bodily experiences such
as itching or nausea. A second caveat is that the terms physical or mental should not be taken too literally:
physical pain or suffering, as a matter of fact, happens through conscious minds and involves emotional
aspects, while mental pain or suffering happens through physical brains and, being an emotion, involves
important physiological aspects.

Words that are roughly synonymic with suffering, in addition to pain, include distress, sorrow,
unhappiness, misery, affliction, woe, ill, discomfort, displeasure, disagreeableness, unpleasantness.

[edit] Philosophy
Hedonism, as an ethical theory, claims that good and bad consist ultimately in pleasure and pain. Many
hedonists, in accordance with Epicurus, emphasize avoiding suffering over pursuing pleasure, because
they find that the greatest happiness lies in a tranquil state (ataraxia) free from pain and from the
worrisome pursuit or unwelcome consequences of pleasure. For stoicism, the greatest good lies in reason
and virtue, but the soul best reaches it through a kind of indifference (apatheia) to pleasure and pain: as a
consequence, this doctrine has become identified with stern self-control in regard to suffering.

Jeremy Bentham developed hedonistic utilitarianism, a popular doctrine in ethics, politics, and economics.
Bentham argued that the right act or policy was that which would cause "the greatest happiness of the
greatest number". He suggested a procedure called hedonic or felicific calculus, for determining how
much pleasure and pain would result from any action. John Stuart Mill improved and promoted the
doctrine of hedonistic utilitarianism. Karl Popper, in The Open Society and Its Enemies, proposed a
negative utilitarianism, which prioritizes the reduction of suffering over the enhancement of happiness
when speaking of utility: "I believe that there is, from the ethical point of view, no symmetry between
suffering and happiness, or between pain and pleasure. (…) human suffering makes a direct moral appeal
for help, while there is no similar call to increase the happiness of a man who is doing well anyway."
David Pearce's utilitarianism asks straightforwardly for the abolition of suffering. Many utilitarians, since
Bentham, hold that the moral status of a being comes from its ability to feel pleasure and pain: therefore,
moral agents should consider not only the interests of human beings but also those of animals. Richard
Ryder developed such a view in his concepts of 'speciesism' and 'painism'. Peter Singer's writings,
especially the book Animal Liberation, represent the leading edge of this kind of utilitarianism for
animals as well as for people.
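Bentham's procedure lends itself to a toy illustration. The sketch below is only a caricature of the felicific calculus: the dimensions intensity, duration, certainty, and extent come from Bentham's list, but the multiplicative weighting and every name in the code are invented for illustration, not Bentham's actual method.

```python
from dataclasses import dataclass

@dataclass
class Episode:
    """One anticipated pleasure (positive intensity) or pain (negative) from an act."""
    intensity: float   # signed: > 0 pleasure, < 0 pain
    duration: float    # how long it lasts (arbitrary units)
    certainty: float   # probability it occurs, 0..1
    extent: int        # number of sentient beings affected

def hedonic_value(episodes):
    """Toy felicific calculus: sum expected (intensity x duration) over all affected."""
    return sum(e.intensity * e.duration * e.certainty * e.extent for e in episodes)

def better_act(a, b):
    """'Greatest happiness' choice between two acts, each a list of episodes."""
    return a if hedonic_value(a) >= hedonic_value(b) else b
```

On this toy model, Popper's negative-utilitarian asymmetry could be expressed by weighting negative intensities more heavily than positive ones.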

Another doctrine related to the relief of suffering is humanitarianism (see also humanitarian aid and
humane society). "Where humanitarian efforts seek a positive addition to the happiness of sentient beings,
it is to make the unhappy happy rather than the happy happier. (...) [Humanitarianism] is an ingredient in
many social attitudes; in the modern world it has so penetrated into diverse movements (...) that it can
hardly be said to exist in itself."[4]

Pessimism holds this world to be the worst possible, plagued with worsening and unstoppable suffering.
Arthur Schopenhauer recommends taking refuge in things like art, philosophy, loss of the will to live,
and tolerance toward 'fellow-sufferers'. Friedrich Nietzsche, first influenced by Schopenhauer, later
developed quite another attitude, exalting the will to power, despising weak compassion or pity, and
recommending that we willfully embrace the 'eternal return' of the greatest sufferings.

Philosophy of pain is a philosophical specialty that focuses on physical pain as a sensation. Through that
topic, it may also pertain to suffering in general.

[edit] Religion
Suffering plays an important role in most religions, regarding matters such as the following: consolation
or relief; moral conduct (do no harm, help the afflicted); spiritual advancement through life hardships or
through self-imposed trials (mortification of the flesh, penance, asceticism); ultimate destiny (salvation,
damnation, hell).

Theodicy deals with the problem of evil, which is the difficulty of reconciling an omnipotent and
benevolent god with evil. People often believe that the worst form of evil is extreme suffering, especially
in innocent children, or in beings created ultimately to be tormented without end (see problem of hell).

The Four Noble Truths of Buddhism are about dukkha, a term usually translated as suffering. The Four
Noble Truths state (1) the nature of suffering, (2) its cause, (3) its cessation, and (4) the way leading to its
cessation (which is the Noble Eightfold Path). Buddhism considers liberation from suffering as basic for
leading a holy life and attaining nirvana.

Hinduism holds that suffering follows naturally from personal negative behaviors in one’s current life or
in a past life (see karma). One must accept suffering as a just consequence and as an opportunity for
spiritual progress. Thus the soul or true self, which is eternally free of any suffering, may come to
manifest itself in the person, who then achieves liberation (moksha). Abstinence from causing pain or
harm to other beings (ahimsa) is a central tenet of Hinduism.

The Bible's Book of Job reflects on the nature and meaning of suffering.

Pope John Paul II wrote "On the Christian Meaning of Human Suffering".[5] This meaning revolves
around the notion of redemptive suffering.
[edit] Arts and literature
Artistic and literary works often engage with suffering, sometimes at great cost to their creators or
performers. The Literature, Arts, and Medicine Database offers a list of such works under the categories
art, film, literature, and theater. Be it in the tragic, comic or other genres, art and literature offer means to
alleviate (and perhaps also exacerbate) suffering, as argued for instance in Harold Schweizer's Suffering
and the remedy of art.[6]

Landscape with the Fall of Icarus

This painting by Breughel is among those that inspired W.H. Auden's poem Musée des Beaux Arts:

About suffering they were never wrong,
The Old Masters: how well they understood
Its human position; how it takes place
While someone else is eating or opening a window or just walking dully along;
(...)
In Breughel's Icarus, for instance: how everything turns away
Quite leisurely from the disaster; (...) [7]

[edit] Social sciences
Social suffering, according to Arthur Kleinman and others, describes "collective and individual human
suffering associated with life conditions shaped by powerful social forces."[8] Such suffering is an
increasing concern in medical anthropology, ethnography, mass media analysis, and Holocaust studies,
says Iain Wilkinson,[9] who is developing a sociology of suffering.

The Encyclopedia of World Problems and Human Potential is a work by the Union of International
Associations. Its main databases are about world problems (56,564 profiles), global strategies and
solutions (32,547 profiles), human values (3,257 profiles), and human development (4,817 profiles). It
states that "the most fundamental entry common to the core parts is that of pain (or suffering)" and
"common to the core parts is the learning dimension of new understanding or insight in response to
suffering."[10]

Ralph G.H. Siu, an American author, urged in 1988 the "creation of a new and vigorous academic
discipline, called panetics, to be devoted to the study of the infliction of suffering."[11] The International
Society for Panetics was founded in 1991 to study and develop ways to reduce the infliction of human
suffering by individuals acting through professions, corporations, governments, and other social groups.[12]

In economics, the following notions relate not only to the matters suggested by their positive appellations,
but to the matter of suffering as well: Well-being or Quality of life, Welfare economics, Happiness
economics, Gross National Happiness, Genuine Progress Indicator.

In law, "Pain and suffering" is a legal term that refers to the mental anguish or physical pain endured by a
plaintiff as a result of injury for which the plaintiff seeks redress.

[edit] Biology, neurology, psychology
Pain and pleasure, in the broad sense of these words, are respectively the negative and positive affects, or
hedonic tones, or valences that psychologists often identify as basic in our emotional lives.[13] The
evolutionary role of physical and mental suffering, through natural selection, is primordial: it warns of
threats, motivates coping (fight or flight, escapism), and reinforces negatively certain behaviors (see
punishment, aversives). Despite its initial disrupting nature, suffering contributes to the organization of
meaning in an individual's world and psyche. In turn, meaning determines how individuals or societies
experience and deal with suffering.

Neuroimaging sheds light on the seat of suffering

Many brain structures and physiological processes take part in the occurrence of suffering. Various
hypotheses try to account for the experience of unpleasantness. One of these, the pain overlap theory,[14]
notes that in neuroimaging studies the cingulate cortex fires up when the brain feels unpleasantness from
experimentally induced social distress as well as from physical pain. The theory therefore proposes
that physical pain and social pain (i.e. two radically differing kinds of suffering) share a
common phenomenological and neurological basis.

According to David Pearce’s online manifesto The Hedonistic Imperative, suffering is the avoidable result
of Darwinian genetic design. BLTC Research and the Abolitionist Society,[15] following Pearce's
abolitionism, promote replacing the pain/pleasure axis with a robot-like response to noxious stimuli[16] or
with gradients of bliss,[17] through genetic engineering and other technical scientific advances.

Hedonistic psychology,[18] affective science, and affective neuroscience are some of the emerging
scientific fields that could in the coming years focus their attention on the phenomenon of suffering.

[edit] Health care
Disease and injury cause suffering in humans and animals. Health care addresses this suffering in many
ways, in medicine, clinical psychology, psychotherapy, alternative medicine, hygiene, public health, and
through various health care providers.

Health care approaches to suffering, however, remain problematic, according to Eric Cassell, the most
cited author on that subject. Cassell writes: "The obligation of physicians to relieve human suffering
stretches back to antiquity. Despite this fact, little attention is explicitly given to the problem of suffering
in medical education, research or practice." Cassell defines suffering as "the state of severe distress
associated with events that threaten the intactness of the person."[19] Medicine makes a strong distinction
between physical pain and suffering, and most attention goes to the treatment of pain. Nevertheless,
physical pain itself still lacks adequate attention from the medical community, according to numerous
reports.[20] Besides, some medical fields, like palliative care, pain management (or pain medicine),
oncology, or psychiatry, do somewhat address suffering 'as such'. In palliative care, for instance,
pioneer Cicely Saunders created the concept of 'total pain' (now termed 'total suffering' in the
textbooks[21]), which encompasses the whole set of physical and mental distress, discomfort, symptoms,
problems, or needs that a patient may experience hurtfully.

[edit] Relief and prevention in society
Since suffering is such a universal motivating experience, people, when asked, can relate their activities to
its relief and prevention. Farmers, for instance, may claim that they prevent famine, artists may say that
they take our minds off our worries, and teachers may hold that they hand down tools for coping with life
hazards. In certain aspects of collective life, however, suffering is more readily an explicit concern by
itself. Such aspects may include public health, human rights, humanitarian aid, disaster relief,
philanthropy, economic aid, social services, insurance, and animal welfare. To these can be added the
aspects of security and safety, which relate to precautionary measures taken by individuals or families, to
interventions by the military, the police, the firefighters, and to notions or fields like social security,
environmental security, and human security.

[edit] Uses
Philosopher Leonard Katz wrote: "But Nature, as we now know, regards ultimately only fitness and not
our happiness (...), and does not scruple to use hate, fear, punishment and even war alongside affection in
ordering social groups and selecting among them, just as she uses pain as well as pleasure to get us to
feed, water and protect our bodies and also in forging our social bonds".[22]

People make use of suffering for specific social or personal purposes in many areas of human life, as can
be seen in the following instances.

• In arts, literature, or entertainment, people may use suffering for creation, for performance, or for
enjoyment. Entertainment particularly makes use of suffering in blood sports, violence in the
media, or violent video games.

• In business and various organizations, suffering may be used for constraining humans or animals
into required behaviors.

• In a criminal context, people may use suffering for coercion, revenge, or pleasure.

• In interpersonal relationships, especially in places like families, schools, or workplaces, suffering
is used for various motives, particularly in the form of abuse and punishment. In another
fashion related to interpersonal relationships, the sick, or victims, or malingerers, may use
suffering more or less voluntarily to get primary, secondary, or tertiary gain.

• In law, suffering is used for punishment (see penal law ); victims may refer to what legal texts call
"pain and suffering" to get compensation; lawyers may use a victim's suffering as an argument
against the accused; an accused's or defendant's suffering may be an argument in their favor.

• In the news media, suffering is often the raw material.[23]
• In personal conduct, people may use suffering for themselves, in a positive way.[24] Personal
suffering may lead, if bitterness, depression, or spitefulness is avoided, to character-building,
spiritual growth, or moral achievement;[25] realizing the extent or gravity of suffering in the world
may motivate one to relieve it and may give an inspiring direction to one's life. Alternatively,
people may make self-detrimental use of suffering. Some may be caught in compulsive
reenactment of painful feelings in order to protect themselves from seeing that those feelings have their
origin in unmentionable past experiences; some may addictively indulge in disagreeable emotions
like fear, anger, or jealousy, in order to enjoy pleasant feelings of arousal or release that often
accompany these emotions; some may engage in acts of self-harm aimed at relieving otherwise
unbearable states of mind.

• In politics, there is purposeful infliction of suffering in war, torture, and terrorism; people may use
nonphysical suffering against competitors in nonviolent power struggles; people who argue for a
policy may put forward the need to relieve, prevent or avenge suffering; individuals or groups may
use past suffering as a political lever in their favor.

• In religion, suffering is used especially to grow spiritually, to expiate, to inspire compassion and
help, to frighten, to punish.

• In rites of passage, rituals that make use of suffering are frequent.

• In science, humans and animals are subjected on purpose to unpleasant experiences for the study
of suffering or other phenomena.

• In sex, individuals may use suffering in a context of sadism and masochism or BDSM.

• In sports, suffering may be used to outperform competitors or oneself; see sports injury, and no
pain no gain; see also blood sport and violence in sport as instances of pain-based entertainment.

[edit] See also
Topics related to suffering

Pain-related topics: Pain · Pain (philosophy) · Weltschmerz · Psychogenic pain

Evil-related topics: Evil · Problem of evil · Good and evil: welfarist theories

Sympathy-related topics: Sympathy · Pity · Mercy · Compassion · Compassion fatigue · Empathy

Cruelty-related topics: Cruelty · Schadenfreude · Sadistic personality disorder · Violence · Physical abuse ·
Psychological abuse · Emotional abuse · Self-harm

Death-related topics: Euthanasia · Animal euthanasia · Suicide

Other related topics: Stress · Dukkha · Theory of relative suffering · Amor fati · Dystopia · Victimology ·
Penology · Pleasure · Happiness

[edit] Selected bibliography
• Joseph A. Amato. Victims and Values: A History and a Theory of Suffering. New York: Praeger,
1990. ISBN 0-275-93690-2
• Cynthia Halpern. Suffering, Politics, Power : A Genealogy in Modern Political Theory. Albany:
State University of New York Press, 2002. ISBN 0-7914-5103-8
• Jamie Mayerfeld. Suffering and Moral Responsibility. New York: Oxford University Press, 2005.
ISBN 0-19-515495-9
• David B. Morris. The Culture of Pain. Berkeley: University of California, 2002. ISBN 0-520-08276-1
• Elaine Scarry. The Body in Pain: The Making and Unmaking of the World. New York: Oxford
University Press, 1987. ISBN 0-19-504996-9

[edit] Notes and references

Pain
From Wikipedia, the free encyclopedia

This article is about physical pain. For pain in a broader sense, see suffering. For other uses, see Pain
(disambiguation).
Pain

ICD-10 R52

ICD-9 338

DiseasesDB 9503

MedlinePlus 002164

MeSH D010146
Pain, in the sense of physical pain,[1] is a typical sensory experience that may be described as the
unpleasant awareness of a noxious stimulus or bodily harm. Individuals experience pain through various
daily hurts and aches, and occasionally through more serious injuries or illnesses. For scientific and clinical
purposes, pain is defined by the International Association for the Study of Pain (IASP) as "an unpleasant
sensory and emotional experience associated with actual or potential tissue damage, or described in terms
of such damage".[2][3]

Pain is highly subjective to the individual experiencing it. A definition that is widely used in nursing was
first given as early as 1968 by Margo McCaffery: "'Pain is whatever the experiencing person says it is,
existing whenever he says it does".[4][5]

Pain of any type is the most frequent reason for physician consultation in the United States, prompting
half of all Americans to seek medical care annually.[6] It is a major symptom in many medical conditions,
significantly interfering with a person's quality of life and general functioning. Diagnosis is based on
characterizing pain in various ways, according to duration, intensity, type (dull, burning or stabbing),
source, or location in body. Usually pain stops without treatment or responds to simple measures such as
resting or taking an analgesic, and it is then called ‘acute’ pain. But it may also become intractable and
develop into a condition called chronic pain, in which pain is no longer considered a symptom but an
illness in its own right. The study of pain has in recent years attracted many different fields such as
pharmacology, neurobiology, nursing sciences, dentistry, physiotherapy, and psychology. Pain medicine
is a separate subspecialty[7] under some medical specialties such as anesthesiology, physiatry,
neurology, and psychiatry.

Pain is part of the body's defense system, triggering a reflex reaction to retract from a painful stimulus,
and helps adjust behaviour to increase avoidance of that particular harmful situation in the future. Given
its significance, physical pain is also linked to various cultural, religious, philosophical, or social issues.

Etymology: "Pain (n.) 1297, "punishment," especially for a crime; also (c.1300) "condition one feels
when hurt, opposite of pleasure," from O.Fr. peine, from L. poena "punishment, penalty" (in L.L. also
"torment, hardship, suffering"), from Gk. poine "punishment," from PIE *kwei- "to pay, atone,
compensate" (...)." —Online Etymology Dictionary

Contents

• 1 Clarification on the use of certain pain-related terms
• 2 Mechanism
• 3 Evolutionary and behavioral role
• 4 Diagnosis
o 4.1 Verbal characterization
o 4.2 Intensity
o 4.3 Localization
• 5 Management
o 5.1 Anesthesia
o 5.2 Analgesia
o 5.3 Complementary and alternative medicine
• 6 Special cases
o 6.1 Phantom pain
o 6.2 Pain asymbolia
o 6.3 Insensitivity to pain
o 6.4 Psychogenic pain
o 6.5 Pain as pleasure
• 7 Society and culture
• 8 In other species
• 9 Notes and references

• 10 External links

[edit] Clarification on the use of certain pain-related terms
• The word pain used without a modifier usually refers to physical pain, but it may also refer to pain
in the broad sense, i.e. suffering. The latter includes physical pain and mental pain, or any
unpleasant feeling, sensation, and emotion. It may be described as a private feeling of
unpleasantness and aversion associated with harm or threat of harm in an individual. Care should
be taken to make the appropriate distinction when required between the two meanings. For
instance, philosophy of pain is essentially about physical pain, while a philosophical outlook on
pain is rather about pain in the broad sense. Or, as another quite different instance, nausea or itch
are not 'physical pains', but they are unpleasant sensory or bodily experiences, and a person
'suffering' from severe or prolonged nausea or itch may be said to be 'in pain'.
• Nociception, the unconscious activity induced by a harmful stimulus in sense receptors, peripheral
nerves, spinal column and brain, should not be confused with physical pain, which is a conscious
experience. Nociception or noxious stimuli usually cause pain, but not always, and sometimes pain
occurs without them.[8]
• Qualifiers, such as mental, emotional, psychological, and spiritual, are often used for referring to
more specific types of pain or suffering. In particular, 'mental pain' may be used in relationship
with 'physical pain' for distinguishing between two wide categories of pain. A first caveat
concerning such a distinction is that it uses 'physical pain' in a sense that normally includes not
only the 'typical sensory experience' of 'physical pain' but also other unpleasant bodily experience
such as itch or nausea. A second caveat is that the terms physical or mental should not be taken too
literally: physical pain, as a matter of fact, happens through conscious minds and involves
emotional aspects, while mental pain happens through physical brains and, being an emotion,
involves important bodily physiological aspects.
• The term unpleasant or unpleasantness commonly means painful or painfulness in a broad sense.
It is also used in (physical) pain science for referring to the affective dimension of pain, usually in
contrast with the sensory dimension. For instance: “Pain-unpleasantness is often, though not
always, closely linked to both the intensity and unique qualities of the painful sensation.”[9] Pain
science acknowledges, in a puzzling challenge to IASP definition, that pain may be experienced as
a sensation devoid of any unpleasantness: see below pain asymbolia.[10]
• Suffering is sometimes used in the specific narrow sense of physical pain, but more often it refers
to mental pain, or more often yet to pain in the broad sense. Suffering is described as an
individual's basic affective experience of unpleasantness and aversion associated with harm or
threat of harm.
The terms pain and suffering are often used together in different senses which can become confusing, for
example:

• being used as synonyms;
• being used in contradistinction to one another: e.g. "pain is inevitable, suffering is optional", or
"pain is physical, suffering is mental";
• being used to define each other: e.g. "pain is physical suffering", or "suffering is severe physical or
mental pain".

To avoid confusion: this article is about physical pain in the narrow sense of a typical sensory experience
associated with actual or potential tissue damage. This excludes pain in the broad sense of any unpleasant
experience, which is covered in detail by the article Suffering.

[edit] Mechanism
Stimulation of a nociceptor, due to a chemical, thermal, or mechanical event that has the potential to
damage body tissue, may cause nociceptive pain.

Damage to the nervous system itself, due to disease or trauma, may cause neuropathic (or neurogenic)
pain.[11] Neuropathic pain may refer to peripheral neuropathic pain, which is caused by damage to nerves,
or to central neuropathic pain, which is caused by damage to the brain, brainstem, or spinal cord.

Nociceptive pain and neuropathic pain are the two main kinds of pain when the primary mechanism of
production is considered. A third kind may be mentioned: see below psychogenic pain.

Nociceptive pain may be classified further in three types that have distinct organic origins and felt
qualities.[12]

1. Superficial somatic pain (or cutaneous pain) is caused by injury to the skin or superficial tissues.
Cutaneous nociceptors terminate just below the skin, and due to the high concentration of nerve
endings, produce a sharp, well-defined, localized pain of short duration. Examples of injuries that
produce cutaneous pain include minor wounds, and minor (first degree) burns.

2. Deep somatic pain originates from ligaments, tendons, bones, blood vessels, fasciae, and muscles.
It is detected with somatic nociceptors. The scarcity of pain receptors in these areas produces a
dull, aching, poorly-localized pain of longer duration than cutaneous pain; examples include
sprains, broken bones, and myofascial pain.

3. Visceral pain originates from the body's viscera, or organs. Visceral nociceptors are located within
body organs and internal cavities. The even greater scarcity of nociceptors in these areas produces
pain that is usually more aching or cramping and of a longer duration than somatic pain. Visceral
pain may be well-localized, but often it is extremely difficult to localize, and several injuries to
visceral tissue exhibit "referred" pain, where the sensation is localized to an area completely
unrelated to the site of injury.
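The three-way classification above can be condensed into a small lookup table. This is a hypothetical summary structure, not an established medical coding: the dictionary keys and field names are invented, and the values merely paraphrase the descriptions given in the text.

```python
# Hypothetical summary of the three nociceptive pain types described above.
NOCICEPTIVE_TYPES = {
    "superficial somatic": {
        "origin": "skin or superficial tissues",
        "quality": "sharp, well-defined, localized",
        "duration": "short",
    },
    "deep somatic": {
        "origin": "ligaments, tendons, bones, blood vessels, fasciae, muscles",
        "quality": "dull, aching, poorly localized",
        "duration": "longer than cutaneous pain",
    },
    "visceral": {
        "origin": "organs and internal cavities",
        "quality": "aching or cramping, hard to localize, may be 'referred'",
        "duration": "longer than somatic pain",
    },
}

def describe(pain_type):
    """One-line description of a nociceptive pain type from the table."""
    t = NOCICEPTIVE_TYPES[pain_type]
    return f"{pain_type} pain: {t['quality']} (from {t['origin']})"
```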

Nociception is the unconscious afferent activity produced in the peripheral and central nervous system by
stimuli that have the potential to damage tissue. It should not be confused with pain, which is a conscious
experience.[8] It is initiated by nociceptors that can detect mechanical, thermal or chemical changes above
a certain threshold. All nociceptors are free nerve endings of fast-conducting myelinated A delta fibers or
slow-conducting unmyelinated C fibers, respectively responsible for fast, localized, sharp pain and slow,
poorly-localized, dull pain. Once stimulated, they transmit signals that travel along the spinal cord and
within the brain. Nociception, even in the absence of pain, may trigger withdrawal reflexes and a variety
of autonomic responses such as pallor, diaphoresis, bradycardia, hypotension, lightheadedness, nausea
and fainting.[13]

Brain areas that are particularly studied in relation with pain include the somatosensory cortex which
mostly accounts for the sensory discriminative dimension of pain, and the limbic system, of which the
thalamus and the anterior cingulate cortex are said to be especially involved in the affective dimension.

The gate control theory of pain describes how the perception of pain is not a direct result of activation of
nociceptors, but instead is modulated by interaction between different neurons, both pain-transmitting and
non-pain-transmitting. In other words, the theory asserts that activation, at the spine level or even by
higher cognitive brain processes, of nerves or neurons that do not transmit pain signals can interfere with
signals from pain fibers and inhibit or modulate an individual's experience of pain.
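The gating idea can be caricatured numerically. The sketch below assumes a purely linear interaction, which has no physiological standing; it only illustrates the claim that non-pain-transmitting activity and higher cognitive processes can attenuate the signal from pain fibers. All names and units are invented.

```python
def perceived_pain(nociceptive_input, non_nociceptive_input=0.0, descending_inhibition=0.0):
    """Toy gate-control sketch (arbitrary units): competing non-painful afferent
    activity and top-down inhibition 'close the gate', reducing the transmitted
    pain signal; the output is clipped at zero."""
    return max(0.0, nociceptive_input - non_nociceptive_input - descending_inhibition)
```

Rubbing a bumped elbow, on this caricature, raises `non_nociceptive_input` and so lowers the result.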

Pain may be experienced differently depending on genotype; for example, individuals with red hair may
be more susceptible to pain caused by heat,[14] but redheads with a non-functional melanocortin 1 receptor
(MC1R) gene are less sensitive to pain from electric shock.[15] Gene Nav1.7 has been identified as a major
factor in the development of the pain-perception systems within the body. A rare genetic mutation in this
area causes non-functional development of certain sodium channels in the nervous system, which prevents
the brain from receiving messages of physical damage, resulting in congenital insensitivity to pain.[16] The
same gene also appears to mediate a form of pain hyper-sensitivity, while other mutations may be the root
of paroxysmal extreme pain disorder.[16][17]

[edit] Evolutionary and behavioral role
Pain is part of the body's defense system, triggering mental and physical behavior to end the painful
experience. It promotes learning so that repetition of the painful situation will be less likely.

Despite its unpleasantness, pain is an important part of the existence of humans and other animals; in fact,
it is vital to healthy survival (see below Insensitivity to pain). Pain encourages an organism to disengage
from the noxious stimulus associated with the pain. Preliminary pain can serve to indicate that an injury is
imminent, such as the ache from a soon-to-be-broken bone. Pain may also promote the healing process,
since most organisms will protect an injured region in order to avoid further pain.

The brain itself is devoid of nociceptive tissue, and hence cannot experience pain. Thus, a
headache is not due to stimulation of pain fibers in the brain itself. Rather, the membrane surrounding the
brain and spinal cord, called the dura mater, is innervated with pain receptors, and stimulation of these
dural nociceptors is thought to be involved to some extent in producing headache pain. The
vasoconstriction of pain-innervated blood vessels in the head is another common cause. Some
evolutionary biologists have speculated that this lack of nociceptive tissue in the brain might be because
any injury of sufficient magnitude to cause pain in the brain has a sufficiently high probability of being
fatal that development of nociceptive tissue therein would have little to no survival benefit.

Chronic pain, in which the pain becomes pathological rather than beneficial, may be an exception to the
idea that pain is helpful to survival, although some specialists believe that psychogenic chronic pain exists
as a protective distraction to keep dangerous repressed emotions such as anger or rage unconscious.[18] It is
not clear what the survival benefit of some extreme forms of pain (e.g. toothache) might be, and the
intensity of some forms of pain (for example as a result of injury to fingernails or toenails) seems to be
out of all proportion to any survival benefits.

[edit] Diagnosis
To establish an understanding of an individual's pain, health-care practitioners will typically try to
establish certain characteristics of the pain: site, onset and offset, character, radiation, associated
symptoms, time pattern, exacerbating and ameliorating factors and severity.[19]

By using the gestalt of these characteristics, the source or cause of the pain can often be established. A
complete diagnosis of pain also requires looking at the patient's general condition, symptoms, and
history of illness or surgery. The physician may order blood tests, X-rays, scans, EMG, etc. Pain clinics
may investigate the person's psychosocial history and situation.

Pain assessment also uses the concepts of pain threshold, the least experience of pain which a subject can
recognize, and pain tolerance, the greatest level of pain which a subject is prepared to tolerate. The
most frequent technical terms for abnormal perturbations in pain experience include:

• allodynia, pain due to a stimulus which does not normally provoke pain,
• hyperalgesia, an increased response to a stimulus which is normally painful,
• hypoalgesia, diminished pain in response to a normally painful stimulus.[20]

[edit] Verbal characterization

A key characteristic of pain is its quality. Typical descriptions of pain quality include sharp, stabbing,
tearing, squeezing, cramping, burning, lancinating (electric-shock like), or heaviness. It may be
experienced as throbbing, dull, nauseating, shooting or a combination of these. Indeed, individuals who
are clearly in extreme distress such as from a myocardial infarction may not describe the sensation as
pain, but instead as an extreme heaviness on the chest. Another individual with pain in the same region
and with the same intensity may describe the pain as tearing which would lead the practitioner to consider
aortic dissection. Inflammatory pain is commonly associated with some degree of itch sensation, leading
to a chronic urge to rub or otherwise stimulate the affected area. The difference between these diagnoses
and many others rests on the quality of the pain. The McGill Pain Questionnaire is an instrument often
used for verbal assessment of pain.

[edit] Intensity

Pain may range in intensity from slight through severe to agonizing and can appear as constant or
intermittent. The threshold of pain varies widely between individuals. Many attempts have been made to
create a pain scale that can be used to quantify pain, for instance on a numeric scale that ranges from 0 to
10 points. In this scale, zero would be no pain at all and ten would be the worst pain imaginable. The
purpose of these scales is to monitor an individual's pain over time, allowing care-givers to see how a
patient responds to therapy for example. Accurate quantification can also allow researchers to compare
results between groups of patients.
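
The 0-to-10 numeric rating described above can be sketched as a small helper. The category boundaries below are illustrative assumptions for the sake of the example, not a clinical standard, and `trend` is a hypothetical helper for comparing ratings over time.

```python
def rate_pain(score: int) -> str:
    """Map a 0-10 numeric pain rating to a coarse label.

    The cut-off points used here are illustrative only; real
    instruments define their own categories.
    """
    if not 0 <= score <= 10:
        raise ValueError("score must be between 0 and 10")
    if score == 0:
        return "no pain"
    if score <= 3:
        return "mild"
    if score <= 6:
        return "moderate"
    return "severe"


def trend(ratings):
    """Difference between the last and first rating in a series,
    e.g. to see how a patient responds to therapy over time."""
    return ratings[-1] - ratings[0]
```

For example, a series of ratings recorded across several visits, such as `[8, 6, 3]`, yields a negative trend, suggesting the therapy is helping.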

[edit] Localization

Pains are usually named according to their subjective localization in a specific area or region of the body:
headache, toothache, shoulder pain, abdominal pain, back pain, joint pain, myalgia, etc. Localization is
not always accurate in defining the problematic area, although it will often help narrow the diagnostic
possibilities. Some pain sensations may be diffuse (radiating) or referred. Radiation of pain occurs in
neuralgia when stimulus of a nerve at one site is perceived as pain in the sensory distribution of that
nerve. Sciatica, for instance, involves pain running down the back of the buttock, leg and bottom of foot
that results from compression of a nerve root in the lumbar spine. Referred pain usually happens when
sensory fibres from the viscera enter the same segment of the spinal cord as somatic nerves i.e. those from
superficial tissues. The sensory nerve from the viscera stimulates the nearby somatic nerve so that the
pain localization in the brain is confused. A well-known example is when the pain of a heart attack is felt
in the left arm rather than in the chest.[21]

[edit] Management
Main article: Pain management

Medical management of pain has given rise to a distinction between acute pain and chronic pain. Acute pain is 'normal' pain: it is felt when stubbing a toe, breaking a bone, having a toothache, or walking after an extensive surgical operation. Chronic pain is a 'pain illness': it is felt day after day, month after month, and seems impossible to heal.

In general, physicians are more comfortable treating acute pain, which usually is caused by soft tissue
damage, infection and/or inflammation among other causes. It is usually treated simultaneously with
pharmaceuticals, commonly analgesics, or appropriate techniques for removing the cause and for
controlling the pain sensation. The failure to treat acute pain properly may lead to chronic pain in some
cases.[22]

General physicians have only elementary training in chronic pain management, and patients suffering from it are often referred to various medical specialists. Though usually caused by an injury, an operation, or an obvious illness, chronic pain may also have no apparent cause, or may be caused by a developing illness or imbalance. This disorder can trigger multiple psychological problems that confound both patient and health care providers, leading to various differential diagnoses and to the patient's feelings of helplessness and hopelessness. Multidisciplinary pain clinics have been growing in number for several decades.

[edit] Anesthesia

Anesthesia is the condition of having the feeling of pain and other sensations blocked by drugs that induce a lack of awareness. It may be a total or a minimal lack of awareness throughout the body (i.e. general anesthesia), or a lack of awareness in a part of the body (i.e. regional or local anesthesia).

[edit] Analgesia

Main article: Analgesic

Analgesia is an alteration of the sense of pain without loss of consciousness. The body possesses an
endogenous analgesia system, which can be supplemented with painkillers or analgesic drugs to regulate
nociception and pain. Analgesia may occur in the central nervous system or in peripheral nerves and
nociceptors. The perception of pain can also be modified by the body according to the gate control theory
of pain.

The endogenous central analgesia system is mediated by three major components: the periaqueductal grey
matter, the nucleus raphe magnus and the nociception inhibitory neurons within the dorsal horns of the
spinal cord, which act to inhibit nociception-transmitting neurons also located in the spinal dorsal horn.
The peripheral regulation consists of several different types of opioid receptors that are activated in
response to the binding of the body's endorphins. These receptors, which exist in a variety of areas in the
body, inhibit firing of neurons that would otherwise be stimulated to do so by nociceptors.[citation needed]

The gate control theory of pain postulates that nociception is "gated" by non-noxious stimuli such as
vibration. Thus, rubbing a bumped knee seems to relieve pain by preventing its transmission to the brain.
Pain is also "gated" by signals that descend from the brain to the spinal cord to suppress (and in other
cases enhance) incoming nociceptive information.

[edit] Complementary and alternative medicine

A survey of American adults found pain was the most common reason that people use complementary and
alternative medicine.

Traditional Chinese medicine views pain as a 'blocked' qi, akin to electrical resistance, with treatments
such as acupuncture claimed as more effective for nontraumatic pain than traumatic pain. Although the
mechanism is not fully understood, acupuncture may stimulate the release of large quantities of
endogenous opioids.[23]

Pain treatment may be sought through the use of nutritional supplements such as curcumin, glucosamine,
chondroitin, bromelain and omega-3 fatty acids.

Hypnosis, as well as various perceptual techniques that provoke altered states of consciousness, has proven to be of significant help in the management of all types of pain.[24]

Some kinds of physical manipulation or exercise have also shown promising results.[25]

[edit] Special cases
[edit] Phantom pain

Main article: Phantom pain

Phantom pain is the sensation of pain from a limb or organ that has been lost or from which a person no
longer receives physical signals. Phantom limb pain is an experience almost universally reported by
amputees and quadriplegics. Phantom pain is a neuropathic pain.

[edit] Pain asymbolia

Pain science acknowledges, in a puzzling challenge to the IASP definition,[3] that pain may be experienced as a sensation devoid of any unpleasantness: this happens in a syndrome called pain asymbolia or pain dissociation, caused by conditions like lobotomy, cingulotomy or morphine analgesia. Typically, such patients report that they have pain but are not bothered by it: they recognize the sensation of pain but are mostly or completely immune to suffering from it.[10]

[edit] Insensitivity to pain

The ability to experience pain is essential for protection from injury, and recognition of the presence of
injury. Insensitivity to pain may occur in special circumstances, such as for an athlete in the heat of competition, or for an injured soldier relieved to leave the battlefield. This phenomenon is now explained by
the gate control theory. However, insensitivity to pain may also be an acquired impairment following
conditions such as spinal cord injury, diabetes mellitus, or more rarely Hansen's Disease (leprosy).[26] A
few people can also suffer from congenital insensitivity to pain, or congenital analgesia, a rare genetic
defect that puts these individuals at constant risk from the consequences of unrecognized injury or illness.
Children with this condition suffer repeated, unnoticed injuries to their tongue, eyes, bones, skin, and muscles. They may attain adulthood, but they have a shortened life expectancy.

[edit] Psychogenic pain

Main article: Psychogenic pain

Psychogenic pain, also called psychalgia or somatoform pain, is physical pain that is caused, increased, or
prolonged by mental, emotional, or behavioral factors.[27][28] Headache, back pain, or stomach pain are
some of the most common types of psychogenic pain.[27] Sufferers are often stigmatized, because both
medical professionals and the general public tend to think that pain from a psychological source is not
"real". However, specialists consider that it is no less actual or hurtful than pain from other sources.

[edit] Pain as pleasure

See also: algolagnia and sadomasochism

[edit] Society and culture
Physical pain has been diversely understood or defined from antiquity to modern times.[29]

Philosophy of pain is a branch of philosophy of mind that deals essentially with physical pain. Identity
theorists assert that the mental state of pain is completely identical with some physical state caused by
various physiological causes. Functionalists consider pain to be defined completely by its causal role and
nothing else.

Religious or secular traditions usually define the nature or meaning of physical pain in every society.[30]
Sometimes, extreme practices are highly regarded: mortification of the flesh, painful rites of passage,
walking on hot coals, etc.

Variations in pain threshold or in pain tolerance occur between individuals because of genetics, but also
according to cultural background, ethnicity and sex.

Physical pain is an important political topic in relation to various issues, including resources distribution
for pain management, drug control, animal rights, torture, pain compliance (see also pain beam, pain
maker, pain ray). Corporal punishment is the deliberate infliction of pain intended to punish a person or
change his/her behavior. Historically speaking, most punishments, whether in judicial, domestic, or educational settings, were corporal in nature.[citation needed]

More generally, physical pain is dealt with in cultural, religious, philosophical, or social contexts as part of pain in the broad sense, i.e. suffering.

[edit] In other species
The presence of pain in an animal, or another human for that matter, cannot be known for sure, but it can
be inferred through physical and behavioral reactions.[31] Specialists currently believe that all vertebrates
can feel pain, and that certain invertebrates, like the octopus, might too.[32][33] As for other animals, plants,
or other entities, their ability to feel physical pain is at present a question beyond scientific reach, since no
mechanism is known by which they could have such a feeling. In particular, there are no known
nociceptors in groups such as plants, fungi, and most insects,[34] except for instance in fruit flies.[35]

Veterinary medicine uses, for actual or potential animal pain, the same analgesics and anesthetics as used
in humans.[36]
[edit] Notes and references
Sense
This article is about the empirical or natural senses of living organisms (vision, taste, etc.). For other
uses, see Sense (disambiguation).

Depictions of the five senses became a popular subject for seventeenth-century artists, especially among
Dutch and Flemish Baroque painters. A typical example is Gérard de Lairesse's Allegory of the Five
Senses (1668; Kelvingrove Art Gallery and Museum), in which each of the figures in the main group
allude to a sense: sight is the reclining boy with a convex mirror, hearing is the cupid-like boy with a
triangle, smell is represented by the girl with flowers, taste by the woman with the fruit and touch by the
woman holding the bird.
Senses are the physiological methods of perception. The senses and their operation, classification, and
theory are overlapping topics studied by a variety of fields, most notably neuroscience, cognitive
psychology (or cognitive science), and philosophy of perception. The nervous system has a specific
sensory system, or organ, dedicated to each sense.

Contents
[hide]

• 1 Definition of sense
• 2 Senses
o 2.1 Sight
o 2.2 Hearing
o 2.3 Taste
o 2.4 Smell
o 2.5 Touch
o 2.6 Balance and acceleration
o 2.7 Temperature
o 2.8 Kinesthetic sense
o 2.9 Pain
o 2.10 Other internal senses
• 3 Non-human senses
o 3.1 Analogous to human senses
 3.1.1 Smell
 3.1.2 Vision
 3.1.3 Balance
o 3.2 Not analogous to human senses
• 4 See also
o 4.1 Research Centers
• 5 References

• 6 External links

[edit] Definition of sense
There is no firm agreement among neurologists as to the number of senses because of differing definitions
of what constitutes a sense. One definition states that an exteroceptive sense is a faculty by which outside
stimuli are perceived.[1] The traditional five senses are sight, hearing, touch, smell, taste: a classification
attributed to Aristotle.[2] Humans also have at least six additional senses (a total of eleven including
interoceptive senses) that include: nociception (pain), equilibrioception (balance), proprioception &
kinesthesia (joint motion and acceleration), sense of time, thermoception (temperature differences), and in
some a weak magnetoception (direction)[3].

One commonly recognized categorisation for human senses is as follows: chemoreception;
photoreception; mechanoreception; and thermoception. Indeed, all human senses fit into one of these four
categories.
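
The four-way categorisation above can be illustrated with a simple lookup table. The sense-to-category assignments below are an illustrative sketch based on common placements, not an authoritative taxonomy.

```python
# Illustrative mapping of human senses to the four receptor
# categories named in the text. Assignments are a sketch, not
# an authoritative classification.
RECEPTOR_CATEGORIES = {
    "taste": "chemoreception",
    "smell": "chemoreception",
    "sight": "photoreception",
    "touch": "mechanoreception",
    "hearing": "mechanoreception",
    "balance": "mechanoreception",
    "temperature": "thermoception",
}


def senses_in(category: str):
    """Return every sense filed under the given receptor category."""
    return sorted(s for s, c in RECEPTOR_CATEGORIES.items() if c == category)
```

Note how both taste and smell fall under chemoreception, while hearing, touch, and balance are all mechanoreceptive, consistent with the claim that all human senses fit into one of the four categories.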

Different senses also exist in other creatures, for example electroreception.

A broadly acceptable definition of a sense would be "a system that consists of a group of sensory cell
types that responds to a specific physical phenomenon, and that corresponds to a particular group of
regions within the brain where the signals are received and interpreted." Disputes about the number of
senses typically arise around the classification of the various cell types and their mapping to regions of the
brain.

[edit] Senses
[edit] Sight

Sight or vision is the ability of the brain and eye to detect electromagnetic waves within the visible range
(light) interpreting the image as "sight." There is disagreement as to whether this constitutes one, two or
three senses. Neuroanatomists generally regard it as two senses, given that different receptors are
responsible for the perception of colour (the frequency of photons of light) and brightness
(amplitude/intensity - number of photons of light). Some argue[citation needed] that stereopsis, the perception of
depth, also constitutes a sense, but it is generally regarded as a cognitive (that is, post-sensory) function of
brain to interpret sensory input and to derive new information. The inability to see is called blindness.

[edit] Hearing

Hearing or audition is the sense of sound perception. Since sound is vibrations propagating through a
medium such as air, the detection of these vibrations, that is the sense of the hearing, is a mechanical
sense akin to a sense of touch, albeit a very specialized one. In humans, this perception is executed by tiny
hair fibres in the inner ear which detect the motion of a membrane which vibrates in response to changes
in the pressure exerted by atmospheric particles within a range of 20 to 22000 Hz, with substantial
variation between individuals. Sound can also be detected as vibrations conducted through the body by
tactition. Frequencies lower and higher than the audible range can be detected only in this way. The inability to hear is called deafness.

[edit] Taste

Taste or gustation is one of the two main "chemical" senses. There are at least four types of tastes[4] that
"buds" (receptors) on the tongue detect, and hence there are anatomists who argue[citation needed] that these
constitute five or more different senses, given that each receptor conveys information to a slightly
different region of the brain[citation needed]. The inability to taste is called ageusia.

The four well-known receptors detect sweet, salt, sour, and bitter, although the receptors for sweet and
bitter have not been conclusively identified. A fifth receptor, for a sensation called umami, was first
theorised in 1908 and its existence confirmed in 2000[5]. The umami receptor detects the amino acid
glutamate, a flavour commonly found in meat and in artificial flavourings such as monosodium glutamate.

Note that taste is not the same as flavour; flavour includes the smell of a food as well as its taste.

[edit] Smell

Smell or olfaction is the other "chemical" sense. Unlike taste, there are hundreds of olfactory receptors,
each binding to a particular molecular feature. Odour molecules possess a variety of features and thus
excite specific receptors more or less strongly. This combination of excitatory signals from different
receptors makes up what we perceive as the molecule's smell. In the brain, olfaction is processed by the
olfactory system. Olfactory receptor neurons in the nose differ from most other neurons in that they die
and regenerate on a regular basis. The inability to smell is called anosmia.

[edit] Touch

Touch, also called mechanoreception or somatic sensation, is the sense of pressure perception,
generally in the skin. There are a variety of nerve endings that respond to variations in pressure (e.g., firm,
brushing, and sustained). The inability to feel anything or almost anything is called anesthesia. Paresthesia
is a sensation of tingling, pricking, or numbness of a person's skin with no apparent long term physical
effect.

[edit] Balance and acceleration

Balance, Equilibrioception, or vestibular sense, is the sense which allows an organism to sense body
movement, direction, and acceleration, and to attain and maintain postural equilibrium and balance. The
organ of equilibrioception is the vestibular labyrinthine system found in both of the inner ears.
Technically this organ is responsible for two senses, angular momentum and linear acceleration (which
also senses gravity), but they are known together as equilibrioception.

The vestibular nerve conducts information from the three semicircular canals, corresponding to the three
spatial planes, the utricle, and the saccule. The ampulla, or base, portion of the three semicircular canals
each contain a structure called a crista. These bend in response to angular momentum or spinning. The
saccule and utricle, also called the "otolith organs", sense linear acceleration and thus gravity. Otoliths are
small crystals of calcium carbonate that provide the inertia needed to detect changes in acceleration or
gravity.

[edit] Temperature

Thermoception is the sense of heat and the absence of heat (cold), perceived by the skin and by internal skin
passages. The thermoceptors in the skin are quite different from the homeostatic thermoceptors in the
brain (hypothalamus) which provide feedback on internal body temperature.

[edit] Kinesthetic sense

Proprioception, the kinesthetic sense, provides the parietal cortex of the brain with information on the
relative positions of the parts of the body. Neurologists test this sense by telling patients to close their
eyes and touch the tip of a finger to their nose. Assuming proper proprioceptive function, at no time will
the person lose awareness of where the hand actually is, even though it is not being detected by any of the
other senses. Proprioception and touch are related in subtle ways, and their impairment results in
surprising and deep deficits in perception and action. [6]

[edit] Pain

Nociception (physiological pain) signals near-damage or damage to tissue. The three types of pain
receptors are cutaneous (skin), somatic (joints and bones) and visceral (body organs). It was believed that
pain was simply the overloading of pressure receptors, but research in the first half of the 20th century
indicated that pain is a distinct phenomenon that intertwines with all of the other senses, including touch.
Pain was once considered an entirely subjective experience, but recent studies show that pain is registered
in the anterior cingulate gyrus of the brain.[7]

[edit] Other internal senses
An internal sense or interoception is "any sense that is normally stimulated from within the body."[8]
These involve numerous sensory receptors in internal organs, such as stretch receptors that are
neurologically linked to the brain.

• Pulmonary stretch receptors are found in the lungs and control the respiratory rate.
• Cutaneous receptors in the skin not only respond to touch, pressure, and temperature, but also
respond to vasodilation in the skin such as blushing.
• Stretch receptors in the gastrointestinal tract sense gas distension that may result in colic pain.
• Stimulation of sensory receptors in the esophagus results in sensations felt in the throat when
swallowing, vomiting, or during acid reflux.
• Sensory receptors in pharynx mucosa, similar to touch receptors in the skin, sense foreign objects
such as food that may result in a gagging reflex and corresponding gagging sensation.
• Stimulation of sensory receptors in the urinary bladder and rectum may result in sensations of
fullness.
• Stimulation of stretch sensors that sense dilation of various blood vessels may result in pain, for
example headache caused by vasodilation of brain arteries.

[edit] Non-human senses
[edit] Analogous to human senses

Other living organisms have receptors to sense the world around them, including many of the senses listed
above for humans. However, the mechanisms and capabilities vary widely.

[edit] Smell

Among non-human species, dogs have a much keener sense of smell than humans, although the
mechanism is similar. Insects have olfactory receptors on their antennae.

[edit] Vision

Cats have the ability to see in low light due to muscles surrounding their irises that contract and expand the pupils, as well as the tapetum lucidum, a reflective membrane that optimizes the image. Pitvipers, pythons
and some boas have organs that allow them to detect infrared light, such that these snakes are able to
sense the body heat of their prey. The common vampire bat may also have an infrared sensor on its nose.[9]
It has been found that birds and some other animals are tetrachromats and have the ability to see in the
ultraviolet down to 300 nanometers. Bees are also able to see in the ultraviolet.

[edit] Balance

Ctenophores have a balance receptor (a statocyst) that works very differently from the mammalian semicircular canals.

[edit] Not analogous to human senses

In addition, some animals have senses that humans do not, including the following:

• Electroception (or "electroreception"), the most significant of the non-human senses, is the
ability to detect electric fields. Several species of fish, sharks and rays have the capacity to sense
changes in electric fields in their immediate vicinity. Some fish passively sense changing nearby
electric fields; some generate their own weak electric fields, and sense the pattern of field
potentials over their body surface; and some use these electric field generating and sensing
capacities for social communication. The mechanisms by which electroceptive fish construct a
spatial representation from very small differences in field potentials involve comparisons of spike
latencies from different parts of the fish's body.

The only order of mammals that is known to demonstrate electroception is the monotreme order.
Among these mammals, the platypus[10] has the most acute sense of electroception.
Body modification enthusiasts have experimented with magnetic implants to attempt to replicate
this sense;[11] however, in general humans (and probably other mammals) can detect electric fields
only indirectly by detecting the effect they have on hairs. An electrically charged balloon, for
instance, will exert a force on human arm hairs, which can be felt through tactition and identified
as coming from a static charge (and not from wind or the like). This is however not electroception
as it is a post-sensory cognitive action.

• Echolocation is the ability to determine orientation to other objects through interpretation of
reflected sound (like sonar). Bats and cetaceans are noted for this ability, though some other
animals use it, as well. It is most often used to navigate through poor lighting conditions or to
identify and track prey. It is currently uncertain whether this is simply an extremely developed post-sensory interpretation of auditory perceptions or whether it actually constitutes a separate
sense. Resolution of the issue will require brain scans of animals while they actually perform
echolocation, a task that has proven difficult in practice. Blind people report they are able to
navigate by interpreting reflected sounds (esp. their own footsteps), a phenomenon which is
known as Human echolocation.

• Magnetoception (or "magnetoreception") is the ability to detect fluctuations in magnetic fields
and is most commonly observed in birds, though it has also been observed in insects such as bees.
Although there is no dispute that this sense exists in many avians (it is essential to the navigational
abilities of migratory birds), it is not a well-understood phenomenon[12]. One study has found that
cattle make use of magnetoception, as they tend to align themselves in a North-South direction[13].
Magnetotactic bacteria build miniature magnets inside themselves and use them to determine their
orientation relative to the Earth's magnetic field.[citation needed]

• Pressure detection uses the lateral line, which is a pressure-sensing system of hairs found in fish
and some aquatic amphibians. It is used primarily for navigation, hunting, and schooling. Humans
have a basic relative-pressure detection ability when eustachian tube(s) are blocked, as
demonstrated in the ear's response to changes in altitude.

• Polarized light direction / detection is used by bees to orient themselves, especially on cloudy
days. Cuttlefish can also perceive the polarization of light. Most sighted humans can in fact learn
to roughly detect large areas of polarization by an effect called Haidinger's brush, however this is
considered an Entoptic phenomenon rather than a separate sense.

[edit] See also
• Attention
• Basic tastes
• Communication
• Empiricism
• Extrasensory perception
• Hypersensors (people with unusual sense abilities)
o Human echolocation
o Supertaster
o Vision-related: Haidinger's brush (ordinary people sensing light polarisation),
Tetrachromat (increased colour perception)
• Illusions
o Auditory illusion
o Optical illusion
o Touch illusion
• Intuition
• Multimodal integration
• Perception
• Phantom limb
• Sensation and perception psychology
• Sense of time
• Sensitivity (human)
• Sensorium
• Synesthesia

[edit] Research Centers

• Howard Hughes Medical Institute (HHMI)
• Institute for Advanced Science & Engineering (IASE)

[edit] References

Hearing (sense)
"Listening" redirects here. For other uses, see Listen.

Hearing (or audition) is one of the traditional five senses. It is the ability to perceive sound by detecting
vibrations via an organ such as the ear. The inability to hear is called deafness.

In humans and other vertebrates, hearing is performed primarily by the auditory system: vibrations are
detected by the ear and transduced into nerve impulses that are perceived by the brain (primarily in the
temporal lobe). Like touch, audition requires sensitivity to the movement of molecules in the world
outside the organism. Both hearing and touch are types of mechanosensation.[1]

Contents
[hide]

• 1 Hearing tests
• 2 Hearing underwater
• 3 Hearing in animals
• 4 References
• 5 See also

• 6 External links
[edit] Hearing tests
Main article: Hearing test

Hearing can be measured by behavioral tests using an audiometer. Electrophysiological tests of hearing
can provide accurate measurements of hearing thresholds even in unconscious subjects. Such tests include
auditory brainstem evoked potentials (ABR), otoacoustic emissions (OAE) and electrocochleography
(EchoG). Technical advances in these tests have allowed hearing screening for infants to become
widespread.

[edit] Hearing underwater
Hearing threshold and the ability to localize sound sources are reduced underwater, where the speed of
sound is faster than in air. Underwater hearing is by bone conduction, and localization of sound appears to
depend on differences in amplitude detected by bone conduction.[2]

[edit] Hearing in animals

Not all sounds are normally audible to all animals. Each species has a range of normal hearing for both
loudness (amplitude) and pitch (frequency). Many animals use sound to communicate with each other,
and hearing in these species is particularly important for survival and reproduction. In species that use
sound as a primary means of communication, hearing is typically most acute for the range of pitches
produced in calls and speech.

Frequencies capable of being heard by humans are called audio or sonic. The range is typically considered to be between 20 Hz and 20,000 Hz.[3] Frequencies higher than audio are referred to as ultrasonic, while
frequencies below audio are referred to as infrasonic. Some bats use ultrasound for echolocation while in
flight. Dogs are able to hear ultrasound, which is the principle of 'silent' dog whistles. Snakes sense
infrasound through their bellies, and whales, giraffes and elephants use it for communication.
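
The frequency bands described above can be sketched as a small classifier. The 20 Hz and 20,000 Hz boundaries are those given in the text; the band names follow the infrasonic/audio/ultrasonic terminology defined there.

```python
def classify_frequency(hz: float) -> str:
    """Classify a frequency relative to the typical human audible
    range of 20 Hz to 20,000 Hz given in the text."""
    if hz <= 0:
        raise ValueError("frequency must be positive")
    if hz < 20:
        return "infrasonic"  # below human hearing; sensed by snakes, whales, elephants
    if hz <= 20_000:
        return "audio"       # the typical human sonic range
    return "ultrasonic"      # above human hearing; used by bats, heard by dogs
```

For example, a dog whistle pitched above 20,000 Hz would classify as ultrasonic, which is why it is inaudible to most humans.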

[edit] References
1. ^ Kung C. (2005-08-04). "A possible unifying principle for mechanosensation". Nature 436
(7051): 647–654. doi:10.1038/nature03896.
http://www.nature.com/nature/journal/v436/n7051/full/nature03896.html.
2. ^ Shupak A. Sharoni Z. Yanir Y. Keynan Y. Alfie Y. Halpern P. (January 2005). "Underwater
Hearing and Sound Localization with and without an Air Interface". Otology & Neurotology 26
(1): 127–130. doi:10.1097/00129492-200501000-00023. http://otology-
neurotology.com/pt/re/otoneuroto/abstract.00129492-200501000-
00023.htm;jsessionid=Hn3GlTRJcB530CTrCxLlgrJLhv6WyCvpgcBmC0FLJCLWgY5yckpm!
1138671057!181195629!8091!-1?
index=1&database=ppvovft&results=1&count=10&searchid=1&nav=search.
3. ^ "Frequency Range of Human Hearing". The Physics Factbook.

[edit] See also
• Active listening
• Audiogram
• Audiometry
• Auditory illusion
• Auditory brainstem response (ABR) test
• Auditory scene analysis
• Auditory system
• Cochlear implant
• Equal-loudness contour
• Hearing impairment
• Hearing range
• Missing fundamental
• Music
• Music and the brain
• National Day of Listening
• Presbycusis
• Tinnitus

[edit] External links
• Egopont hearing range test

Retrieved from "http://en.wikipedia.org/wiki/Hearing_(sense)"
Categories: Hearing | Sound

Somatosensory system
From Wikipedia, the free encyclopedia

"Touch" redirects here. For other uses, see Touch (disambiguation).

The somatosensory system is a diverse sensory system comprising the receptors and processing centres
to produce the sensory modalities such as touch, temperature, proprioception (body position), and
nociception (pain). The sensory receptors cover the skin and epithelia, skeletal muscles, bones and joints,
internal organs, and the cardiovascular system. While touch is considered one of the five traditional
senses, the impression of touch is formed from several modalities; in medicine, the colloquial term touch
is usually replaced with somatic senses to better reflect the variety of mechanisms involved.

The system reacts to diverse stimuli using different receptors: thermoreceptors, mechanoreceptors and
chemoreceptors. Transmission of information from the receptors passes via sensory nerves through tracts
in the spinal cord and into the brain. Processing primarily occurs in the primary somatosensory area in the
parietal lobe of the cerebral cortex.

At its simplest, the system works when a sensory neuron is triggered by a specific stimulus, such as heat;
this neuron passes the signal to an area of the brain uniquely attributed to that area of the body, allowing
the processed stimulus to be felt at the correct location. The mapping of the body surfaces in the brain is
called a homunculus and is essential in the creation of a body image.

Contents

• 1 Anatomy
o 1.1 General somatosensory pathway
o 1.2 Periphery
o 1.3 Spinal cord
o 1.4 Brain
• 2 Physiology
• 3 Technology
• 4 See also
• 5 Notes
• 6 References

• 7 External links

Anatomy
The somatosensory system is spread through all major parts of a mammal's body (and those of other vertebrates).
It extends from sensory receptors and sensory (afferent) neurones in the periphery (in the skin, muscles and
organs, for example) to deeper neurones within the central nervous system.

General somatosensory pathway

A somatosensory pathway typically consists of three long neurons[1]: primary, secondary and tertiary (or first,
second, and third).

• The first neuron always has its cell body in the dorsal root ganglion of the spinal nerve (if
sensation is in head or neck, it will be the trigeminal nerve ganglia or the ganglia of other sensory
cranial nerves).
• The second neuron has its cell body either in the spinal cord or in the brainstem. This neuron's
ascending axons will cross (decussate) to the opposite side either in the spinal cord or in the
brainstem. The axons of many of these neurones terminate in the thalamus (for example the
ventral posterior nucleus, VPN), others terminate in the reticular system or the cerebellum.
• In the case of touch and certain types of pain, the third neuron has its cell body in the VPN of the
thalamus and ends in the postcentral gyrus of the parietal lobe.
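The three-neuron relay described above can be sketched as a simple data structure; the stage descriptions are paraphrased from the text, while the variable and function names are purely illustrative assumptions, not standard terminology:

```python
# Illustrative sketch of the three-neuron somatosensory relay for touch
# and certain types of pain. Stage descriptions paraphrase the text above.
PATHWAY = [
    ("first-order", "cell body in the dorsal root ganglion "
                    "(or a cranial nerve ganglion for head/neck)"),
    ("second-order", "cell body in the spinal cord or brainstem; "
                     "axon decussates and often ends in the thalamus (VPN)"),
    ("third-order", "cell body in the VPN of the thalamus; "
                    "projects to the postcentral gyrus"),
]

def describe(pathway):
    """Return one human-readable line per relay stage."""
    return [f"{order} neuron: {role}" for order, role in pathway]

for line in describe(PATHWAY):
    print(line)
```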

Periphery

In the periphery, the somatosensory system detects various stimuli through sensory receptors, e.g.
mechanoreceptors for tactile sensation and nociceptors for pain sensation. The sensory information
(touch, pain, temperature, etc.) is then conveyed to the central nervous system by afferent neurones. There
are a number of different types of afferent neurone, which vary in their size, structure and properties.
Generally there is a correlation between the type of sensory modality detected and the type of afferent
neurone involved: for example, slow, thin, unmyelinated neurones conduct pain, whereas faster, thicker,
myelinated neurones conduct light touch.

Spinal cord

In the spinal cord, the somatosensory system [2] includes ascending pathways from the body to the brain.
One major target within the brain is the postcentral gyrus in the cerebral cortex. This is the target for
neurones of the Dorsal Column Medial Lemniscal pathway and the Ventral Spinothalamic pathway. Note
that many ascending somatosensory pathways include synapses in either the thalamus or the reticular
formation before they reach the cortex. Other ascending pathways, particularly those involved with
control of posture are projected to the cerebellum. These include the ventral and dorsal spinocerebellar
tracts. Other important targets for afferent somatosensory neurones entering the spinal cord are the
neurones involved in local segmental reflexes.

Brain

The primary somatosensory area of the human cortex is located in the postcentral gyrus of the parietal
lobe, the main receptive area for the sense of touch. Like other sensory areas, it contains a map of sensory
space called a homunculus; for the primary somatosensory cortex, this is called the sensory homunculus.
Areas of this part of the human brain map to particular areas of the body, depending on the amount or
importance of somatosensory input from that area. For example, a large area of cortex is devoted to
sensation in the hands, while the back has a much smaller area. One study found the somatosensory cortex
to be, on average, 21% thicker in 24 migraine sufferers than in 12 controls[3], although the significance of
this finding is not yet known. Somatosensory information involved with proprioception and posture also
targets an entirely different part of the brain, the cerebellum.

Physiology
Probably all somatosensation is initiated by the activation of some sort of physical "receptor".
These somatosensory receptors tend to lie in skin, organs or muscle. The structure of these receptors is
broadly similar in all cases, consisting of either a "free nerve ending" or a nerve ending embedded in a
specialised capsule. They can be activated by movement (mechanoreceptor), pressure (mechanoreceptor),
chemical (chemoreceptor) and/or temperature. In each case, the general principle of activation is similar;
the stimulus causes depolarisation of the nerve ending and then an action potential is initiated. This action
potential then (usually) travels inward towards the spinal cord.
Technology
The new research area of haptic technology can provide touch sensation in virtual and real environments.
This new discipline has started to provide critical insights into touch capabilities.

See also
• Cell signalling
• Special senses
• Cellular Cognition
• Muscle spindle

Notes
1. ^ Saladin KS. Anatomy and Physiology. 3rd ed. 2004. McGraw-Hill, New York.
2. ^ Nolte J.The Human Brain 5th ed. 2002. Mosby Inc, Missouri.
3. ^ "Thickening in the somatosensory cortex of patients with migraine." Alexandre F.M. DaSilva,
Cristina Granziera, Josh Snyder, and Nouchine Hadjikhani. Neurology, Nov 2007; 69: 1990 -
1995.

References
• Emile L. Boulpaep; Walter F. Boron (2003). Medical Physiology. Saunders. pp. 352–358. ISBN 0-
7216-3256-4.

• Flanagan, J.R., Lederman, S.J. Neurobiology: Feeling bumps and holes, News and Views, Nature,
2001 Jul. 26;412(6845):389-91.

• Hayward V, Astley OR, Cruz-Hernandez M, Grant D, Robles-De-La-Torre G. Haptic interfaces
and devices. Sensor Review 24(1), pp. 16-29 (2004).

• Robles-De-La-Torre G., Hayward V. Force Can Overcome Object Geometry In the perception of
Shape Through Active Touch. Nature 412 (6845):445-8 (2001).

• Robles-De-La-Torre G. The Importance of the Sense of Touch in Virtual and Real Environments.
IEEE Multimedia 13(3), Special issue on Haptic User Interfaces for Multimedia Systems, pp. 24-30
(2006).

External links
• 'Somatosensory & Motor research' (Informa Healthcare)

Retrieved from "http://en.wikipedia.org/wiki/Somatosensory_system"
Category: Somatic sensory system

Play (activity)
From Wikipedia, the free encyclopedia


Child playing with bubbles

Play refers to a range of voluntary, intrinsically motivated activities that are normally associated with
pleasure and enjoyment.[1] Play may consist of amusing, pretend or imaginary interpersonal and
intrapersonal interactions or interplay. Play is evident throughout nature and is perceived in people and
animals, particularly in the cognitive development and socialization of children. Play often involves
props, animals, or toys in the context of learning and recreation. Some play has clearly defined goals and,
when structured with rules, is called a game, whereas some play exhibits no such goals or rules and is
considered "unstructured" in the literature.

Contents

• 1 Working definitions
• 2 Childhood and play
• 3 See also
• 4 References
• 5 Further reading

• 6 External links

Working definitions
As a theoretical concept, play is challenging to define. Rather than collapsing all views of this quality into
a singular definition, play may be best envisioned as describing a range of activities that may be ascribed
to humans and non-humans. In general discourse, people use the word "play" as a contrast to other parts
of their lives: sleep, eating, washing, work, rituals, etc. Different types of specialists may also use the
word "play" in different ways. Play therapists evoke the expansive definition of the term in Play Therapy
and Sandbox Play. Play is cast in the mode of Sacred Play within Transpersonal Psychology.

Sociologist David Riesman proffered that play is a quality (as distinct from an activity). Mark Twain
commented that play and work are words used to describe the same activity under different
circumstances. This viewpoint is reflected in the work of anthropologists who model a distinction
between "play" and "nonplay" in different cultures.

Playing Children, by Chinese Song Dynasty artist Su Hanchen, c. 1150 AD.

Concerted endeavor has been made to identify the qualities of play, but this task is not without its
ambiguities. For example, play is commonly defined as a frivolous and nonserious activity; yet when
watching children at play, one is impressed by the transfixed seriousness and absorption with which they
engage in it. Other criteria of play include a relaxed pace and freedom versus compulsion. Yet
play seems to have its intrinsic constraints as in, "You're not playing fair."

People at the National Institute for Play are creating a clinical, scientific framework for play. On their
website they introduce seven patterns of play (along with reference sources for each) which indicate the
huge range of types of activities and states of being which play encompasses.

James Findlay, a Social Educator, defines play as a meta intelligence, suggesting that play is behind,
together with, and changes, the various multiple intelligences we have. [1]

When play is structured and goal-oriented, it is often done as a game. Play can also be seen as the
activity of rehearsing life events, e.g. young animals play-fighting. These and other concepts or rhetorics
of play are discussed at length by Brian Sutton-Smith in the book The Ambiguity of Play. Sometimes play
is dangerous, as in extreme sports. This type of play could be considered stunt play, whether
engaging in play fighting, sky-diving, or riding a device at high speed in an unusual manner.

The seminal text in play studies is Homo Ludens by Johan Huizinga. Huizinga defined play as follows:
Summing up the formal characteristic of play, we might call it a free activity standing quite consciously outside
‘ordinary’ life as being ‘not serious’ but at the same time absorbing the player intensely and utterly. It is an activity
connected with no material interest, and no profit can be gained by it. It proceeds within its own proper boundaries
of time and space according to fixed rules and in an orderly manner. It promotes the formation of social groupings
that tend to surround themselves with secrecy and to stress the difference from the common world by disguise or
other means.

This definition of play as constituting a separate and independent sphere of human activity is sometimes
referred to as the "magic circle" notion of play, and attributed to Huizinga, who does make reference to
the term at some points in Homo Ludens. According to Huizinga, within play spaces, human behavior is
structured by very different rules: e.g. kicking (and only kicking) a ball in one direction or another, using
physical force to impede another player (in a way which might be illegal outside the context of the game).

Another classic in play theory is Man, Play and Games by Roger Caillois. Caillois borrows much of his
definition from Huizinga. Caillois coined several formal sub-categories of play, such as alea (games of
chance) and ilinx (vertigo or thrill-seeking play).

A notable contemporary play theorist is Jesper Juul who works on both pure play theory and the
application of this theory to Computer game studies. The theory of play and its relationship with rules and
game design is also extensively discussed by Katie Salen and Eric Zimmerman in their book: Rules of
Play : Game Design Fundamentals.

In computer games the word gameplay is often used to describe the concept of play. Play can also be
sexual play between two persons, e.g. flirting. In music, to "play" may mean to produce sound on a
musical instrument, including performance or solitary reproduction of a particular musical composition
through one's personal use of such an instrument or by actuating an electrical or mechanical reproduction
device.

Symbolic play uses one thing to stand for another and shows the child's ability to create mental images.
There are three types of symbolic play: dramatic play, constructive play, and playing games with rules.

Childhood and play
Play is freely chosen, intrinsically motivated and personally directed. Playing has been long recognized as
a critical aspect of Child development. Some of the earliest studies of play started in the 1890s with G.
Stanley Hall, the father of the child study movement that sparked an interest in the developmental, mental
and behavioral world of babies and children. The American Academy of Pediatrics (AAP) published a
study in 2006 entitled: "The Importance of Play in Promoting Healthy Child Development and
Maintaining Strong Parent-Child Bonds". The report states: "free and unstructured play is healthy and - in
fact - essential for helping children reach important social, emotional, and cognitive developmental
milestones as well as helping them manage stress and become resilient" [2]

Many of the most prominent researchers in the field of psychology (Jean Piaget, William James, Sigmund
Freud, Carl Jung, Lev Vygotsky, etc.) have viewed play as endemic to the human species.

Play is explicitly recognized in Article 31 of the Convention on the Rights of the Child (adopted by the
General Assembly of the United Nations, November 29, 1989), which states:

1. Parties recognize the right of the child to rest and leisure, to engage in play and recreational
activities appropriate to the age of the child and to participate freely in cultural life and the arts.
2. Parties shall respect and promote the right of the child to participate fully in cultural and artistic
life and shall encourage the provision of appropriate and equal opportunities for cultural, artistic,
recreational and leisure activities.

Childhood 'play' is also seen by Sally Jenkinson (author of The Genius of Play) to be an intimate and
integral part of childhood development. "In giving primacy to adult knowledge, to our 'grown-up' ways of
seeing the world, have we forgotten how to value other kinds of wisdom? Do we still care about the small
secret corners of children's wisdom?"[3]

Modern research in the field of 'affective neuroscience' has uncovered important links between role
playing and neurogenesis in the brain.(Panksepp, Affective Neuroscience 98). Sociologist Roger Caillois
coined the phrase ilinx to describe the momentary disruption of perception that comes from forms of
physical play that disorient the senses, especially balance.

In addition evolutionary psychologists have begun to expound the phylogenetic relationship between
higher intelligence in humans and its relationship to play.

Stevanne Auerbach mentions the role of play therapy in treating children suffering from traumas,
emotional issues, and other problems.[4] She also emphasizes the importance of toys with high play value
for child development and the role of the parent in evaluating toys and being the child's play guide.

Sudbury model democratic education schools assert that play is a big part of life at their schools, where it
is seen as serious business. They maintain that play is always serious for children, as well as for adults
who haven't forgotten how to play, and that much of the learning at these schools happens through play,
so the schools don't interfere with it. Play thus flourishes at all ages, and graduates leave these schools
knowing how to give their all to whatever they're doing, while still remembering how to laugh and enjoy
life as it comes.[5]

See also
• Play (animal behaviour)
• Imaginary friends
• Play therapy
• Play value
• Playground

References

1. ^ Garvey, C. (1990). Play. Cambridge, MA: Harvard University Press.
2. ^ Ginsburg, Clinical Report, doi:10.1542/peds.2006-2697.
3. ^ Jenkinson, Sally (2001). The Genius of Play: Celebrating the Spirit of Childhood. Melbourne: Hawthorn
Press. ISBN 1-903458-04-8.
4. ^ Auerbach, Stevanne (2004). Dr. Toy's Smart Play Smart Toys (How To Raise A Child With a High PQ
(Play Quotient)). ISBN 1-56767-652-9.
5. ^ Greenberg, D. (1987) "Play," Free at Last - The Sudbury Valley School.

Further reading

• Caillois, R. (2001). Man, play, and games. Urbana and Chicago, University of Illinois Press
(originally published in 1958; translated from the French by Meyer Barash).
• Huizinga, J. (1955). Homo ludens: a study of the play-element in culture. Boston: Beacon Press.
• Jenkinson, Sally (2001). The Genius of Play. Hawthorn Press
• Sutton-Smith, B. (1997). The ambiguity of play. Cambridge, Mass., Harvard University Press.
• The Genesis of Animal Play: Testing the Limits Gordon M. Burghardt [2]

External links
• Arquetipo Ludi (Spanish)
• IPA World Home (International Play Association: Promoting the Child's Right to Play)

Retrieved from "http://en.wikipedia.org/wiki/Play_(activity)"
Categories: Articles to be expanded with sources | Behavior | Learning | Play

Socialization
From Wikipedia, the free encyclopedia

This article is about the sociological term. For the economic term, see Nationalization.

A family posing for a group photo socializes together.

The term socialization is used by sociologists, social psychologists and educationalists to refer to the
process of learning one’s culture and how to live within it. For the individual it provides the skills and
habits necessary for acting and participating within their society. For the society, inducting all individual
members into its moral norms, attitudes, values, motives, social roles, language and symbols is the ‘means
by which social and cultural continuity are attained’ (Clausen 1968: 5).

Contents

• 1 Socialization
• 2 Agents of Socialization
o 2.1 Media and socialization
o 2.2 Total institutions
o 2.3 Gender socialization and gender roles
• 3 Resocialization
• 4 Racial Socialization
• 5 Socialization for animal species
o 5.1 Ferality
o 5.2 Cats
o 5.3 Dogs
• 6 References

• 7 See also

Socialization
Clausen claims that theories of socialization are to be found in Plato, Montaigne and Rousseau and he
identifies a dictionary entry from 1828 that defines ‘socialize’ as ‘to render social, to make fit for living in
society’ (1968: 20-1). However, it was the response to a translation of a paper by Georg Simmel that
brought the term and the idea of acquiring social norms and values into the writing of American
sociologists F. P. Giddings and E. A. Ross in the 1890s. In the 1920s the theme of socialization was taken
up by Chicago sociologists, including Ernest Burgess, and the process of learning how to be a member of
society was explored in the work of Charles Cooley, W. I. Thomas and George Herbert Mead. Clausen goes on to
track the way the concept was incorporated into various branches of psychology and anthropology (1968:
31-52).

In the middle of the twentieth century, socialization was a key idea in the dominant American
functionalist tradition of sociology. Talcott Parsons (Parsons and Bales 1956) and a group of colleagues in
the US developed a comprehensive theory of society that responded to the emergence of modernity in
which the concept of socialization was a central component. One of their interests was to try to understand
the relationship between the individual and society – a distinctive theme in US sociology since the end of
the nineteenth century. Ely Chinoy, in a 1960s standard textbook on sociology, says that socialization
serves two major functions:

On the one hand, it prepares the individual for the roles he is to play, providing him with the necessary repertoire of
habits, beliefs, and values, the appropriate patterns of emotional response and the modes of perception, the requisite
skills and knowledge. On the other hand, by communicating the contents of culture from one generation to the
other, it provides for its persistence and continuity. (Chinoy, 1961: 75)

For many reasons – not least his excessive approval of modern American life as the model social system
and his inability to see how gender, race and class divisions discriminated against individuals in ways that
were unjustifiable – Parsonian functionalism faded in popularity in the 1970s. Reacting to the
functionalist notion of socialization English sociologist Graham White, writing in 1977 said:

… it is no longer enough to focus on the malleability and passivity of the individual in the face of all powerful
social influences. Without some idea about the individual’s own activity in shaping his social experience our
perspective of socialization becomes distorted. (White 1977: 5).

During the last quarter of the twentieth century the concept of ‘socialization’ has been much less central
to debates in sociology that have shifted their focus from identifying the functions of institutions and
systems to describing the cultural changes of postmodernity. But the idea of socialization has lived on,
particularly in debates about the family and education. The institutions of the family or the school are
often blamed for their failure to socialize individuals who go on to transgress social norms. On the other
hand, it is through a critique of functionalist ideas about socialization that there has been an increasing
acceptance of a variety of family forms, of gender roles and an increasing tolerance of variations in the
ways people express their social identity.

Social norms reveal the values behind socialization. Sociologists, such as Durkheim, have noted the
relationship between norms, values and roles during socialization.

Primary socialization
Primary socialization occurs when a child learns the attitudes, values, and actions appropriate to
individuals as members of a particular culture. For example, if a child saw his/her mother expressing a
discriminatory opinion about a minority group, that child may come to think this behavior is acceptable
and could continue to hold this opinion about minority groups.

Secondary socialization
Secondary socialization refers to the process of learning what is appropriate behavior as a member of a
smaller group within the larger society. It is usually associated with teenagers and adults, and involves
smaller changes than those occurring in primary socialization, e.g. entering a new profession or
relocating to a new environment or society.

Developmental socialization
Developmental socialization is the process of learning behavior in a social institution or developing
one's social skills.

Anticipatory socialization
Anticipatory socialization refers to the processes of socialization in which a person "rehearses" for
future positions, occupations, and social relationships.

Resocialization
Resocialization refers to the process of discarding former behavior patterns and accepting new ones as
part of a transition in one's life. This occurs throughout the human life cycle (Schaefer & Lamm, 1992:
113). Resocialization can be an intense experience, with the individual experiencing a sharp break with
their past and needing to learn and be exposed to radically different norms and values. An example might
be the experience of a young man or woman leaving home to join the military.

Agents of Socialization
Agents of socialization are the people and groups that influence our self-concept, emotions, attitudes, and
behavior.

1. The Family. Family is responsible for, among other things, determining one's attitudes toward
religion and establishing career goals.
2. Education. Education is the agency responsible for socializing groups of young people in
particular skills and values in society.
3. Peer groups. Peers refer to people who are roughly the same age and/or who share other social
characteristics (e.g., students in a college class).
4. The Mass Media.
5. Other Agents: Religion, Work Place, The State.

Media and socialization

Theorists like Parsons and textbook writers like Ely Chinoy (1960) and Harry M. Johnson (1961)
recognised that socialization didn’t stop when childhood ended. They realized that socialization continued
in adulthood, but they treated it as a form of specialised education. Johnson (1961), for example, wrote
about the importance of inculcating members of the US Coastguard with a set of values to do with
responding to commands and acting in unison without question.

Later scholars accused these theorists of socialization of not recognising the importance of the mass
media which, by the middle of the twentieth century were becoming more significant as a social force.
There was concern about the link between television and the education and socialization of children – it
continues today – but when it came to adults, the mass media were regarded merely as sources of
information and entertainment rather than moulders of personality. According to these scholars, they were
wrong to overlook the importance of mass media in continuing to transmit the culture to adult members of
society.

In the middle of the twentieth century the pace of cultural change was accelerating, yet Parsons and others
wrote of culture as something stable into which children needed to be introduced but which adults could
simply live within. As members of society we need to continually refresh our ‘repertoire of habits, beliefs,
and values, the appropriate patterns of emotional response and the modes of perception, the requisite
skills and knowledge’ as Chinoy (1961: 75) put it.

Some sociologists and theorists of culture have recognised the power of mass communication as a
socialization device. Dennis McQuail recognises the argument:

… the media can teach norms and values by way of symbolic reward and punishment for different kinds of
behaviour as represented in the media. An alternative view is that it is a learning process whereby we all learn how
to behave in certain situations and the expectations which go with a given role or status in society. Thus the media
are continually offering pictures of life and models of behaviour in advance of actual experience. (McQuail 2005:
494)

Total institutions

The term "total institutions" was coined in 1963 by Erving Goffman to describe a society which is
socially isolated but still provides for all the needs of its members. Total institutions therefore have the
ability to resocialize people either voluntarily or involuntarily. For example, the following would be
considered total institutions: prisons, the military, mental hospitals and convents (Schaefer & Lamm,
1992: 113).

Goffman lists four characteristics of such institutions:

• All aspects of life are conducted in the same place and under the same single authority.
• Each phase of a member's daily activity is carried out in the immediate company of others. All
members are treated alike and all members do the same thing together.
• Daily activities are tightly scheduled. All activity is superimposed upon the individual by a system
of explicit formal rules.
• A single rational plan exists to fulfill the goals of the institution.

Gender socialization and gender roles

Henslin (1999:76) contends that "an important part of socialization is the learning of culturally defined
gender roles." Gender socialization refers to the learning of behavior and attitudes considered appropriate
for a given sex. Boys learn to be boys and girls learn to be girls. This "learning" happens by way of many
different agents of socialization. The family is certainly important in reinforcing gender roles, but so are
one’s friends, school, work and the mass media. Gender roles are reinforced through "countless subtle and
not so subtle ways" (1999:76).
Resocialization
Resocialization is a sociological concept dealing with the process of mentally and emotionally "re-
training" a person so that he or she can operate in an environment other than that which he or she is
accustomed to. Resocialization into a total institution involves a complete change of personality. Key
examples include the process of resocializing new recruits into the military so that they can operate as
soldiers (or, in other words, as members of a cohesive unit) and the reverse process, in which those who
have become accustomed to such roles return to society after military discharge.

Main article: resocialization

Racial Socialization

Racial socialization also refers to the process of learning one’s culture and how to live within it, but
refers more specifically to the socialization of minority ethnic groups. Racial socialization also buffers a
child’s awareness of racial discrimination. Perceived racial discrimination is associated with negative
mental health behaviors in adolescents such as low self esteem, depressive symptoms, psychological
distress, hopelessness, anxiety and risky behavior. Racially socialized children are aware of the presence
of racial barriers, and the oppression and injustice of racial discrimination can be actively resisted through
socialization, creating a stronger racial identity.

African American parents are more likely to racially socialize their children if they are female, are married
(compared to never married), reside in the Northeast (compared to the South), and if they reside in racially
mixed neighborhoods (compared to those residing in all-black neighborhoods). About one third of
African American parents reportedly do nothing to help their children understand what it means to be
black. This reluctance often stems from the belief that racism is no longer a social problem, that it is
too complicated for a child to understand, or that the discussion will discourage the child and/or lead
them to accept negative images of African Americans. This avoidance is problematic because it can allow
racism to go unchallenged within all ethnic groups. Additionally, African American children whose parents emphasize socialization
through cultural heritage and racial pride report greater feeling of closeness to other African Americans,
have higher racial knowledge and awareness, and have higher test scores and better grades as a whole.

Socialization for animal species
The process of intentional socialization is central to training animals to be kept by humans in close
relationship with the human environment, including pets and working dogs.

Ferality

Feral animals can be socialized with varying degrees of success. Feral children are children who lack
socially accepted communication skills. Reports of feral children, such as those cited by Kingsley Davis,
have largely been shown to be exaggerations or complete fabrications with regard to the specific lack of
particular skills, such as bipedalism.

Cats

A cat returns readily to a feral state if it has not been socialized properly in its young life.
A feral cat usually acts defensively. People often unknowingly own one and think it is merely
"unfriendly."

Socializing cats older than six months can be very difficult. It is often said that they cannot be socialized.
This is not true, but the process takes two to four years of diligent food bribes and handling, and mostly
on the cat's terms. Eventually the cat may be persuaded to be comfortable with humans and the indoor
environment.

Kittens learn to be feral either from their mothers or through bad experiences. They are more easily
socialized when under six months of age. Socializing is done by keeping them confined in a small room
(e.g. a bathroom) and handling them for three or more hours each day. There are three primary methods for
socialization, used individually or in combination. The first method is to simply hold and pet the cat, so it
learns that such activities are not uncomfortable. The second is to use food bribes. The final method is to
distract the cat with toys while handling it. The cat may then be gradually introduced to larger spaces.
It is not recommended to let the cat back outside because that may cause it to revert to its feral state. The
process of socialization often takes three weeks to three months for a kitten.

Animal shelters either foster feral kittens for socialization or euthanize them. Feral adults are usually
euthanized due to the large time commitment required, but some shelters and vets will spay or neuter and
vaccinate a feral cat and then return it to the wild.

Dogs

Socialized dogs can interact with other non-aggressive dogs of any size and shape and understand how to
communicate.

In domesticated dogs, the process of socialization begins even before the puppy's eyes open. Socialization
refers to both its ability to interact acceptably with humans and its understanding of how to communicate
successfully with other dogs. If the mother is fearful of humans or of her environment, she can pass along
this fear to her puppies. For most dogs, however, a mother who interacts well with humans is the best
teacher that the puppies can have. In addition, puppies learn how to interact with other dogs by their
interaction with their mother and with other adult dogs in the house.

A mother's attitude and tolerance of her puppies will change as they grow older and become more active.
For this reason most experts today recommend leaving puppies with their mother until at least 8 to 10
weeks of age. This gives them a chance to experience a variety of interactions with their mother, and to
observe her behavior in a range of situations.

It is critical that human interaction takes place frequently and calmly from the time the puppies are born,
from simple, gentle handling to the mere presence of humans in the vicinity of the puppies, performing
everyday tasks and activities. As the puppies grow older, socialization occurs more readily the more
frequently they are exposed to other dogs, other people, and other situations. Dogs who are well
socialized from birth, with both dogs and other species (especially people) are much less likely to be
aggressive or to suffer from fear-biting.

See also
• Acculturation
• Cultural assimilation
• Internalization
• Reciprocal socialization
• Social construction
• Social skills
• Structure and agency


Crying
For other uses, see Crying (disambiguation).

A child crying.
The term crying (pronounced [ˈkraɪɪŋ] from Middle English crien or Old French crier [1]) commonly
refers to the act of shedding tears as a response to an emotional state in humans. The act of crying has
been defined as "a complex secretomotor phenomenon characterized by the shedding of tears from the
lacrimal apparatus, without any irritation of the ocular structures".[2]

A neuronal connection has been established between the tear duct and the areas of the human brain involved
with emotion. No other animals are thought to produce tears in response to emotional states,[3] although this
is disputed by some scientists.[4]

According to a study of over 300 adults, on average, men cry once every month, and women cry at least
five times per month,[5] especially before and during the menstrual cycle, when crying can increase to up to
five times the normal rate, often without obvious reasons (such as depression or sadness).[6]

Tears produced during emotional crying have a chemical composition which differs from that of other types of
tears: they contain significantly greater quantities of the hormones prolactin, adrenocorticotropic hormone, and
Leu-enkephalin,[7] and of the elements potassium and manganese.[8]

Contents

• 1 Function
• 2 Disorders related to crying
• 3 References
• 4 Further reading

• 5 External links

Function
The question of the function or origin of emotional tears remains open. Theories range from the simple,
such as response to inflicted pain, to the more complex, including nonverbal communication in order to
elicit "helping" behaviour from others.[9]

In Hippocratic and medieval medicine, tears were associated with the bodily humours, and crying was
seen as purgation of excess humours from the brain.[10] William James thought of emotions as reflexes
prior to rational thought, believing that the physiological response, as if to stress or irritation, is a
precondition to cognitively becoming aware of emotions such as fear or anger.

William H. Frey II, a biochemist at the University of Minnesota, proposed that people feel "better" after
crying, due to the elimination of hormones associated with stress, specifically adrenocorticotropic
hormone.[11] This, paired with increased mucosal secretion during crying, could lead to a theory that
crying is a mechanism developed in humans to dispose of this stress hormone when levels grow too high.

Recent psychological theories of crying emphasize its relationship to the experience of perceived
helplessness.[12] From this perspective, an underlying experience of helplessness can usually explain why
people cry; for example, a person may cry after receiving surprisingly happy news, ostensibly because the
person feels powerless or unable to influence what is happening.

Disorders related to crying
• Bell's palsy, where faulty regeneration of the facial nerve causes sufferers to shed tears while
eating.[13]
• Cri du chat
• Familial dysautonomia, where there can be a lack of overflow tears (alacrima) during emotional
crying.[14]
• Pathological laughing and crying

References

Japanese people
This article is about the ethnic group. For the group of people holding Japanese citizenship, see
Demographics of Japan.
Not to be confused with Javanese people.

Japanese people
日本人

Shōtoku • Ieyasu • R. Hiratsuka • Akihito / Michiko
Samurai during Boshin War • Japanese family of today

Total population

About 130 million
Regions with significant populations
Japan 127 million
Significant Nikkei populations in:
Brazil 1,400,000 [16]
United States 1,200,000 [17]
Philippines 222,000 [18]
China (PRC) 115,000 [19]
Canada 85,000 [20]
Peru 81,000 [21]
United Kingdom 51,000 [22]
Argentina 30,000 [23]
Australia 27,000 [24]
Singapore 23,000 [25]
Mexico 20,000 [26]
Taiwan (ROC) 16,000 [27]
South Korea 15,000 [28]
Languages
Japanese · Ryukyuan · Ainu
Religion
Cultural Shinto and Buddhism


The Japanese people (日本人 nihonjin, nipponjin?) are the predominant ethnic group of Japan.[1][2][3][4][5]
Worldwide, approximately 130 million people are of Japanese descent; of these, approximately 127
million are residents of Japan. People of Japanese ancestry who live in other countries are referred to as
nikkeijin (日系人?). The term "Japanese people" may also be used in some contexts to refer to a locus of
ethnic groups including the Yamato people, Ainu people, and Ryukyuans.

Contents

• 1 Culture
o 1.1 Language
o 1.2 Religion
o 1.3 Literature
o 1.4 Arts
• 2 Origins
o 2.1 Paleolithic era
o 2.2 Jōmon and Ainu people
o 2.3 Yayoi people
o 2.4 Controversy
• 3 Japanese colonialism
• 4 Japanese diaspora
• 5 See also
• 6 References

• 7 External links

Culture

Language

Main article: Japanese language

The Japanese language is a Japonic language that is usually treated as a language isolate, although it is
related to the Okinawan (Ryukyuan) language, and both have been suggested to belong to the disputed
Altaic language family. The Japanese language has a tripartite writing system based upon Chinese
characters. Japanese people in Japan primarily use Japanese for daily interaction. The adult literacy rate
in Japan exceeds 99%.[6]

Religion

Main article: Religion in Japan

Japanese religion has traditionally been syncretic in nature, combining elements of Buddhism and Shinto.
Shinto, a polytheistic religion with no book of religious canon, is Japan's native folk religion. Shinto was
one of the traditional grounds for the right to the throne of the Japanese imperial family, and was codified
as the state religion in 1868 (State Shinto was abolished by the American occupation in 1945). Mahayana
Buddhism came to Japan in the sixth century and evolved into many different sects. Today the largest
form of Buddhism among Japanese people is the Jodo Shinshu sect founded by Shinran.

Most Japanese people (84% to 96%)[7] profess to believe in both Shinto and Buddhism. The Japanese
people's religious concerns are mostly directed towards mythology, traditions, and neighborhood activities
rather than toward religion as the single source of moral guidance for one's life. Confucianism or Taoism is sometimes
considered the basis for morality.[citation needed]

Literature

Main article: Japanese literature

Bisque doll of Momotarō,
a character from Japanese literature and folklore.

Certain genres of writing originated in and are often associated with Japanese society. These include the
haiku, tanka, and the I-novel, although modern writers generally avoid these writing styles. Historically,
many works have sought to capture or codify traditional Japanese cultural values and aesthetics. Some of
the most famous of these include Murasaki Shikibu's The Tale of Genji (1021), about Heian court culture;
Miyamoto Musashi's The Book of Five Rings (1645), concerning military strategy; Matsuo Bashō's Oku
no Hosomichi (1691), a travelogue; and Jun'ichirō Tanizaki's essay "In Praise of Shadows" (1933), which
contrasts Eastern and Western cultures.

Following the opening of Japan to the West in 1854, some works of this style were written in English by
natives of Japan; they include Bushido: The Soul of Japan by Nitobe Inazo (1900), concerning samurai
ethics, and The Book of Tea by Okakura Kakuzo (1906), which deals with the philosophical implications
of the Japanese tea ceremony. Western observers have often attempted to evaluate Japanese society as
well, to varying degrees of success; one of the most well-known and controversial works resulting from
this is Ruth Benedict's The Chrysanthemum and the Sword (1946).

Twentieth-century Japanese writers recorded changes in Japanese society through their works. Some of
the most notable authors included Natsume Sōseki, Jun'ichirō Tanizaki, Osamu Dazai, Yasunari
Kawabata, Fumiko Enchi, Yukio Mishima, and Ryotaro Shiba. In contemporary Japan, popular authors
such as Ryu Murakami, Haruki Murakami, and Banana Yoshimoto are highly regarded.

Arts

Main articles: Japanese art and Japanese architecture

Decorative arts in Japan date back to prehistoric times. Jōmon pottery includes examples with elaborate
ornamentation. In the Yayoi period, artisans produced mirrors, spears, and ceremonial bells known as
dōtaku. Later burial mounds, or kofun, preserve characteristic clay haniwa, as well as wall paintings.

Beginning in the Nara period, painting, calligraphy, and sculpture flourished under strong Confucian and
Buddhist influences from Korea and China. Among the architectural achievements of this period are the
Hōryū-ji and the Yakushi-ji, two Buddhist temples in Nara Prefecture. After the cessation of official
relations with the Tang dynasty in the ninth century, Japanese art and architecture gradually became less
influenced by China. Extravagant art and clothing were commissioned by nobles to decorate their court
life, and although the aristocracy was quite limited in size and power, many of these pieces are still extant.
After the Todai-ji was attacked and burned during the Gempei War, a special office of restoration was
founded, and the Todai-ji became an important artistic center. The leading masters of the time were Unkei
and Kaikei.

Painting advanced in the Muromachi period in the form of ink and wash painting under the influence of
Zen Buddhism as practiced by such masters as Sesshū Tōyō. Zen Buddhist tenets were also elaborated
into the tea ceremony during the Sengoku period. During the Edo period, the polychrome painting screens
of the Kano school became influential thanks to their powerful patrons (including the Tokugawas).
Popular artists created ukiyo-e, woodblock prints for sale to commoners in the flourishing cities. Pottery
such as Imari ware was highly valued as far away as Europe.

In theater, Noh is a traditional, spare dramatic form that developed in tandem with kyogen farce. In stark
contrast to the restrained refinement of noh, kabuki, an "explosion of color," uses every possible stage
trick for dramatic effect. Plays include sensational events such as suicides, and many such works were
performed in both kabuki and bunraku puppet theaters.

Since the Meiji Restoration, Japan has absorbed elements of Western culture. Its modern decorative,
practical and performing arts works span a spectrum ranging from the traditions of Japan to purely
Western modes. Products of popular culture, including J-pop, manga, and anime have found audiences
around the world.

Origins
Japan at the height of the last glaciation about 20,000 years ago
See also: History of Japan

A recent study by Michael F. Hammer has shown that the Japanese population exhibits genetic similarity to a variety of populations in Asia.[8]
This and other genetic studies have also claimed that Y-chromosome patrilines crossed from Asian
mainland into the Japanese Archipelago, where they currently comprise a significant fraction of the extant
male lineages of the Japanese population.[9] These patrilines seem to have experienced extensive genetic
admixture with the long-established Jōmon period populations of Japan.[8]

A recent study of the origins of the Japanese people is based on the "dual structure model" proposed by
Hanihara in 1991.[10] He concluded that modern Japanese lineages consist of the original Jōmon people
and immigrants from the Yayoi period. The Jōmon people originated in southeast Asia, moving to the
Japanese Archipelago in the Palaeolithic period. In the past several decades, the Japanese people have been
proposed to be related to the Yi, Hani and Dai peoples on the basis of folk customs and genetic evidence.[11]

Another southeast Asian group[clarification needed] moved to northeastern Asia. The population of this group
increased in the Neolithic period and some moved to the archipelago during the Yayoi period.
Intermixing prevailed in the Kyūshū, Shikoku and Honshū islands but not in Okinawa and Hokkaido,
which are respectively represented by the Ryukyuan and Ainu people. This theory is based on the study of the
development of human bones and teeth. A comparison of mitochondrial DNA between Jōmon people
and medieval Ainu also supports the theory.

Masatoshi Nei opposed the "dual structure model", arguing that genetic distance data show that the
Japanese originated in northeast Asia, moving to Japan perhaps more than thirty thousand years ago.[12]

Population change in the ancient period has also been studied. The population in the late Jōmon
period is estimated at about one hundred thousand, compared to about three million in the Nara
period. Taking the growth rates of hunting and agricultural societies into account, it has been calculated
that about one and a half million immigrants moved to Japan in the intervening period. This figure is
thought to be an overestimate and is being recalculated today[citation needed].

Paleolithic era

Archaeological evidence indicates that Stone Age people lived in the Japanese Archipelago during the
Paleolithic period between 39,000 and 21,000 years ago.[13][14] Japan was then connected to mainland Asia
by at least one land bridge, and nomadic hunter-gatherers crossed to Japan from East Asia, Siberia, and
possibly Kamchatka. Flint tools and bone implements of this era have been excavated in Japan.[15]

Jōmon and Ainu people
Incipient Jōmon pottery

The world's oldest known pottery was developed by the Jōmon people in the Upper Paleolithic period,
14th millennium BCE. The name, "Jōmon" (縄文 Jōmon), which means "cord-impressed pattern", comes
from the characteristic markings found on the pottery. The Jōmon people were Mesolithic hunter-
gatherers, though at least one middle to late Jōmon site (Minami Mosote (南溝手?), ca. 1200-1000 BCE)
had a primitive rice-growing agriculture. They relied primarily on fish for protein. It is believed that the
Jōmon had very likely migrated from North Asia or Central Asia and became the Ainu of today. Research
suggests that the Ainu retain a certain degree of uniqueness in their genetic make-up, while having some
affinities with different regional populations in Japan as well as the Nivkhs of the Russian Far East. Based
on more than a dozen genetic markers on a variety of chromosomes and from archaeological data showing
habitation of the Japanese Archipelago dating back 30,000 years, it is argued that the Jōmon actually
came from northeastern Asia and settled on the islands far earlier than some have proposed.[16]

Yayoi people

Around 400-300 BCE, the Yayoi people began to enter the Japanese islands, intermingling with the
Jōmon. Most modern scholars say that the Yayoi emigrated from southern China.[17] The Yayoi brought wet-
rice farming and advanced bronze and iron technology to Japan. Although the islands were already
abundant with resources for hunting and dry-rice farming, Yayoi farmers created more productive wet-
rice paddy field systems. This allowed the communities to support larger populations and spread over
time, in turn becoming the basis for more advanced institutions and heralding the new civilization of the
succeeding Kofun Period. In recent years, more archaeological and genetic evidence have been found in
both eastern China and western Japan to lend credibility to this argument. Between 1996 and 1999, a team
led by Satoshi Yamaguchi, a researcher at Japan's National Science Museum, compared Yayoi remains
found in Japan's Yamaguchi and Fukuoka prefectures with those from early Han Dynasty (202 BC-8) in
China's coastal Jiangsu province, and found many similarities between the skulls and limbs of Yayoi
people and the Jiangsu remains. Two Jiangsu skulls showed spots where the front teeth had been pulled, a
practice common in Japan in the Yayoi and preceding Jōmon period. The genetic samples from three of
the 36 Jiangsu skeletons also matched part of the DNA base arrangements of samples from the Yayoi
remains.

Controversy

Currently, the most widely accepted theory is that present-day Japanese are descendants of both the
indigenous Jōmon people and the immigrant Yayoi people. The origins of the Jōmon and Yayoi peoples
have often been a subject of dispute. It is now widely accepted that the Jōmon people were very
similar to the modern Ainu of northern Japan, that their migration path may have run from southwestern
China through Mongolia to today's southeastern Russia and then to northeastern Japan, and that they have
lived in Japan since the last glacial age. They brought with them the origins of Japanese culture and
religion. Han Chinese and ethnic Korean groups are thought to be the origin of the Yayoi, who entered
Japan from the southwest and brought with them a more advanced civilization than that of the native
Jōmon. The Jōmon-descended people of Hokkaidō are often observed to look less "Asian" than most
Japanese people, including the royal family, and both Japanese and non-Japanese academics predominantly
believe that the Japanese are descended from both the Yayoi, who emigrated from the Korean peninsula,
and the long-established native Jōmon people, with whom the Yayoi intermarried. A clear consensus has
not been reached.[18]

Japanese colonialism
See also: Greater East Asia Co-Prosperity Sphere

Location Map of Japan

During the Japanese colonial period of 1867 to 1945, the phrase "Japanese people" was used to refer not
only to residents of the Japanese archipelago, but also to people from occupied territories who held
Japanese citizenship, such as Taiwanese people and Korean people. The official term used to refer to
ethnic Japanese during this period was "inland people" (内地人 naichijin?). Such linguistic distinctions
facilitated forced assimilation of colonized ethnic identities into a single Imperial Japanese identity.[19]

After World War II, many Nivkh people and Orok people from southern Sakhalin who held Japanese
citizenship were forced to repatriate to Hokkaidō by the Soviet Union. However, many Sakhalin Koreans
who had held Japanese citizenship until the end of the war were left stateless by the Soviet occupation.[20]

Japanese diaspora
See also: Japanese diaspora

The term nikkeijin (日系人?) is used to refer to Japanese people who either emigrated from Japan or are
descendants of a person who emigrated from Japan. The usage of this term excludes Japanese citizens
who are living abroad, but includes all descendants of nikkeijin who lack Japanese citizenship regardless
of their place of birth.

Emigration from Japan was recorded as early as the 12th century to the Philippines, but did not become a
mass phenomenon until the Meiji Era, when Japanese began to go to the United States, Canada, Peru,
Brazil and Argentina. There was also significant emigration to the territories of the Empire of Japan
during the colonial period; however, most such emigrants repatriated to Japan after the end of World War
II in Asia.[20]

According to the Association of Nikkei and Japanese Abroad, there are about 2.5 million nikkeijin living
in their adopted countries. The largest of these foreign communities are in the Brazilian states of São
Paulo and Paraná.[citation needed] There are also significant cohesive Japanese communities in the Philippines,
Peru, Argentina and in the American states of Hawaiʻi, California and Washington. Separately, the
number of Japanese citizens living abroad is over one million according to the Ministry of Foreign
Affairs.[citation needed] There is also a small group of Japanese descendants living in Caribbean countries such
as Cuba and the Dominican Republic where hundreds of these immigrants were brought in by Rafael L.
Trujillo in the 1930s.
See also
• Ethnic issues in Japan
• Foreign-born Japanese
• Japantown
• List of Japanese people
• Nihonjinron
• Demographics of Japan
o Ainu people
o Burakumin
o Dekasegi
o Ryukyuan people
o Yamato people

References

Chinese language


Chinese
汉语/漢語 Hànyǔ (Spoken),
中文 Zhōngwén (Written)

Spoken in: People's Republic of China (commonly known as "China"),
Republic of China (commonly known as "Taiwan"), Hong
Kong, Singapore, Malaysia, Macau, the Philippines, Australia,
Indonesia, Mauritius, Peru, Canada, the United States of
America, and other regions with Chinese communities

Region: (majorities): East Asia
(minorities): Southeast Asia, and other regions with Chinese
communities

Total speakers: approx 1.176 billion

Ranking: Chinese, all: 1

Mandarin: 1
Wu: 12
Cantonese: 18
Min: 22
Hakka: 33
Gan: 42
Language family: Sino-Tibetan, Chinese
Writing system: Chinese characters, Zhuyin fuhao
Official status
Official language in: United Nations
People's Republic of China
• Hong Kong
• Macau
Republic of China
Singapore

Recognized as a regional language in:
Mauritius
Canada (official status in the city of Vancouver, British Columbia)

Regulated by: In the PRC: National Language Regulating Committee[1]
In the ROC: National Languages Committee
In Singapore: Promote Mandarin Council/Speak Mandarin
Campaign[2]
Language codes
ISO 639-1: zh
ISO 639-2: chi (B) zho (T)

ISO 639-3: variously:
zho – Chinese (generic)
cdo – Min Dong
cjy – Jinyu
cmn – Mandarin
cpx – Pu Xian
czh – Huizhou
czo – Min Zhong
gan – Gan
hak – Hakka
hsn – Xiang
mnp – Min Bei
nan – Min Nan
wuu – Wu
yue – Cantonese
Note: This page may contain IPA phonetic symbols in Unicode.

Chinese or the Sinitic language(s) (汉语/漢語, pinyin: Hànyǔ; 华语/華語, Huáyǔ; or 中文, Zhōngwén)
can be considered a language or language family.[3] Originally the indigenous languages spoken by the
Han Chinese in China, it forms one of the two branches of the Sino-Tibetan family of languages. About one-
fifth of the world’s population, or over one billion people, speak some form of Chinese as their native
language. The identification of the varieties of Chinese as "languages" or "dialects" is controversial.[4]

Spoken Chinese is distinguished by its high level of internal diversity, though all spoken varieties of
Chinese are tonal and analytic. There are between six and twelve main regional groups of Chinese
(depending on classification scheme), of which the most spoken, by far, is Mandarin (about 850 million),
followed by Wu (90 million), Min (70 million) and Cantonese (70 million). Most of these groups are
mutually unintelligible, though some, like Xiang and the Southwest Mandarin dialects, may share
common terms and some degree of intelligibility. Chinese is classified as a macrolanguage with 13 sub-
languages in ISO 639-3, though the identification of the varieties of Chinese as multiple "languages" or as
"dialects" of a single language is a contentious issue.
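The macrolanguage relationship described above can be illustrated in code. The following is a minimal sketch, using the ISO 639-3 assignments listed in this article's infobox; the names `CHINESE_VARIETIES` and `resolve` are illustrative, not part of any real library's API:

```python
# ISO 639-3 treats Chinese as a macrolanguage: the generic code "zho"
# covers a set of thirteen individual sub-language codes.
CHINESE_MACROLANGUAGE = "zho"

# Individual codes -> variety names, as listed in the infobox above.
CHINESE_VARIETIES = {
    "cdo": "Min Dong",
    "cjy": "Jinyu",
    "cmn": "Mandarin",
    "cpx": "Pu Xian",
    "czh": "Huizhou",
    "czo": "Min Zhong",
    "gan": "Gan",
    "hak": "Hakka",
    "hsn": "Xiang",
    "mnp": "Min Bei",
    "nan": "Min Nan",
    "wuu": "Wu",
    "yue": "Cantonese",
}

def resolve(code: str) -> str:
    """Return a human-readable name for an ISO 639-3 Chinese code."""
    if code == CHINESE_MACROLANGUAGE:
        return "Chinese (generic)"
    return CHINESE_VARIETIES.get(code, "unknown")
```

For example, `resolve("cmn")` yields "Mandarin", while the macrolanguage code `zho` resolves to the generic label, reflecting the fact that ISO 639-3 records no single "Chinese" variety beneath it.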

The standardized form of spoken Chinese is Standard Mandarin (Putonghua / Guoyu / Huayu), based on
the Beijing dialect. It is part of a larger group of North-Eastern and South-Western dialects, often taken
together as a separate language (see Mandarin Chinese for more); this larger language can be referred to
as 官话 Guānhuà or 北方话 Běifānghuà in Chinese. Standard Mandarin is the official language of the People's
Republic of China and the Republic of China (commonly known as 'Taiwan'), as well as one of four
official languages of Singapore. Chinese—de facto, Standard Mandarin—is one of the six official
languages of the United Nations. Of the other varieties, Standard Cantonese is common and influential in
Cantonese-speaking overseas communities, and remains one of the official languages of Hong Kong
(together with English) and of Macau (together with Portuguese). Min Nan, part of the Min language
group, is widely spoken in southern Fujian, in neighbouring Taiwan (where it is known as Taiwanese or
Hoklo) and in Southeast Asia (where it dominates in Singapore and Malaysia and is known as Hokkien).

According to news reports in March 2007, 86 percent of people in the People's Republic of China speak a
variant of spoken Chinese.[5] As a language family, the number of Chinese speakers is 1.136 billion. The
same news report indicates 53 percent of the population, or 700 million speakers, can effectively
communicate in Putonghua.

Contents

• 1 Spoken Chinese
o 1.1 Standard Mandarin and diglossia
o 1.2 Linguistics
o 1.3 Language and nationality
• 2 Written Chinese
o 2.1 Chinese characters
• 3 History and evolution
• 4 Influences on other languages
• 5 Phonology
• 6 Phonetic transcriptions
o 6.1 Romanization
o 6.2 Other phonetic transcriptions
• 7 Grammar and morphology
o 7.1 Tones and homophones
• 8 Vocabulary
• 9 New words
o 9.1 Modern borrowings and loanwords
• 10 Learning Chinese
• 11 See also
• 12 References
• 13 Footnotes
• 14 External links
o 14.1 Dictionaries

o 14.2 Learning

Spoken Chinese
Main article: Spoken Chinese

The linguistic subdivisions ("languages" or "dialect groups") within China fall into seven
traditionally recognized main groups, listed here in order of population size:

Name            | Hanyu Pinyin         | Trad.         | Simp.         | Total Speakers | Notes
Mandarin        | Běifānghuà / Guānhuà | 北方話 / 官話 | 北方话 / 官话 | c. 850 million | includes Standard Mandarin
Wu              | Wúyǔ                 | 吳語          | 吴语          | c. 90 million  | includes Shanghainese
Yue (Cantonese) | Yuèyǔ                | 粵語          | 粤语          | c. 80 million  | includes Standard Cantonese
Min             | Mǐnyǔ                | 閩語          | 闽语          | c. 50 million  | includes Taiwanese
Xiang           | Xiāngyǔ / Húnánhuà   | 湘語 / 湖南話 | 湘语 / 湖南话 | c. 35 million  |
Hakka           | Kèjiāhuà / Kèhuà     | 客家話 / 客話 | 客家话 / 客话 | c. 35 million  |
Gan             | Gànyǔ / Jiāngxīhuà   | 贛語 / 江西話 | 赣语 / 江西话 | c. 20 million  |

Chinese linguists have recently distinguished:

Name    | Hanyu Pinyin | Trad.  | Simp.  | Notes
Jin     | Jìnyǔ        | 晉語   | 晋语   | from Mandarin
Huizhou | Huīzhōuhuà   | 徽州話 | 徽州话 | from Wu
Ping    | Pínghuà      | 平話   | 平话   | partly from Yue

There are also many smaller groups that are not yet classified, such as: Danzhou dialect, spoken in
Danzhou, on Hainan Island; Xianghua (乡话), not to be confused with Xiang (湘), spoken in western
Hunan; and Shaozhou Tuhua, spoken in northern Guangdong. The Dungan language, spoken in Central
Asia, is very closely related to Mandarin. However, it is not generally considered "Chinese" since it is
written in Cyrillic and spoken by Dungan people outside China who are not considered ethnic Chinese.
See List of Chinese dialects for a comprehensive listing of individual dialects within these large, broad
groupings.

In general, the above language-dialect groups do not have sharp boundaries, though Mandarin is the
predominant Sinitic language in the North and the Southwest, and the rest are mostly spoken in Central or
Southeastern China. Frequently, as in Guangdong province, native speakers of major variants overlap. As in
many areas of long-standing linguistic diversity, it is not always clear how the speech varieties of
different parts of China should be classified. Ethnologue lists a total of 14 Sinitic languages, but the
number varies between seven and seventeen depending on the classification scheme followed. For instance,
the Min variety is often divided into Northern Min (Minbei, Fuchow) and Southern Min (Minnan,
Amoy-Swatow); linguists have not determined whether their mutual intelligibility is low enough to classify
them as separate languages.

The varieties of spoken Chinese in China and Taiwan

In general, mountainous South China displays more linguistic diversity than the flat North China Plain. In
parts of South China, a major city's dialect may be only marginally intelligible to close neighbours. For
instance, Wuzhou is about 120 miles upstream from Guangzhou, but its dialect is more like the Standard
Cantonese spoken in Guangzhou than is that of Taishan, 60 miles southwest of Guangzhou and separated from
it by several rivers (Ramsey, 1987).

Standard Mandarin and diglossia

Main article: Standard Mandarin

Putonghua / Guoyu, often called "Mandarin", is the official standard language used by the People's
Republic of China, the Republic of China, and Singapore (where it is called "Huayu"). It is based on the
Beijing dialect of Mandarin, and the governments intend for speakers of all Chinese speech varieties to
use it as a common language of communication. It is therefore used in government agencies, in the media,
and as a language of instruction in schools.

In mainland China and Taiwan, diglossia has been a common feature: it is common for Chinese speakers to
use two or even three varieties of the Sinitic languages (or "dialects") alongside Standard Mandarin. For
example, in addition to putonghua a resident of Shanghai might speak Shanghainese and, if they did not
grow up there, their local dialect as well; a native of Guangzhou may speak both Standard Cantonese and
putonghua; and a resident of Taiwan may speak both Taiwanese and putonghua/guoyu. A person living in
Taiwan may commonly mix pronunciations, phrases, and words from Standard Mandarin and Taiwanese, and this
mixture is considered socially appropriate in many circumstances. In Hong Kong, Standard Mandarin is
beginning to take its place beside English and Standard Cantonese, the official languages.

Linguistics

Main article: Identification of the varieties of Chinese
Linguists often view Chinese as a language family, though owing to China's socio-political and cultural
situation, and the fact that all spoken varieties use one common written system, it is customary to refer to
these generally mutually unintelligible variants as "the Chinese language". The diversity of Sinitic
variants is comparable to the Romance languages.

From a purely descriptive point of view, "languages" and "dialects" are simply arbitrary groups of similar
idiolects, and the distinction is irrelevant to linguists concerned only with describing regional speech
varieties technically. However, the idea of a single language carries major overtones in politics and
cultural self-identity, which explains the emotion surrounding this issue. Most Chinese and Chinese
linguists refer to Chinese as a single language and its subdivisions as dialects, while others call
Chinese a language family.

Chinese itself has a term for its unified writing system, Zhongwen (中文), while the closest equivalent
used to describe its spoken variants is Hanyu (汉语, "spoken language[s] of the Han Chinese"); this term
could be translated as either "language" or "languages", since Chinese does not mark grammatical number.
As the two separate morphemes 语 yu and 文 wen indicate, Chinese has much less need for a uniform
speech-and-writing continuum than many other languages. Ethnic Chinese often consider these spoken
variations one single language for reasons of nationality, and because they inherit a common cultural and
linguistic heritage in Classical Chinese. Han native speakers of Wu, Min, Hakka, and Cantonese, for
instance, may consider their own linguistic varieties separate spoken languages, but the Han Chinese as
one – albeit internally very diverse – ethnicity. To Chinese nationalists, the idea of Chinese as a
language family may suggest that Chinese identity is more fragmentary and disunified than it actually is,
and as such is often looked upon as culturally and politically provocative. In Taiwan, the idea is
additionally closely associated with Taiwanese independence, some of whose supporters promote the local
Taiwanese Minnan-based spoken language.

Within the People's Republic of China and Singapore, it is common for the government to refer to all
divisions of the Sinitic language(s) besides Standard Mandarin as fangyan ("regional tongues", often
translated as "dialects"). Modern-day Chinese speakers of all kinds communicate using one formal standard
written language, although this modern written standard is modeled after Mandarin, generally the modern
Beijing dialect.

Language and nationality

The term sinophone, coined in analogy to anglophone and francophone, refers to those who speak the
Chinese language natively, or prefer it as a medium of communication. The term is derived from Sinae,
the Latin word for ancient China.

Written Chinese
Main article: Chinese written language
See also: Classical Chinese and Vernacular Chinese

The relationship between the spoken and written Chinese languages is complex. The spoken varieties have
evolved at different rates, while written Chinese has changed much less. Classical Chinese literature
began in the Spring and Autumn period, although written records have been discovered as far back as the
Shang dynasty oracle bones of the 14th to 11th centuries BCE, written in the oracle bone script.

Chinese orthography centers on Chinese characters, hanzi, which are written within imaginary rectangular
blocks, traditionally arranged in vertical columns, read from top to bottom down a column, and right to
left across columns. Chinese characters are morphemes independent of phonetic change. Thus the number
"one" – yi in Mandarin, yat in Cantonese, and chi̍t or it in Hokkien (a form of Min) – is written with an
identical character ("一") in all of them. Vocabularies of the major Chinese variants have diverged, and
colloquial non-standard written Chinese often makes use of unique "dialectal characters", such as 冇
and 係 for Cantonese and Hakka, which are considered archaic or unused in standard written Chinese.

Written colloquial Cantonese has become quite popular in online chat rooms and instant messaging among
Hong Kongers and Cantonese speakers elsewhere. Its use is considered highly informal and does not extend
to formal occasions.

Also, in Hunan, some women write their local language in Nü Shu, a syllabary derived from Chinese
characters. The Dungan language, considered by some a dialect of Mandarin, is also nowadays written in
Cyrillic, and was formerly written in the Arabic alphabet, although the Dungan people live outside China.

Chinese characters

Main article: Chinese character

Chinese characters evolved over time from early pictographic forms. The idea that all Chinese characters
are either pictographs or ideographs is erroneous: most characters contain phonetic parts and are
composites of phonetic components and semantic radicals. Only the simplest characters, such as ren 人
(human), ri 日 (sun), shan 山 (mountain), and shui 水 (water), may be wholly pictorial in origin. In
100 CE, the famed Hàn dynasty scholar Xǔ Shèn classified characters into six categories: pictographs,
simple ideographs, compound ideographs, phonetic loans, phonetic compounds, and derivative characters. Of
these, only 4% were categorized as pictographs, and 80-90% as phonetic complexes consisting of a semantic
element that indicates meaning and a phonetic element that arguably once indicated the pronunciation.
There are about 214 radicals recognized in the Kangxi Dictionary.

Modern characters are styled after the standard script (楷书/楷書 kǎishū) (see styles, below). Various
other written styles are also used in East Asian calligraphy, including seal script (篆书/篆書 zhuànshū),
cursive script (草书/草書 cǎoshū) and clerical script (隶书/隸書 lìshū). Calligraphy artists can write in
traditional and simplified characters, but tend to use traditional characters for traditional art.

Various styles of Chinese calligraphy.

There are currently two systems for Chinese characters. The traditional system, still used in Hong Kong,
Taiwan, Macau and Chinese-speaking communities outside mainland China (except Singapore and Malaysia),
takes its form from standardized character forms dating back to the late Han dynasty. The simplified
Chinese character system, developed by the People's Republic of China in 1954 to promote mass literacy,
simplifies most complex traditional glyphs to fewer strokes, many to common caoshu shorthand variants.

Singapore, which has a large Chinese community, is the first – and at present the only – foreign nation to
officially adopt simplified characters, although they have also become the de facto standard for younger
ethnic Chinese in Malaysia. The Internet provides a platform for practicing reading the alternative
system, be it traditional or simplified.

A well-educated Chinese reader today recognizes approximately 6,000-7,000 characters; some 3,000
characters are required to read a mainland newspaper. The PRC government defines literacy amongst workers
as knowledge of 2,000 characters, though this would be only functional literacy. A large unabridged
dictionary, like the Kangxi Dictionary, contains over 40,000 characters, including obscure, variant and
archaic forms; fewer than a quarter of these are now in common use.

History and evolution
Most linguists classify all varieties of modern spoken Chinese as part of the Sino-Tibetan language family
and believe that there was an original language, termed Proto-Sino-Tibetan, from which the Sinitic and
Tibeto-Burman languages descended. The relation between Chinese and other Sino-Tibetan languages is
an area of active research, as is the attempt to reconstruct Proto-Sino-Tibetan. The main difficulty in
this effort is that, while there is enough documentation to reconstruct the ancient sounds of Chinese,
there is no written documentation of the point at which Chinese split from Proto-Sino-Tibetan. In
addition, many of the older languages that would allow reconstruction of Proto-Sino-Tibetan are very
poorly understood, and many of the techniques developed for tracing the descent of the Indo-European
languages from PIE do not apply to Chinese, because of its "morphological paucity", especially after Old
Chinese.[6]

Categorization of the development of Chinese is a subject of scholarly debate. One of the first systems
was devised by the Swedish linguist Bernhard Karlgren in the early 1900s; most present systems rely
heavily on Karlgren's insights and methods.

Old Chinese (T:上古漢語; S:上古汉语; P:Shànggǔ Hànyǔ), sometimes known as "Archaic Chinese", was the
common language during the early and middle Zhōu dynasty (1122 BCE - 256 BCE); texts from this period
include inscriptions on bronze artifacts, the poetry of the Shījīng, the history of the Shūjīng, and
portions of the Yìjīng (I Ching). The phonetic elements found in the majority of Chinese characters
provide hints to their Old Chinese pronunciations, and the pronunciations of Chinese characters borrowed
into Japanese, Vietnamese and Korean also provide valuable insights. Old Chinese was not wholly
uninflected. It possessed a rich sound system in which aspiration or rough breathing differentiated the
consonants, but it was probably still without tones. Work on reconstructing Old Chinese started with Qīng
dynasty philologists. Some early Indo-European loanwords in Chinese have been proposed, notably 蜜 mì
"honey", 獅 shī "lion," and perhaps also 馬 mǎ "horse", 犬 quǎn "dog", and 鵝 é "goose".[7]

Middle Chinese (T:中古漢語; S:中古汉语; P:Zhōnggǔ Hànyǔ) was the language used during the Suí, Táng, and
Sòng dynasties (6th through 10th centuries CE). It can be divided into an early period, reflected by the
切韻 Qièyùn rime dictionary (601 CE), and a late period in the 10th century, reflected by the 廣韻
Guǎngyùn rime dictionary. Linguists are fairly confident in their reconstruction of how Middle Chinese
sounded. The evidence for its pronunciation comes from several sources: modern dialect variation, rhyming
dictionaries, foreign transliterations, "rhyme tables" constructed by ancient Chinese philologists to
summarize the phonetic system, and Chinese phonetic translations of foreign words. However, all
reconstructions are tentative; some scholars have argued that trying to reconstruct, say, modern Cantonese
from modern Cantopop rhymes would give a fairly inaccurate picture of the present-day spoken language.

The development of the spoken Chinese languages from early historical times to the present has been
complex. Most Chinese people, in Sìchuān and in a broad arc from the northeast (Manchuria) to the
southwest (Yunnan), use various Mandarin dialects as their home language. The prevalence of Mandarin
throughout northern China is largely due to north China's plains. By contrast, the mountains and rivers of
middle and southern China promoted linguistic diversity.

Until the mid-20th century, most southern Chinese spoke only their native local variety of Chinese. As
Nanjing was the capital during the early Ming dynasty, Nanjing Mandarin became dominant at least until the
later years of the officially Manchu-speaking Qing Empire. From the 17th century, the Empire set up
orthoepy academies (T:正音書院; S:正音书院; P:Zhèngyīn Shūyuàn) to make pronunciation conform to the
standard of the Qing capital Beijing, but with little success. During the Qing's last 50 years, in the
late 19th century, Beijing Mandarin finally replaced Nanjing Mandarin in the imperial court. For the
general population, though, no single standard of Mandarin existed, and non-Mandarin speakers in southern
China continued to use their various languages in every aspect of life. The new Beijing Mandarin court
standard was used solely by officials and civil servants and was thus fairly limited.

This situation did not change until the mid-20th century, with the creation (in both the PRC and the ROC,
but not in Hong Kong) of a compulsory educational system committed to teaching Standard Mandarin. As a
result, Mandarin is now spoken by virtually all young and middle-aged citizens of mainland China and
Taiwan. Standard Cantonese, not Mandarin, was used in Hong Kong during its British colonial period (owing
to its large Cantonese native and migrant populace) and remains the official language of education, formal
speech, and daily life there, but Mandarin has become increasingly influential since the 1997 handover.

For centuries before the rise of European influence in the 19th century, Chinese was the lingua franca of
East Asian countries.

Influences on other languages
Throughout history, Chinese culture and politics have had a great influence on unrelated languages such as
Korean and Japanese. Korean and Japanese both have writing systems employing Chinese characters (hanzi),
called Hanja and Kanji respectively.

The Vietnamese term for Chinese writing is Hán tự. It was the only available method for writing Vietnamese
until the 14th century and was used almost exclusively by Chinese-educated Vietnamese élites. From the
14th to the late 19th century, Vietnamese was written with Chữ nôm, a modified Chinese script
incorporating sounds and syllables for native Vietnamese speakers. Chữ nôm was completely replaced by a
modified Latin script created by the Jesuit missionary priest Alexandre de Rhodes, which incorporates a
system of diacritical marks to indicate tones, as well as modified consonants. The Vietnamese language
exhibits multiple elements similar to Cantonese, such as specific intonations and sharp consonant endings.
There is also a slight influence from Mandarin, including the sharper vowels and the "kh" (IPA: x) sound
missing from other Asiatic languages.

In South Korea, the Hangul alphabet is generally used, with Hanja employed as a sort of boldface. In North
Korea, the use of Hanja has been discontinued. Since the modernization of Japan in the late 19th century,
there has been debate there about abandoning the use of Chinese characters, but the practical benefits of
a radically new script have so far not been considered sufficient.

The Zhuang people have used derived Chinese characters, or Zhuang logograms, to write songs, even though
Zhuang is not a Chinese dialect. Since the 1950s, the Zhuang language has been written in a modified Latin
alphabet.[8]

Languages within the sphere of Chinese cultural influence also have a very large number of loanwords from
Chinese. Fifty percent or more of Korean vocabulary is of Chinese origin, and the influence on Japanese
and Vietnamese has been considerable; at least five percent of all words in Tagalog are of Chinese origin.
Chinese has also lent many grammatical features to these and neighboring languages, notably the lack of
gender and the use of classifiers.[citation needed]

Loan words from Chinese also exist in European languages such as English. Examples of such words are
"tea" from the Minnan pronunciation of 茶 (POJ: tê), "ketchup" from the Cantonese pronunciation of 茄
汁 (ke chap), and "kumquat" from the Cantonese pronunciation of 金橘 (kam kuat).

Phonology


For more specific information on phonology of Chinese see the respective main articles of each
spoken variety.

The phonological structure of each syllable consists of a nucleus, consisting of a vowel (which can be a
monophthong, diphthong, or even a triphthong in certain varieties), with an optional onset or coda
consonant, as well as a tone. Some syllables lack a vowel nucleus: in Cantonese, for example, the nasal
sonorant consonants /m/ and /ŋ/ can stand alone as syllables.

Across all the spoken varieties, most syllables tend to be open syllables, meaning they have no coda, but
syllables that do have codas are restricted to /m/, /n/, /ŋ/, /p/, /t/, /k/, or /ʔ/. Some varieties allow most
of these codas, whereas others, such as Mandarin, are limited to only two, namely /n/ and /ŋ/. Consonant
clusters do not generally occur in either the onset or coda. The onset may be an affricate or a consonant
followed by a semivowel, but these are not generally considered consonant clusters.
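The syllable template just described – optional onset, vowel nucleus, restricted coda, tone – can be
sketched as a small data model. This is an illustrative simplification (real phonotactics also constrain
which onsets combine with which finals), and the syllables used below are just examples:

```python
from dataclasses import dataclass

@dataclass
class Syllable:
    onset: str    # optional initial consonant (empty string if none)
    nucleus: str  # vowel: monophthong, diphthong or triphthong
    coda: str     # optional final consonant, restricted per variety
    tone: int     # lexical tone number

# Mandarin allows only /n/ and /ŋ/ (written "ng") as codas;
# varieties such as Cantonese also allow /m p t k/.
MANDARIN_CODAS = {"", "n", "ng"}
CANTONESE_CODAS = {"", "m", "n", "ng", "p", "t", "k"}

def coda_allowed(s: Syllable, allowed: set) -> bool:
    """Check whether the syllable's coda is permitted in a given variety."""
    return s.coda in allowed

han = Syllable("h", "a", "n", 4)   # e.g. Mandarin hàn
hak = Syllable("h", "a", "k", 3)   # a stop-coda syllable, as found in Cantonese

print(coda_allowed(han, MANDARIN_CODAS))   # True
print(coda_allowed(hak, MANDARIN_CODAS))   # False: stop codas were lost in Mandarin
print(coda_allowed(hak, CANTONESE_CODAS))  # True
```

The per-variety coda sets mirror the restriction described above: Mandarin keeps only the two nasal codas,
while more conservative southern varieties retain the full Middle Chinese inventory.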

The number of sounds in the different spoken dialects varies, but in general there has been a tendency
toward a reduction in sounds since Middle Chinese. The Mandarin dialects in particular have experienced a
dramatic decrease in sounds and so have far more multisyllabic words than most other spoken varieties. The
total number of syllables in some varieties is therefore only about a thousand, including tonal variation
– only about an eighth as many as in English.[9]

All varieties of spoken Chinese use tones. A few dialects of north China may have as few as three tones,
while some dialects in south China have up to six or ten tones, depending on how one counts. One exception
is Shanghainese, which has reduced the set of tones to a two-toned pitch accent system much like modern
Japanese.

A common example used to illustrate the use of tones in Chinese is the four main tones of Standard
Mandarin, plus the neutral tone, applied to the syllable "ma". The tones correspond to these five
characters:

• 媽/妈 mā "mother" – high level
• 麻 má "hemp" or "torpid" – high rising
• 馬/马 mǎ "horse" – low falling-rising
• 罵/骂 mà "scold" – high falling
• 嗎/吗 ma "question particle" – neutral

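The tone-marked pinyin syllables above can be composed programmatically from Unicode combining marks. The
sketch below uses a deliberately simplified placement rule (mark the first vowel), whereas full pinyin
orthography has more detailed rules for digraphs such as "iu":

```python
import unicodedata

# Combining diacritics for the four main Mandarin tones, as written in pinyin;
# the neutral tone is left unmarked.
TONE_MARKS = {1: "\u0304", 2: "\u0301", 3: "\u030C", 4: "\u0300"}  # macron, acute, caron, grave

def mark_tone(syllable: str, tone: int) -> str:
    """Attach the tone diacritic to the first vowel and recompose (NFC)."""
    mark = TONE_MARKS.get(tone)
    if mark is None:            # neutral tone: no diacritic
        return syllable
    for i, ch in enumerate(syllable):
        if ch in "aeiouü":      # simplified rule: mark the first vowel found
            marked = syllable[: i + 1] + mark + syllable[i + 1 :]
            return unicodedata.normalize("NFC", marked)
    return syllable

for tone, gloss in [(1, "mother"), (2, "hemp"), (3, "horse"), (4, "scold"), (5, "question particle")]:
    print(mark_tone("ma", tone), gloss)   # mā, má, mǎ, mà, ma
```

Normalizing to NFC recombines the base letter and combining mark into the single precomposed code points
(ā, á, ǎ, à) normally used in printed pinyin.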

Phonetic transcriptions
Chinese had no uniform phonetic transcription system until the mid-20th century, although enunciation
patterns were recorded in early rime books and dictionaries. Early Indian translators, working in Sanskrit
and Pali, were the first to attempt to describe the sounds and enunciation patterns of the language in a
foreign language. After the 15th century CE, the efforts of Jesuits and Western court missionaries
resulted in some rudimentary Latin transcription systems, based on the Nanjing Mandarin dialect.

Romanization

Main article: Romanization of Chinese

Romanization is the process of transcribing a language in the Latin alphabet. There are many systems of
romanization for the Chinese languages because Chinese lacked a native phonetic transcription until modern
times. Chinese is first known to have been written in Latin characters by Western Christian missionaries
in the 16th century.

Today the most common romanization standard for Standard Mandarin is Hanyu Pinyin (漢語拼音/汉语拼音),
often known simply as pinyin, introduced in 1956 by the People's Republic of China and later adopted by
Singapore (see Chinese language romanisation in Singapore). Pinyin is now almost universally employed for
teaching standard spoken Chinese in schools and universities across America, Australia and Europe. Chinese
parents also use pinyin to teach their children the sounds and tones of words with which the child is
unfamiliar; the pinyin is usually shown below a picture of the thing the word represents, with the Chinese
character alongside.

The second-most common romanization system, Wade-Giles, was invented by Thomas Wade in 1859 and later
modified by Herbert Giles in 1892. Because it approximates the phonology of Mandarin Chinese with English
consonants and vowels (hence an Anglicization), it may be particularly helpful for beginners with a native
English background. Wade-Giles is found in academic use in the United States, particularly before the
1980s, and until recently was widely used in Taiwan (Taipei city now officially uses Hanyu Pinyin and the
rest of the island officially uses Tōngyòng Pinyin 通用拼音).

When used within European texts, the tone transcriptions in both pinyin and Wade-Giles are often left out
for simplicity; Wade-Giles' extensive use of apostrophes is also usually omitted. Thus, most Western
readers will be much more familiar with Beijing than they will be with Běijīng (pinyin), and with Taipei
than T'ai²-pei³ (Wade-Giles).
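Dropping the tone transcriptions, as Western texts usually do, amounts to removing Unicode combining marks
from the pinyin. A minimal sketch using the standard library:

```python
import unicodedata

def strip_tones(pinyin: str) -> str:
    """Remove tone diacritics by decomposing (NFD) and dropping combining marks.

    Caveat: this also strips the diaeresis of ü, conflating e.g. lü with lu,
    which is why careful texts keep the umlaut even when tones are dropped.
    """
    decomposed = unicodedata.normalize("NFD", pinyin)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(strip_tones("Běijīng"))   # Beijing
print(strip_tones("Táiběi"))    # Taibei
```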
Here are a few examples of Hanyu Pinyin and Wade-Giles, for comparison:

Mandarin Romanization Comparison

• 中国/中國 – Chung¹-kuo² (Wade-Giles); Zhōngguó (Hanyu Pinyin) – "China"
• 北京 – Pei³-ching¹; Běijīng – capital of the People's Republic of China
• 台北 – T'ai²-pei³; Táiběi – capital of the Republic of China
• 毛泽东/毛澤東 – Mao² Tse²-tung¹; Máo Zédōng – former Communist Chinese leader
• 蒋介石/蔣介石 – Chiang³ Chieh⁴-shih²; Jiǎng Jièshí – former Nationalist Chinese leader (better known to
English speakers as Chiang Kai-shek, from the Cantonese pronunciation)
• 孔子 – K'ung³ Tsu³; Kǒng Zǐ – "Confucius"

Other systems of romanization for Chinese include Gwoyeu Romatzyh, the French EFEO system, and the Yale
system (invented during World War II for U.S. troops), as well as separate systems for Cantonese, Minnan,
Hakka, and other Chinese languages or dialects.

Other phonetic transcriptions

Chinese languages have been phonetically transcribed into many other writing systems over the centuries.
The 'Phags-pa script, for example, has been very helpful in reconstructing the pronunciations of pre-
modern forms of Chinese.

Zhuyin (注音, also known as bopomofo), a semi-syllabary, is still widely used in Taiwan's elementary
schools to aid standard pronunciation. Although bopomofo characters are reminiscent of the katakana
script, there is no source to substantiate the claim that katakana was the basis for the zhuyin system. A
comparison table of zhuyin to pinyin exists in the zhuyin article. Syllables based on pinyin and zhuyin
can also be compared by looking at the following articles:

• Pinyin table
• Zhuyin table

There are also at least two systems of cyrillization for Chinese. The most widespread is the Palladius
system.
Grammar and morphology
Main article: Chinese grammar

Modern Chinese has often been erroneously classed as a "monosyllabic" language. While most of its
morphemes are single syllables, modern Chinese is much less monosyllabic than that label suggests, in that
its nouns, adjectives and verbs are largely disyllabic. The tendency to create disyllabic words in the
modern Chinese languages, particularly in Mandarin, has been especially pronounced when compared with
Classical Chinese. Classical Chinese is a highly isolating language, with each idea (morpheme) generally
corresponding to a single syllable and a single character; modern Chinese, though, tends to form new words
through disyllabic, trisyllabic and tetra-character compounding. In fact, some linguists argue that
classifying modern Chinese as an isolating language is misleading for this reason alone.

Chinese morphology is strictly bound to a set number of syllables with fairly rigid construction; these
are the morphemes, the smallest building blocks of the language. While many of these single-syllable
morphemes (zì, 字) can stand alone as individual words, they more often than not form multi-syllabic
compounds known as cí (词/詞), which more closely resemble the traditional Western notion of a word. A
Chinese cí ("word") can consist of more than one character-morpheme, usually two, but there can be three
or more.

For example:

• yún 云 – "cloud"
• hànbǎobāo 汉堡包 – "hamburger"
• wǒ 我 – "I, me"
• rénmín 人民 – "people"
• dìqiú 地球 – "the Earth"
• shǎndiàn 闪电 – "lightning"
• mèng 梦 – "dream"
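The zì/cí distinction above maps neatly onto Unicode strings: each character is one morpheme, and Python
iterates a string by code point, so a word splits naturally into its character-morphemes. A small
illustration using the example words:

```python
# Each Chinese character is one morpheme (zì); words (cí) are often
# multi-character. Splitting a word into characters yields its morphemes.
WORDS = [("云", "cloud"), ("汉堡包", "hamburger"), ("地球", "earth")]

for word, gloss in WORDS:
    morphemes = list(word)  # one code point per character-morpheme
    print(f"{word} ({gloss}): {len(morphemes)} morpheme(s) -> {morphemes}")
```

Note this one-character-one-morpheme rule is only a first approximation: purely phonetic transcriptions
such as 汉堡 "hamburg-" carry no independent morpheme meaning per character.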

All varieties of modern Chinese are analytic languages, in that they depend on syntax (word order and
sentence structure) rather than morphology — i.e., changes in form of a word — to indicate the word's
function in a sentence. In other words, Chinese has few grammatical inflections – it possesses no tenses,
no voices, no numbers (singular, plural; though there are plural markers, for example for personal
pronouns), only a few articles (i.e., equivalents to "the, a, an" in English), and no gender.

The Chinese varieties instead make heavy use of grammatical particles to indicate aspect and mood. In
Mandarin Chinese, this involves the use of particles like le 了, hai 还, yijing 已经, and so on.

Chinese features Subject Verb Object word order, and like many other languages in East Asia, makes
frequent use of the topic-comment construction to form sentences. Chinese also has an extensive system
of measure words, another trait shared with neighbouring languages like Japanese and Korean. See
Chinese measure words for an extensive coverage of this subject.

Other notable grammatical features common to all the spoken varieties of Chinese include the use of
serial verb construction, pronoun dropping and the related subject dropping.

Although the grammars of the spoken varieties share many traits, they do possess differences. See
Chinese grammar for the grammar of Standard Mandarin (the standardized Chinese spoken language),
and the articles on other varieties of Chinese for their respective grammars.
Tones and homophones

Official modern Mandarin has only about 400 spoken monosyllables but over 10,000 written characters, so
there are many homophones distinguishable only by the four tones. Even this is often not enough unless the
context and exact phrase or cí are identified.

The monosyllable jī, first tone in standard Mandarin, corresponds to the following characters, among
others: 雞/鸡 "chicken", 機/机 "machine", 基 "basic", 擊/击 "(to) hit", 饑/饥 "hunger", and 積/积 "sum".
In speech, the mapping of a monosyllable to its meaning must be determined by context or by its relation
to other morphemes (e.g. "some" as in the opposite of "none"). Native speakers may state which words or
phrases their names are found in, for convenience of writing: 名字叫嘉英,嘉陵江的嘉,英國的英 Míngzi jiào
Jiāyīng, Jiālíng Jiāng de jiā, Yīngguó de yīng "My name is Jiāyīng: the jiā of Jialing River and the yīng
of the Chinese short form for the UK."

Southern Chinese varieties like Cantonese and Hakka preserved more of the rimes of Middle Chinese and have
more tones. The previous examples of jī – "stimulated", "chicken", and "machine" – have distinct
pronunciations in Cantonese (romanized using Jyutping): gik1, gai1, and gei1, respectively. For this
reason, southern varieties tend to employ fewer multisyllabic words.
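The collapse of distinctions can be made concrete by grouping the example characters by reading. Mandarin
merges all three into one homophone set, while Cantonese keeps them apart (glosses and romanizations as
given above):

```python
from collections import defaultdict

# Character -> (gloss, Mandarin pinyin, Cantonese Jyutping), from the examples above.
CHARS = {
    "雞": ("chicken", "ji1", "gai1"),
    "機": ("machine", "ji1", "gei1"),
    "激": ("stimulated", "ji1", "gik1"),
}

by_mandarin = defaultdict(list)
by_cantonese = defaultdict(list)
for char, (_gloss, mandarin, cantonese) in CHARS.items():
    by_mandarin[mandarin].append(char)
    by_cantonese[cantonese].append(char)

print(len(by_mandarin))    # 1: all three are homophones in Mandarin
print(len(by_cantonese))   # 3: still distinct in Cantonese
```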

Vocabulary
The entire Chinese character corpus since antiquity comprises well over 20,000 characters, of which only
roughly 10,000 are now commonly in use. However, Chinese characters should not be confused with Chinese
words: there are many times more words than characters, as most Chinese words are made up of two or more
characters.

Estimates of the total number of Chinese words and phrases vary greatly. The Hanyu Da Zidian, an
all-inclusive compendium of Chinese characters, includes 54,678 head entries for characters, including
oracle bone forms. The Zhonghua Zihai 中华字海 (1994) contains 85,568 head entries for character
definitions, and is the largest reference work based purely on characters and their literary variants.

The most comprehensive purely linguistic Chinese-language dictionary, the 12-volume Hanyu Da Cidian
汉语大词典, records more than 23,000 head Chinese characters and gives over 370,000 definitions. The 1999
revised Cihai, a multi-volume encyclopedic dictionary reference work, gives 122,836 vocabulary entry
definitions under 19,485 Chinese characters, including proper names, phrases and common zoological,
geographical, sociological, scientific and technical terms.

The latest 2007 5th edition of Xiandai Hanyu Cidian 现代汉语词典, an authoritative one-volume
dictionary on modern standard Chinese language as used in mainland China, has 65,000 entries and
defines 11,000 head characters.

New words
Like any other language, Chinese has absorbed a sizeable number of loanwords from other cultures. Most
Chinese words are formed from native Chinese morphemes, including words describing imported objects and
ideas, but direct phonetic borrowing of foreign words has gone on since ancient times. Words borrowed
along the Silk Road since Old Chinese times, such as 葡萄 "grape" (pútáo in Mandarin), 石榴 "pomegranate"
and 狮子/獅子 "lion", generally have Persian etymologies. Some words were borrowed from Buddhist
scriptures, including 佛 "Buddha" and 菩萨/菩薩 "bodhisattva"; Buddhist terminology is generally derived
from Sanskrit or Pāli, the liturgical languages of North India. Words borrowed from the nomadic peoples of
the Gobi, Mongolian or northeastern regions, such as 胡同 "hutong", 琵琶 "pípa" (the Chinese lute) and
酪 "cheese" or "yoghurt", generally have Altaic etymologies, though from exactly which Altaic source is
not always clear.

[edit] Modern borrowings and loanwords

Foreign words continue to enter the Chinese language by transcription according to their pronunciations,
using Chinese characters with similar sounds. For example, "Israel" becomes 以色列 (pinyin: yǐsèliè) and
"Paris" becomes 巴黎 (bālí). A rather small number of direct transliterations have survived as common
words, including 沙發 shāfā "sofa," 马达/馬達 mǎdá "motor," 幽默 yōumò "humour," 逻辑/邏輯 luójí
"logic," 时髦/時髦 shímáo "smart, fashionable" and 歇斯底里 xiēsīdǐlǐ "hysterics." The bulk of these
words were originally coined in the Shanghainese dialect during the early 20th century and were later
loaned into Mandarin, so their Mandarin pronunciations may differ considerably from the English. For
example, 沙发/沙發 and 马达/馬達 in Shanghainese sound much closer to the English "sofa" and
"motor."

Today, it is much more common to use existing Chinese morphemes to coin new words for imported
concepts, such as technical expressions. Any Latin or Greek etymologies are dropped, making the new
words more comprehensible to Chinese speakers but making foreign texts harder to recognize. For
example, the word telephone was loaned phonetically as 德律风/德律風 (Shanghainese: télífon
[təlɪfoŋ], Standard Mandarin: délǜfēng) during the 1920s and widely used in Shanghai, but later the
Japanese coinage 电话/電話 (diànhuà "electric speech"), built from native Chinese morphemes, became
prevalent. Other examples include 电视/電視 (diànshì "electric vision") for television, 电脑/電腦
(diànnǎo "electric brain") for computer, 手机/手機 (shǒujī "hand machine") for cellphone, 蓝牙/藍芽
(lányá "blue tooth") for Bluetooth, and 網誌 (wǎngzhì "internet logbook") for blog among Cantonese
speakers in Hong Kong and Macau. Occasionally half-transliteration, half-translation compromises are
accepted, such as 汉堡包/漢堡包 (hànbǎo bāo, "Hamburg bun") for hamburger. Sometimes translations
are designed so that they sound like the original while incorporating Chinese morphemes, such as 拖拉机/
拖拉機 (tuōlājī, "tractor," literally "dragging-pulling machine"), or 马利奥/馬利奧 for the video game
character Mario. This is often done for commercial purposes, for example 奔腾/奔騰 (bēnténg "running
leaping") for Pentium and 赛百味/賽百味 (Sàibǎiwèi "better-than hundred tastes") for Subway restaurants.
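The borrowing strategies just described can be summarized in a small sketch, using only examples and glosses already given in this section (the grouping into exactly three strategies is an illustrative simplification):

```python
# Three strategies for importing foreign words into Chinese, with
# examples taken from this section:
# (Chinese form, pinyin, English source, strategy).
loanwords = [
    ("以色列", "yǐsèliè", "Israel", "phonetic transcription"),
    ("电脑", "diànnǎo", "computer", "coinage from native morphemes ('electric brain')"),
    ("汉堡包", "hànbǎo bāo", "hamburger", "half-transliteration, half-translation"),
]

for hanzi, pinyin, english, strategy in loanwords:
    print(f"{english} -> {hanzi} ({pinyin}): {strategy}")
```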

Since the 20th century, another source of loanwords has been Japan. Using existing kanji (Chinese
characters as used in the Japanese language), the Japanese re-moulded European concepts and inventions
into wasei-kango (和製漢語, literally "Japanese-made Chinese"), and re-loaned many of these into modern
Chinese. Examples include diànhuà (电话/電話, denwa, "telephone"), shèhuì (社会, shakai, "society"),
kēxué (科学/科學, kagaku, "science") and chōuxiàng (抽象, chūshō, "abstract"). Other terms were coined
by the Japanese by giving new senses to existing Chinese terms or by referring to expressions used in
classical Chinese literature. For example, jīngjì (经济/經濟, keizai), which in the original Chinese meant
"the workings of the state", was narrowed to "economy" in Japanese; this narrowed definition was then
reimported into Chinese. As a result, these terms are virtually indistinguishable from native Chinese
words; indeed, there is some dispute over whether the Japanese or the Chinese coined some of them first.
Through this back-and-forth process, Chinese, Korean, Japanese and Vietnamese share a corpus of terms
describing modern concepts, paralleling the similar corpus of Greco-Latin terms shared among European
languages.

Taiwanese and Taiwanese Mandarin continue to be influenced by Japanese; for example, 便当/便當
"lunchbox, boxed lunch" (from bentō) and 料理 "prepared cuisine" have passed into common currency.

Western words have had a great influence on the Chinese language since the 20th century, largely through
transliteration. From French came 芭蕾 (bāléi, "ballet") and 香槟 (xiāngbīn, "champagne"); from Italian,
咖啡 (kāfēi, "caffè"). The English influence is particularly pronounced: from early 20th-century
Shanghainese many English words were borrowed, e.g. the above-mentioned 沙發 (shāfā "sofa"), 幽默
(yōumò "humour"), and 高尔夫 (gāoěrfū, "golf"). Later, United States soft influence gave rise to 迪斯科
(dísīkè, "disco"), 可乐 (kělè, "cola") and 迷你 (mínǐ, "mini(skirt)"). Contemporary colloquial Cantonese
has distinct loanwords from English, such as 卡通 ("cartoon"), 基佬 ("gay people"), 的士 ("taxi") and
巴士 ("bus"). With the rising popularity of the Internet, there is a current vogue in China for coining
English transliterations, e.g. 粉絲 (fěnsī, "fans"), 駭客 (hàikè, "hacker") and, in Taiwanese Mandarin,
部落格 (bùluògé, "blog").

[edit] Learning Chinese
See also: Chinese as a foreign language

With the People's Republic of China's economic and political rise in recent years, standard Mandarin has
become an increasingly popular subject of study among the young in the Western world, as in the UK.[10]

In 1991 there were 2,000 foreign learners taking China's official Chinese Proficiency Test (comparable to
the Cambridge Certificate in English); by 2005 the number of candidates had risen sharply to
117,660[citation needed].

[edit] See also
China portal

• Chinese characters
• Chinese honorifics
• Chinese measure word
• Chinese number gestures
• Chinese numerals
• Chinese punctuation
• Chinese exclamative particles
• Four-character idiom
• Han unification
• Haner language
• HSK test
• Languages of China
• North American Conference on Chinese Linguistics
• Nü shu

[edit] References
English language

English

Pronunciation: /ˈɪŋɡlɪʃ/[1]

Spoken in: (see below)

Total speakers: First language: 309–400 million; Second language: 199–1,400 million;[2][3] Overall: 1.8 billion[3]

Ranking: 3 (native speakers);[4] Total: 1 or 2[5]

Language family: Indo-European > Germanic > West Germanic > Anglo–Frisian > Anglic > English

Writing system: Latin (English variant)

Official status

Official language in: 53 countries; United Nations; European Union; Commonwealth of Nations

Regulated by: No official regulation

Language codes

ISO 639-1: en

ISO 639-2: eng

ISO 639-3: eng

Countries where English is a majority language are
dark blue; countries where it is an official but not a
majority language are light blue. English is also one
of the official languages of the European Union.


English is a West Germanic language originating in England and is the first language for most people in
the United Kingdom, the United States, Canada, Australia, New Zealand, Ireland and the Anglophone
Caribbean. It is used extensively as a second language and as an official language throughout the world,
especially in Commonwealth countries and in many international organisations.

Contents
[hide]
• 1 Significance
• 2 History
• 3 Classification and related languages
• 4 Geographical distribution
o 4.1 Countries in order of total speakers
o 4.2 English as a global language
o 4.3 Dialects and regional varieties
o 4.4 Constructed varieties of English
• 5 Phonology
o 5.1 Vowels
o 5.2 Consonants
 5.2.1 Voicing and aspiration
o 5.3 Supra-segmental features
 5.3.1 Tone groups
 5.3.2 Characteristics of intonation
• 6 Grammar
• 7 Vocabulary
o 7.1 Number of words in English
o 7.2 Word origins
 7.2.1 Dutch origins
 7.2.2 French origins
• 8 Idiomatic
• 9 Writing system
o 9.1 Basic sound-letter correspondence
o 9.2 Written accents
• 10 Formal written English
• 11 Basic and simplified versions
• 12 See also
• 13 Notes
• 14 References
• 15 External links

o 15.1 Dictionaries

Significance
Modern English, sometimes described as the first global lingua franca,[6][7] is the dominant international
language in communications, science, business, aviation, entertainment, radio and diplomacy.[8] The initial
reason for its enormous spread beyond the bounds of the British Isles, where it was originally a native
tongue, was the British Empire, and by the late nineteenth century its reach was truly global.[9] It is the
dominant language in the United States, whose growing economic and cultural influence and status as a
global superpower since World War II have significantly accelerated adoption of English as a language
across the planet.[7]

A working knowledge of English has become a requirement in a number of fields, occupations and
professions, such as medicine; as a consequence, over a billion people speak English to at least a basic
level (see English language learning and teaching).

Linguists such as David Crystal recognize that one impact of this massive growth of English, as with
other global languages, has been to reduce native linguistic diversity in many parts of the world, most
particularly in Australasia and North America, and its huge influence continues to play an important role
in language attrition. By the same token, historical linguists, aware of the complex and fluid dynamics of
language change, are alive to the potential of English, through the vast size and spread of the
communities that use it and its natural internal variety, such as its creoles and pidgins, to produce a new
family of distinct languages over time.[citation needed]

English is one of six official languages of the United Nations.

History
Main article: History of the English language

English is a West Germanic language that originated from the Anglo-Frisian and Lower Saxon dialects
brought to Britain by Germanic settlers and Roman auxiliary troops from various parts of what is now
northwest Germany and the northern Netherlands[citation needed]. One of these Germanic tribes was the
Angles[10], who may have come from Angeln; Bede wrote that their whole nation came to Britain[11],
leaving their former land empty. The names "England" (from "Aenglaland") and "English" are derived
from the name of this tribe.
The Anglo-Saxons began invading around 449 AD from the regions of Denmark and Jutland[12][13]. Before
the Anglo-Saxons arrived in England, the native population spoke Brythonic, a Celtic language[14].
Although the most significant changes in dialect occurred after the Norman invasion of 1066, the
language retained its name, and the pre-Norman dialect is now known as Old English[15].

Initially, Old English was a diverse group of dialects, reflecting the varied origins of the Anglo-Saxon
Kingdoms of Great Britain[citation needed]. One of these dialects, Late West Saxon, eventually came to
dominate. The original Old English language was then influenced by two waves of invasion. The first was
by speakers of languages in the Scandinavian branch of the Germanic family, who conquered and
colonized parts of the British Isles in the 8th and 9th centuries. The second was by the Normans in the
11th century, who spoke Old Norman and ultimately developed an English variety of it called Anglo-Norman. These
two invasions caused English to become "mixed" to some degree (though it was never a truly mixed
language in the strict linguistic sense of the word; mixed languages arise from the cohabitation of
speakers of different languages, who develop a hybrid tongue for basic communication).

Cohabitation with the Scandinavians resulted in a significant grammatical simplification and lexical
supplementation of the Anglo-Frisian core of English; the later Norman occupation led to the grafting
onto that Germanic core of a more elaborate layer of words from the Italic branch of the Indo-European
languages. This Norman influence entered English largely through the courts and government. Thus,
English developed into a "borrowing" language of great flexibility and with a huge vocabulary.

The emergence and spread of the British Empire and the emergence of the United States as a superpower
helped to spread the English language around the world.

Classification and related languages
The English language belongs to the western sub-branch of the Germanic branch of the Indo-European
family of languages. The closest living relative of English is Scots, spoken primarily in Scotland and parts
of Northern Ireland, which is viewed by linguists as either a separate language or a group of dialects of
English. The next closest relative to English after Scots is Frisian, spoken in the Northern Netherlands and
Northwest Germany, followed by the other West Germanic languages (Dutch and Afrikaans, Low
German, German), and then the North Germanic languages (Swedish, Danish, Norwegian, Icelandic, and
Faroese). With the exception of Scots, none of these languages are mutually intelligible with English,
because of divergences in lexis, syntax, semantics, and phonology.[citation needed]

Lexical differences from the other Germanic languages arise predominantly from the heavy use of
Latin (for example, "exit" vs. Dutch uitgang) and French ("change" vs. German Änderung, "movement"
vs. German Bewegung) words in English. The syntax of German and Dutch is also significantly different
from English, with different rules for setting up sentences (for example, German Ich habe noch nie etwas
auf dem Platz gesehen vs. English "I have still never seen anything in the square"). Semantics causes a
number of false friends between English and its relatives. Phonological differences obscure words that
actually are genetically related ("enough" vs. German genug), and sometimes both semantics and
phonology differ (German Zeit, "time", is related to English "tide", but the English word has come
to mean the moon's gravitational effect on the ocean).[citation needed]

Many written French words are also intelligible to an English speaker (though pronunciations are often
quite different) because English absorbed a large vocabulary from Norman and French, via Anglo-
Norman after the Norman Conquest and directly from French in subsequent centuries. As a result, a large
portion of English vocabulary is derived from French, with some minor spelling differences (word
endings, use of old French spellings, etc.), as well as occasional divergences in meaning of so-called false
friends. The pronunciation of most French loanwords in English (with exceptions such as mirage or
phrases like coup d’état) has become completely anglicized and follows a typically English pattern of
stress.[citation needed] Some North Germanic words also entered English through the Danish invasions shortly
before the Norman Conquest (see Danelaw); these include words such as "sky", "window", "egg", and
even "they" (and its forms) and "are" (the present plural form of "to be").[citation needed]

Geographical distribution
See also: List of countries by English-speaking population
[show]
v•d•e
The English-speaking world

Approximately 375 million people speak English as their first language.[16] English today is probably the
third largest language by number of native speakers, after Mandarin Chinese and Spanish.[17][18] However,
when combining native and non-native speakers it is probably the most commonly spoken language in the
world, though possibly second to a combination of the Chinese languages, depending on whether or not
distinctions in the latter are classified as "languages" or "dialects."[5][19] Estimates that include second
language speakers vary greatly from 470 million to over a billion depending on how literacy or mastery is
defined.[20][21] There are some who claim that non-native speakers now outnumber native speakers by a
ratio of 3 to 1.[22]

Pie chart showing the relative numbers of native English speakers in the major English-speaking countries
of the world

The countries with the highest populations of native English speakers are, in descending order: United
States (215 million),[23] United Kingdom (58 million),[24] Canada (18.2 million),[25] Australia (15.5
million),[26] Ireland (3.8 million),[24] South Africa (3.7 million),[27] and New Zealand (3.0-3.7 million).[28]
Countries such as Jamaica and Nigeria also have millions of native speakers of dialect continua ranging
from an English-based creole to a more standard version of English. Of those nations where English is
spoken as a second language, India has the most such speakers ('Indian English') and linguistics professor
David Crystal claims that, combining native and non-native speakers, India now has more people who
speak or understand English than any other country in the world.[29] Following India is the People's
Republic of China.[30]

Countries in order of total speakers

1. United States — total speakers 251,388,301 (83% of population); first language 215,423,557; as an
additional language 35,964,744. Source: US Census 2006: Language Use and English-Speaking Ability:
2006, Table 1. Figures for additional-language speakers are respondents who reported that they do not
speak English at home but know it "very well" or "well". Note: figures are for the population age 5 and
older.

2. India — total speakers 90,000,000 (8% of population); first language 178,598; as an additional
language 65,000,000 second-language speakers and 25,000,000 third-language speakers. Figures include
both those who speak English as a second language and those who speak it as a third language; 1991
figures.[31][32] The figures include English speakers, but not English users.[33]

3. Nigeria — total speakers 79,000,000 (53% of population); first language 4,000,000; as an additional
language >75,000,000. Figures are for speakers of Nigerian Pidgin, an English-based pidgin or creole.
Ihemere gives a range of roughly 3 to 5 million native speakers; the midpoint of the range is used in the
table. Ihemere, Kelechukwu Uchechukwu. 2006. "A Basic Description and Analytic Treatment of Noun
Clauses in Nigerian Pidgin." Nordic Journal of African Studies 15(3): 296–313.

4. United Kingdom — total speakers 59,600,000 (98% of population); first language 58,100,000; as an
additional language 1,500,000. Source: Crystal (2005), p. 109.

5. Philippines — total speakers 45,900,000 (52% of population); first language 27,000; as an additional
language 42,500,000. Total speakers: Census 2000, text above Figure 7; 63.71% of the 66.7 million
people aged 5 years or more could speak English. Native speakers: Census 1995, as quoted by Andrew
Gonzalez in The Language Planning Situation in the Philippines, Journal of Multilingual and
Multicultural Development, 19 (5&6), 487–525 (1998).

6. Canada — total speakers 25,246,220 (76% of population); first language 17,694,830; as an additional
language 7,551,390. Source: 2001 Census – Knowledge of Official Languages and Mother Tongue. The
native-speaker figure comprises 122,660 people with both French and English as a mother tongue, plus
17,572,170 people with English but not French as a mother tongue.

7. Australia — total speakers 18,172,989 (92% of population); first language 15,581,329; as an
additional language 2,591,660. Source: 2006 Census.[34] The first-language figure is actually the number
of Australian residents who speak only English at home. The additional-language figure is the number of
other residents who claim to speak English "well" or "very well". Another 5% of residents did not state
their home language or English proficiency.

English is the primary language in Anguilla, Antigua and Barbuda, Australia (Australian English), the
Bahamas, Barbados, Bermuda, Belize (Belizean Kriol), the British Indian Ocean Territory, the British
Virgin Islands, Canada (Canadian English), the Cayman Islands, the Falkland Islands, Gibraltar, Grenada,
Guam, Guernsey (Channel Island English), Guyana, Ireland (Hiberno-English), Isle of Man (Manx
English), Jamaica (Jamaican English), Jersey, Montserrat, Nauru, New Zealand (New Zealand English),
Pitcairn Islands, Saint Helena, Saint Kitts and Nevis, Saint Vincent and the Grenadines, Singapore, South
Georgia and the South Sandwich Islands, Trinidad and Tobago, the Turks and Caicos Islands, the United
Kingdom, the U.S. Virgin Islands, and the United States.

In many other countries, where English is not the most spoken language, it is an official language; these
countries include Botswana, Cameroon, Dominica, Fiji, the Federated States of Micronesia, Ghana,
Gambia, India, Kenya, Kiribati, Lesotho, Liberia, Madagascar, Malta, the Marshall Islands, Mauritius,
Namibia, Nigeria, Pakistan, Palau, Papua New Guinea, the Philippines (Philippine English), Puerto Rico,
Rwanda, the Solomon Islands, Saint Lucia, Samoa, Seychelles, Sierra Leone, Sri Lanka, Swaziland,
Tanzania, Uganda, Zambia, and Zimbabwe. It is also one of the 11 official languages that are given equal
status in South Africa (South African English). English is also the official language in current dependent
territories of Australia (Norfolk Island, Christmas Island and Cocos Island) and of the United States
(Northern Mariana Islands, American Samoa and Puerto Rico)[35], the former British colony of Hong
Kong, and the Netherlands Antilles.

English is an important language in several former colonies and protectorates of the United Kingdom but
falls short of official status, such as in Malaysia, Brunei, United Arab Emirates and Bahrain. English is
also not an official language in either the United States or the United Kingdom.[36][37] Although the United
States federal government has no official languages, English has been given official status by 30 of the 50
state governments.[38] English is not a de jure official language of Israel, but it has maintained a de facto
official role since the British Mandate.[39]

English as a global language

See also: English in computing, International English, and World language

Because English is so widely spoken, it has often been referred to as a "world language," the lingua
franca of the modern era.[7] While English is not an official language in most countries, it is currently the
language most often taught as a second language around the world. Some linguists[who?] believe that it is no
longer the exclusive cultural sign of "native English speakers", but is rather a language that is absorbing
aspects of cultures worldwide as it continues to grow. It is, by international treaty, the official language
for aerial and maritime communications.[citation needed] English is an official language of the United Nations
and many other international organizations, including the International Olympic Committee.

English is the language most often studied as a foreign language in the European Union (by 89% of
schoolchildren), followed by French (32%), German (18%), and Spanish (8%).[40] In the EU, a large
fraction of the population reports being able to converse to some extent in English. Among non-English
speaking countries, a large percentage of the population claimed to be able to converse in English in the
Netherlands (87%), Sweden (85%), Denmark (83%), Luxembourg (66%), Finland (60%), Slovenia
(56%), Austria (53%), Belgium (52%), and Germany (51%).[41] Norway and Iceland also have a large
majority of competent English-speakers.[citation needed]

Books, magazines, and newspapers written in English are available in many countries around the world.
English is also the most commonly used language in the sciences.[7] In 1997, the Science Citation Index
reported that 95% of its articles were written in English, even though only half of them came from authors
in English-speaking countries.

Dialects and regional varieties

Main article: List of dialects of the English language

The expansion of the British Empire and—since WWII—the primacy of the United States have spread
English throughout the globe.[7] Because of that global spread, English has developed a host of English
dialects and English-based creole languages and pidgins.

The major varieties of English include, in most cases, several subvarieties, such as Cockney within British
English; Newfoundland English within Canadian English; and African American Vernacular English
("Ebonics") and Southern American English within American English. English is a pluricentric language,
without a central language authority like France's Académie française; and therefore no one variety is
popularly considered "correct" or "incorrect".

Scots developed—largely independently[citation needed]—from the same origins, but following the Acts of
Union 1707 a process of language attrition began, whereby successive generations adopted more and
more features from English causing dialectalisation. Whether it is now a separate language or a dialect of
English better described as Scottish English is in dispute. The pronunciation, grammar and lexis of the
traditional forms differ, sometimes substantially, from other varieties of English.

Because of the wide use of English as a second language, English speakers have many different accents,
which often signal the speaker's native dialect or language. For the more distinctive characteristics of
regional accents, see Regional accents of English, and for the more distinctive characteristics of regional
dialects, see List of dialects of the English language.

Just as English itself has borrowed words from many different languages over its history, English
loanwords now appear in a great many languages around the world, indicative of the technological and
cultural influence of its speakers. Several pidgins and creole languages have formed using an English
base, such as Jamaican Patois, Nigerian Pidgin, and Tok Pisin. There are many words in English coined to
describe forms of particular non-English languages that contain a very high proportion of English words.
Franglais, for example, describes various mixes of French and English, spoken in the Channel Islands and
Canada. In Wales, which is part of the United Kingdom, the languages of Welsh and English are
sometimes mixed together by fluent or comfortable Welsh speakers, the result of which is called
Wenglish.
Constructed varieties of English

• Basic English is simplified for easy international use. Manufacturers and other international
businesses use it to write manuals and to communicate. Some English schools in Asia teach it as a
practical subset of English for use by beginners.
• Special English is a simplified version of English used by the Voice of America. It uses a
vocabulary of only 1500 words.
• English reform is an attempt to improve collectively upon the English language.
• Seaspeak and the related Airspeak and Policespeak, all based on restricted vocabularies, were
designed by Edward Johnson in the 1980s to aid international cooperation and communication in
specific areas. There is also a tunnelspeak for use in the Channel Tunnel.
• Euro-English is a concept of standardising English for use as a second language in continental
Europe.
• Manually Coded English – a variety of systems have been developed to represent the English
language with hand signals, designed primarily for use in deaf education. These should not be
confused with true sign languages such as British Sign Language and American Sign Language
used in Anglophone countries, which are independent and not based on English.
• E-Prime excludes forms of the verb to be.

Euro-English (also written EuroEnglish) terms are English translations of European concepts
that are not native to English-speaking countries. Because of the United Kingdom's (and even the
Republic of Ireland's) involvement in the European Union, the usage focuses on non-British concepts.
This kind of Euro-English was parodied when English was "made" one of the constituent languages of
Europanto.

Phonology
Main article: English phonology

Vowels


IPA — description — example word

Monophthongs:

• i/iː — close front unrounded vowel — bead
• ɪ — near-close near-front unrounded vowel — bid
• ɛ — open-mid front unrounded vowel — bed
• æ — near-open front unrounded vowel — bad
• ɒ — open back rounded vowel — box (1)
• ɔ/ɑ — open-mid back rounded vowel — pawed (2)
• ɑ/ɑː — open back unrounded vowel — bra
• ʊ — near-close near-back vowel — good
• u/uː — close back rounded vowel — booed
• ʌ/ɐ/ɘ — open-mid back unrounded vowel or near-open central vowel — bud
• ɝ/ɜː — open-mid central unrounded vowel — bird (3)
• ə — schwa — Rosa's (4)
• ɨ — close central unrounded vowel — roses (5)

Diphthongs:

• e(ɪ)/eɪ — close-mid front unrounded vowel to close front unrounded vowel — bayed (6)
• o(ʊ)/əʊ — close-mid back rounded vowel to near-close near-back vowel — bode (6)
• aɪ — open front unrounded vowel to near-close near-front unrounded vowel — cry
• aʊ — open front unrounded vowel to near-close near-back vowel — bough
• ɔɪ — open-mid back rounded vowel to close front unrounded vowel — boy
• ʊɚ/ʊə — near-close near-back vowel to schwa — boor (9)
• ɛɚ/ɛə/eɚ — open-mid front unrounded vowel to schwa — fair (10)

Parenthesized numbers refer to the numbered notes below.

Notes:

It is the vowels that differ most from region to region.

Where symbols appear in pairs, the first corresponds to American English, General American accent; the
second corresponds to British English, Received Pronunciation.

1. American English lacks this sound; words with this sound are pronounced with /ɑ/ or /ɔ/. See
Lot-cloth split.
2. Some dialects of North American English do not have this vowel. See Cot-caught merger.
3. The North American variation of this sound is a rhotic vowel.
4. Many speakers of North American English do not distinguish between these two unstressed
vowels. For them, roses and Rosa's are pronounced the same, and the symbol usually used is
schwa /ə/.
5. This sound is often transcribed with /i/ or with /ɪ/.
6. The diphthongs /eɪ/ and /oʊ/ are monophthongal for many General American speakers, as /eː/
and /oː/.
7. The letter <U> can represent either /u/ or the iotated vowel /ju/. In BRP, if this iotated vowel /ju/
occurs after /t/, /d/, /s/ or /z/, it often triggers palatalization of the preceding consonant, turning it
to /ʨ/, /ʥ/, /ɕ/ and /ʑ/ respectively, as in tune, during, sugar, and azure. In American English,
palatalization does not generally happen unless the /ju/ is followed by r, with the result that /(t,
d,s, z)jur/ turn to /tʃɚ/, /dʒɚ/, /ʃɚ/ and /ʒɚ/ respectively, as in nature, verdure, sure, and
treasure.
8. Vowel length plays a phonetic role in the majority of English dialects, and is said to be phonemic
in a few dialects, such as Australian English and New Zealand English. In certain dialects of the
modern English language, for instance General American, there is allophonic vowel length: vowel
phonemes are realized as long vowel allophones before voiced consonant phonemes in the coda of
a syllable. Before the Great Vowel Shift, vowel length was phonemically contrastive.
9. This sound only occurs in non-rhotic accents. In some accents, this sound may be /ɔː/ instead of
/ʊə/. See English-language vowel changes before historic r.
10. This sound only occurs in non-rhotic accents. In some accents, the schwa offglide of /ɛə/ may be
dropped, monophthongising and lengthening the sound to /ɛː/.
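Note 8's allophonic vowel-length rule can be sketched as a toy function. The set of voiced coda consonants below is illustrative rather than an exhaustive phoneme inventory:

```python
# Allophonic vowel length (e.g. General American): a vowel phoneme is
# realized as a long allophone before a voiced consonant in the
# syllable coda, and as a short allophone otherwise.
VOICED_CODAS = {"b", "d", "g", "v", "z", "ð", "ʒ"}

def realize_vowel(vowel: str, coda: str) -> str:
    """Return the surface form of a vowel given the coda consonant."""
    return vowel + "ː" if coda in VOICED_CODAS else vowel

print(realize_vowel("æ", "d"))  # "bad" -> æː (long allophone)
print(realize_vowel("æ", "t"))  # "bat" -> æ  (short allophone)
```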

See also IPA chart for English dialects for more vowel charts.
Consonants

This is the English consonantal system using symbols from the International Phonetic Alphabet (IPA).

• Nasal: m (bilabial); n (alveolar); ŋ (velar) (1)
• Plosive: p, b (bilabial); t, d (alveolar); k, ɡ (velar)
• Affricate: tʃ, dʒ (post-alveolar) (4)
• Fricative: f, v (labio-dental); θ, ð (dental) (3); s, z (alveolar); ʃ, ʒ (post-alveolar) (4); ç (palatal) (5);
x (velar) (6); h (glottal)
• Flap: ɾ (alveolar) (2)
• Approximant: ɹ (post-alveolar) (4); j (palatal); ʍ, w (labial-velar) (7)
• Lateral: l (alveolar)

Parenthesized numbers refer to the numbered notes below.

1. The velar nasal [ŋ] is a non-phonemic allophone of /n/ in some northerly British accents,
appearing only before /k/ and /g/. In all other dialects it is a separate phoneme, although it only
occurs in syllable codas.
2. The alveolar tap [ɾ] is an allophone of /t/ and /d/ in unstressed syllables in North American
English and Australian English.[42] This is the sound of tt or dd in the words latter and ladder,
which are homophones for many speakers of North American English. In some accents such as
Scottish English and Indian English it replaces /ɹ/. This is the same sound represented by single r
in most varieties of Spanish.
3. In some dialects, such as Cockney, the interdentals /θ/ and /ð/ are usually merged with /f/ and /v/,
and in others, like African American Vernacular English, /ð/ is merged with dental /d/. In some
Irish varieties, /θ/ and /ð/ become the corresponding dental plosives, which then contrast with the
usual alveolar plosives.
4. The sounds /ʃ/, /ʒ/, and /ɹ/ are labialised in some dialects. Labialisation is never contrastive in
initial position and therefore is sometimes not transcribed. Most speakers of General American
realize <r> (always rhoticized) as the retroflex approximant /ɻ/, whereas the same is realized in
Scottish English, etc. as the alveolar trill.
5. The voiceless palatal fricative /ç/ is in most accents just an allophone of /h/ before /j/; for instance human /çjuːmən/. However, in some accents the /j/ is dropped, but the initial consonant is the same.
6. The voiceless velar fricative /x/ is used by Scottish or Welsh speakers of English for Scots/Gaelic
words such as loch /lɒx/ or by some speakers for loanwords from German and Hebrew like Bach
/bax/ or Chanukah /xanuka/. /x/ is also used in South African English. In some dialects such as
Scouse (Liverpool) either [x] or the affricate [kx] may be used as an allophone of /k/ in words
such as docker [dɒkxə]. Most native English speakers have great difficulty pronouncing [x] when learning a foreign language, and substitute [k] or [h] instead.
7. Voiceless w [ʍ] is found in Scottish and Irish English, as well as in some varieties of American,
New Zealand, and English English. In most other dialects it is merged with /w/, in some dialects
of Scots it is merged with /f/.

Voicing and aspiration

Voicing and aspiration of stop consonants in English depend on dialect and context, but a few general
rules can be given:

• Voiceless plosives and affricates (/ p/, / t/, / k/, and / tʃ/) are aspirated when they are word-initial
or begin a stressed syllable – compare pin [pʰɪn] and spin [spɪn], crap [kʰɹ̥æp] and scrap
[skɹæp].
o In some dialects, aspiration extends to unstressed syllables as well.
o In other dialects, such as Indian English, all voiceless stops remain unaspirated.
• Word-initial voiced plosives may be devoiced in some dialects.
• Word-terminal voiceless plosives may be unreleased or accompanied by a glottal stop in some
dialects (e.g. many varieties of American English) – examples: tap [tʰæp̚], sack [sæk̚].
• Word-terminal voiced plosives may be devoiced in some dialects (e.g. some varieties of American
English) – examples: sad [sæd̥], bag [bæɡ̊]. In other dialects, they are fully voiced in final
position, but only partially voiced in initial position.

Supra-segmental features

Tone groups

English is an intonation language. This means that the pitch of the voice is used syntactically, for
example, to convey surprise and irony, or to change a statement into a question.

In English, intonation patterns are on groups of words, which are called tone groups, tone units, intonation
groups or sense groups. Tone groups are said on a single breath and, as a consequence, are of limited length, typically about five words long and lasting roughly two seconds. For example:

- /duː juː niːd ˈɛnɪˌθɪŋ/ Do you need anything?
- /aɪ dəʊnt | nəʊ/ I don't, no
- /aɪ dəʊnt nəʊ/ I don't know (contracted to, for example, - /aɪ dəʊnəʊ/ or /aɪ dənəʊ/ I
dunno in fast or colloquial speech that de-emphasises the pause between don't and know even
further)

Characteristics of intonation

English is a strongly stressed language: certain syllables, both within words and within phrases, receive relative prominence and loudness in pronunciation, while others do not. The former are said to be accented or stressed, and the latter unaccented or unstressed.

Hence in a sentence, each tone group can be subdivided into syllables, which can either be stressed
(strong) or unstressed (weak). The stressed syllable is called the nuclear syllable. For example:
That | was | the | best | thing | you | could | have | done!

Here, all syllables are unstressed, except the syllables/words best and done, which are stressed. Best is
stressed harder and, therefore, is the nuclear syllable.

The nuclear syllable carries the main point the speaker wishes to make. For example:

*John* had not stolen that money. (... Someone else had.)
John *had not* stolen that money. (... Someone said he had. or ... Not at that time, but later he did.)
John had not *stolen* that money. (... He acquired the money by some other means.)
John had not stolen *that* money. (... He had stolen some other money.)
John had not stolen that *money*. (... He had stolen something else.)

Also

*I* did not tell her that. (... Someone else told her)
I *did not* tell her that. (... You said I did. or ... but now I will)
I did not *tell* her that. (... I did not say it; she could have inferred it, etc)
I did not tell *her* that. (... I told someone else)
I did not tell her *that*. (... I told her something else)

This can also be used to express emotion:

Oh, really? (...I did not know that)
Oh, really? (...I disbelieve you. or ... That is blatantly obvious)

The nuclear syllable is spoken more loudly than the others and has a characteristic change of pitch. The
changes of pitch most commonly encountered in English are the rising pitch and the falling pitch,
although the fall-rising pitch and/or the rise-falling pitch are sometimes used. In this opposition between
falling and rising pitch, which plays a larger role in English than in most other languages, falling pitch
conveys certainty and rising pitch uncertainty. This can have a crucial impact on meaning, specifically in
relation to polarity, the positive–negative opposition; thus, falling pitch means, "polarity known", while
rising pitch means "polarity unknown". This underlies the rising pitch of yes/no questions. For example:

When do you want to be paid?
Now? (Rising pitch. In this case, it denotes a question: "Can I be paid now?" or "Do you desire to
pay now?")
Now. (Falling pitch. In this case, it denotes a statement: "I choose to be paid now.")

Grammar
Main article: English grammar

English grammar has minimal inflection compared with most other Indo-European languages. For
example, Modern English, unlike Modern German or Dutch and the Romance languages, lacks
grammatical gender and adjectival agreement. Case marking has almost disappeared from the language
and mainly survives in pronouns. The patterning of strong (e.g. speak/spoke/spoken) versus weak verbs
inherited from its Germanic origins has declined in importance in modern English, and the remnants of
inflection (such as plural marking) have become more regular.
At the same time, the language has become more analytic, and has developed features such as modal
verbs and word order as resources for conveying meaning. Auxiliary verbs mark constructions such as
questions, negative polarity, the passive voice and progressive aspect.

Vocabulary
The English vocabulary has changed considerably over the centuries.[43]


Like many languages deriving from Proto-Indo-European (PIE), many of the most common words in
English can trace back their origin (through the Germanic branch) to PIE. Such words include the basic
pronouns I, from Old English ic, (cf. Latin ego, Greek ego, Sanskrit aham), me (cf. Latin me, Greek eme,
Sanskrit mam), numbers (e.g. one, two, three, cf. Latin unus, duo, tres, Greek oinos "ace (on dice)", duo,
treis), common family relationships such as mother, father, brother, sister etc. (cf. Greek "meter", Latin "mater", Sanskrit "matṛ"; mother), names of many animals (cf. Sanskrit mus, Greek mys, Latin mus;
mouse), and many common verbs (cf. Greek gignōmi, Latin gnoscere, Hittite kanes; to know).

Germanic words (generally words of Old English or to a lesser extent Norse origin) tend to be shorter
than the Latinate words of English and more common in ordinary speech. This includes nearly all the
basic pronouns, prepositions, conjunctions, modal verbs etc. that form the basis of English syntax and
grammar. The longer Latinate words are often regarded as more elegant or educated. However, the
excessive use of Latinate words is considered at times to be either pretentious or an attempt to obfuscate
an issue. George Orwell's essay "Politics and the English Language", considered an important
scrutinization of the English language, is critical of this, as well as other perceived misuse of the
language.

An English speaker is in many cases able to choose between Germanic and Latinate synonyms: come or
arrive; sight or vision; freedom or liberty. In some cases, there is a choice between a Germanic derived
word (oversee), a Latin derived word (supervise), and a French word derived from the same Latin word
(survey). Such synonyms harbor a variety of different meanings and nuances, enabling the speaker to
express fine variations or shades of thought. Familiarity with the etymology of groups of synonyms can
give English speakers greater control over their linguistic register. See: List of Germanic and Latinate
equivalents in English.

An exception to this and a peculiarity perhaps unique to English is that the nouns for meats are commonly
different from, and unrelated to, those for the animals from which they are produced, the animal
commonly having a Germanic name and the meat having a French-derived one. Examples include: deer
and venison; cow and beef; swine/pig and pork, or sheep and mutton. This is assumed to be a result of the
aftermath of the Norman invasion, where a French-speaking elite were the consumers of the meat,
produced by Anglo-Saxon lower classes.[citation needed]

Since the majority of words used in informal settings will normally be Germanic, such words are often the
preferred choices when a speaker wishes to make a point in an argument in a very direct way. A majority
of Latinate words (or at least a majority of content words) will normally be used in more formal speech
and writing, such as a courtroom or an encyclopedia article.[citation needed] However, there are other Latinate
words that are used normally in everyday speech and do not sound formal; these are mainly words for
concepts that no longer have Germanic words, and are generally assimilated better and in many cases do
not appear Latinate. For instance, the words mountain, valley, river, aunt, uncle, move, use, push and stay
are all Latinate.
English easily accepts technical terms into common usage and often imports new words and phrases.
Examples of this phenomenon include: cookie, Internet and URL (technical terms), as well as genre, über,
lingua franca and amigo (imported words/phrases from French, German, modern Latin, and Spanish,
respectively). In addition, slang often provides new meanings for old words and phrases. In fact, this
fluidity is so pronounced that a distinction often needs to be made between formal forms of English and
contemporary usage.

See also: sociolinguistics.

Number of words in English

The General Explanations at the beginning of the Oxford English Dictionary states:

"The Vocabulary of a widely diffused and highly cultivated living language is not a fixed quantity circumscribed by definite limits... there is absolutely no defining line in any direction: the circle of the English language has a well-defined centre but no discernible circumference."

The vocabulary of English is undoubtedly vast, but assigning a specific number to its size is more a matter
of definition than of calculation. Unlike other languages such as French, German, Spanish and Italian, there is no Academy to define officially accepted words and spellings. Neologisms are coined regularly in
medicine, science and technology and other fields, and new slang is constantly developed. Some of these
new words enter wide usage; others remain restricted to small circles. Foreign words used in immigrant
communities often make their way into wider English usage. Archaic, dialectal, and regional words might
or might not be widely considered as "English".

The Oxford English Dictionary, 2nd edition (OED2) includes over 600,000 definitions, following a rather
inclusive policy:

"It embraces not only the standard language of literature and conversation, whether current at the moment, or obsolete, or archaic, but also the main technical vocabulary, and a large measure of dialectal usage and slang" (Supplement to the OED, 1933).[44]

The editors of Webster's Third New International Dictionary, Unabridged (475,000 main headwords) in
their preface, estimate the number to be much higher. It is estimated that about 25,000 words are added to
the language each year.[45]

Word origins

Main article: Lists of English words of international origin

One of the consequences of the French influence is that the vocabulary of English is, to a certain extent,
divided between those words which are Germanic (mostly West Germanic, with a smaller influence from
the North Germanic branch) and those which are "Latinate" (Latin-derived, either directly or from
Norman French or other Romance languages).

Numerous sets of statistics have been proposed to demonstrate the origins of English vocabulary. None is yet considered definitive by most linguists.
A computerised survey of about 80,000 words in the old Shorter Oxford Dictionary (3rd ed.) was
published in Ordered Profusion by Thomas Finkenstaedt and Dieter Wolff (1973)[46] that estimated the
origin of English words as follows:

Influences in English vocabulary

• Langue d'oïl, including French and Old Norman: 28.3%
• Latin, including modern scientific and technical Latin: 28.24%
• Other Germanic languages (including words directly inherited from Old English): 25%
• Greek: 5.32%
• No etymology given: 4.03%
• Derived from proper names: 3.28%
• All other languages contributed less than 1%

A survey by Joseph M. Williams in Origins of the English Language of 10,000 words taken from several
thousand business letters gave this set of statistics:[47]

• French (langue d'oïl): 41%
• "Native" English: 33%
• Latin: 15%
• Danish: 2%
• Dutch: 1%
• Other: 10%

However, 83% of the 1,000 most-common, and all of the 100 most-common English words are Germanic.
[48]
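
Purely as an arithmetic cross-check, the figures quoted above can be encoded and summed. The snippet below is illustrative only: the category labels are shortened paraphrases of those in the surveys, and the two studies group words differently, so they are not comparable item by item.

```python
# Vocabulary-origin estimates quoted above, encoded for a quick sum check.
# Figures (percent) are exactly those given in the text.

finkenstaedt_wolff = {  # Shorter Oxford Dictionary (3rd ed.), ~80,000 words
    "Langue d'oil (incl. French, Old Norman)": 28.3,
    "Latin (incl. scientific/technical)": 28.24,
    "Other Germanic (incl. Old English)": 25.0,
    "Greek": 5.32,
    "No etymology given": 4.03,
    "Proper names": 3.28,
}

williams = {  # 10,000 words sampled from business letters
    "French (langue d'oil)": 41.0,
    "Native English": 33.0,
    "Latin": 15.0,
    "Danish": 2.0,
    "Dutch": 1.0,
    "Other": 10.0,
}

for name, survey in (("Finkenstaedt & Wolff", finkenstaedt_wolff),
                     ("Williams", williams)):
    total = sum(survey.values())
    print(f"{name}: {total:.2f}% listed")

# Finkenstaedt & Wolff's listed categories leave ~5.83% for the languages
# that each contributed less than 1%; Williams' figures, as quoted in the
# text, actually sum to 102%.
```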

Dutch origins

Main article: List of English words of Dutch origin

Words describing the navy, types of ships, and other objects or activities on the water are often of Dutch origin. Yacht (jacht) and cruiser (kruiser) are examples.

French origins

Main article: List of French words and phrases used by English speakers

There are many words of French origin in English, such as competition, art, table, publicity, police, role,
routine, machine, force, and many others that have been and are being anglicised; they are now
pronounced according to English rules of phonology, rather than French. A large portion of English
vocabulary is of French or Langues d'oïl origin, most derived from, or transmitted via, the Anglo-Norman
spoken by the upper classes in England for several hundred years after the Norman conquest of England.

Idiomatic
The multiple origins of words used in English, and the willingness of English speakers to innovate and be creative, have resulted in many expressions that seem somewhat odd to newcomers to the language, but which can convey complex meanings in sometimes-colorful ways.
Consider, for example, the common idiom of using the same word to mean an activity and those engaged
in that activity, and sometimes also a verb. Here are a few examples:

Activity Actor(s) Verb

aggregation aggregation aggregate

assembly assembly assemble

congregation congregation congregate

court court court

delegation delegation delegate

hospital hospital (none)

gathering gathering gather

hunt hunt hunt

march march march

militia militia (none)

ministry ministry minister

movement movement move

muster muster muster

police police police

service service serve

university university (none)

viking viking vik (archaic)

wedding wedding wed

Writing system
Main articles: English alphabet and English orthography

English has been written using the Latin alphabet since around the ninth century. (Before that, Old
English had been written using Anglo-Saxon runes.) The spelling system, or orthography, is multilayered,
with elements of French, Latin and Greek spelling on top of the native Germanic system; it has grown to
vary significantly from the phonology of the language. The spelling of words often diverges considerably
from how they are spoken.

Though letters and sounds may not correspond in isolation, spelling rules that take into account syllable
structure, phonetics, and accents are 75% or more reliable.[49] Some phonics spelling advocates claim that
English is more than 80% phonetic.[50]

In general, the English language, being the product of many other languages and having only been
codified orthographically in the 16th century, has fewer consistent relationships between sounds and
letters than many other languages. The consequence of this orthographic history is that reading can be
challenging.[51] It takes longer for students to become completely fluent readers of English than of many
other languages, including French, Greek, and Spanish.[52]

Basic sound-letter correspondence

See also: Hard and soft C and Hard and soft G

Only the consonant letters are pronounced in a relatively regular way:

IPA — alphabetic representation (dialect-specific variants in parentheses where relevant):

/p/: p
/b/: b
/t/: t; th (rarely: thyme, Thames). Dialect-specific: th as in thing (African American, New York)
/d/: d. Dialect-specific: th as in that (African American, New York)
/k/: c (+ a, o, u, consonants), k, ck, ch, qu (rarely: conquer), kh (in foreign words)
/g/: g, gh, gu (+ a, e, i), gue (final position)
/m/: m
/n/: n
/ŋ/: n (before g or k), ng
/f/: f, ph, gh (final, infrequent: laugh, rough). Dialect-specific: th as in thing (many forms of English in England)
/v/: v. Dialect-specific: th as in with (Cockney, Estuary English)
/θ/: th (thick, think, through)
/ð/: th (that, this, the)
/s/: s; c (+ e, i, y); sc (+ e, i, y); ç (façade)
/z/: z; s (finally or occasionally medially); ss (rarely: possess, dessert); word-initial x (xylophone)
/ʃ/: sh, sch; ti (before vowel: portion); ci/ce (before vowel: suspicion, ocean); si/ssi (before vowel: tension, mission); ch (esp. in words of French origin); rarely s/ss before u (sugar, issue); chsi in fuchsia only
/ʒ/: medial si (before vowel: division); medial s (before "ur": pleasure); zh (in foreign words); z before u (azure); g (+ e, i, y) in words of French origin (genre)
/x/: kh, ch, h (in foreign words). Dialect-specific: occasionally ch as in loch (Scottish English, Welsh English)
/h/: h (syllable-initially, otherwise silent)
/tʃ/: ch, tch; t before u (future, culture). Dialect-specific: t (+ u, ue, eu: tune, Tuesday, Teutonic) in several dialects - see Phonological history of English consonant clusters
/dʒ/: j; g (+ e, i, y); dg (+ e, i, consonant: badge, judg(e)ment). Dialect-specific: d (+ u, ue, ew: dune, due, dew) in several dialects - another example of yod coalescence
/ɹ/: r; wr (initial: wrangle)
/j/: y (initially or surrounded by vowels)
/l/: l
/w/: w
/ʍ/: wh (pronounced hw). Occurs in Scottish and Irish English, as well as in some varieties of American, New Zealand, and English English
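
The regular correspondences above amount to a lookup table, and a deliberately naive sketch of one makes the point. This is illustration only, not a workable grapheme-to-phoneme converter (real English spelling needs context, stress, and a large exception dictionary); the names and the selection of entries are invented for the example.

```python
# A toy lookup of a few of the regular consonant correspondences listed
# above. Ambiguous spellings are noted in comments; a real system would
# need context to resolve them.

CONSONANT_IPA = {
    "p": "p", "b": "b", "m": "m", "n": "n",
    "l": "l", "w": "w", "v": "v",
    "ng": "ŋ",    # as in "sing"
    "th": "θ",    # ambiguous: also /ð/ as in "that" -- context needed
    "sh": "ʃ",
    "ch": "tʃ",   # also /k/ ("chorus") or /ʃ/ ("chef")
}

def lookup(grapheme: str) -> str:
    """Return a typical IPA value, or '?' when the spelling is irregular."""
    return CONSONANT_IPA.get(grapheme, "?")

print(lookup("sh"))   # ʃ
print(lookup("x"))    # ? -- x can be /ks/, /z/, etc., depending on position
```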

Written accents

Main article: English words with diacritics

Unlike most other Germanic languages, English has almost no diacritics except in foreign loanwords (like
the acute accent in café), and in the uncommon use of a diaeresis mark (often in formal writing) to
indicate that two vowels are pronounced separately, rather than as one sound (e.g. naïve, Zoë). It may be
acceptable to leave out the marks, depending on the target audience, or the context in which the word is
used.
Some English words retain the diacritic to distinguish them from others, such as animé, exposé, lamé, öre,
øre, pâté, piqué, and rosé, though these are sometimes also dropped (résumé/resumé is usually spelled
resume in the United States). There are loan words which occasionally use a diacritic to represent their
pronunciation that is not in the original word, such as maté, from Spanish yerba mate, following the
French usage, but they are extremely rare.

Formal written English
Main article: Formal written English

A version of the language almost universally agreed upon by educated English speakers around the world
is called formal written English. It takes virtually the same form no matter where in the English-speaking
world it is written. In spoken English, by contrast, there are a vast number of differences between dialects,
accents, and varieties of slang, colloquial and regional expressions. In spite of this, local variations in the
formal written version of the language are quite limited, being restricted largely to the spelling differences
between British and American English.

Basic and simplified versions
To make English easier to read, there are some simplified versions of the language. One basic version is
named Basic English, a constructed language with a small number of words created by Charles Kay
Ogden and described in his book Basic English: A General Introduction with Rules and Grammar (1930).
Ogden said that it would take seven years to learn English, seven months for Esperanto, and seven weeks for Basic English, comparable with Ido. Basic English is thus used by companies that need to produce complex books for international use, and by language schools that need to give people some knowledge of English in a short time.

Ogden did not put any words into Basic English that could be said with a few other words and he worked
to make the words work for speakers of any other language. He put his set of words through a large
number of tests and adjustments. He also made the grammar simpler, but tried to keep the grammar
normal for English users.

The concept gained its greatest publicity just after the Second World War as a tool for world peace.
Although it was not built into a program, similar simplifications were devised for various international
uses.

Another version, Simplified English, exists, which is a controlled language originally developed for
aerospace industry maintenance manuals. It offers a carefully limited and standardised[who?] subset of
English. Simplified English has a lexicon of approved words and those words can only be used in certain
ways. For example, the word close can be used in the phrase "Close the door" but not "do not go close to
the landing gear".
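
A controlled language of this kind can be thought of as a word-list check plus per-word usage rules. The sketch below is a hypothetical toy, not the real Simplified English specification; its approved word list is invented for the example, and it checks only membership, not the usage rules that the real standard also imposes.

```python
# Toy controlled-language checker: flag any word not in an approved
# lexicon. The lexicon here is invented for illustration only.

APPROVED = {"close", "the", "door", "do", "not", "go", "to",
            "landing", "gear"}

def unapproved_words(sentence: str) -> list[str]:
    """Return the words of `sentence` not in the approved lexicon
    (case-insensitive, naive whitespace tokenisation)."""
    return [w for w in sentence.lower().split() if w not in APPROVED]

print(unapproved_words("Close the door"))            # []
print(unapproved_words("Shut the door carefully"))   # ['shut', 'carefully']
```

Note that this membership check alone cannot express the restriction quoted above (close allowed as a verb but not as an adverb); that requires per-word part-of-speech rules on top of the lexicon.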

See also
• Changes to Old English vocabulary
• English for Academic Purposes
• English language learning and teaching
• Language Report
• Teaching English as a foreign language
Notes
1. ^ "English, a. and n." The Oxford English Dictionary. 2nd ed. 1989. OED Online. Oxford
University Press. 6 September 2007 http://dictionary.oed.com/cgi/entry/50075365
2. ^ see: Ethnologue (1984 estimate); The Triumph of English, The Economist, Dec. 20, 2001;
Ethnologue (1999 estimate); "20,000 Teaching Jobs" (in English). Oxford Seminars. Retrieved on
2007-02-18.;
3. ^ a b "Lecture 7: World-Wide English". EHistLing. Retrieved on 2007-03-26.
4. ^ Ethnologue, 1999
5. ^ a b Languages of the World (Charts), Comrie (1998), Weber (1997), and the Summer Institute for
Linguistics (SIL) 1999 Ethnologue Survey. Available at The World's Most Widely Spoken
Languages
6. ^ "Global English: gift or curse?". Retrieved on 2005-04-04.
7. ^ a b c d e David Graddol (1997). "The Future of English?". The British Council. Retrieved on 2007-
04-15.
8. ^ "The triumph of English". The Economist (2001-12-20). Retrieved on 2007-03-26.
9. ^ "Lecture 7: World-Wide English". EHistLing. Retrieved on 2007-03-26.
10. ^ Anglik English language resource
11. ^ [1]
12. ^ Linguistics research center Texas University,
13. ^ The Germanic Invasions of Western Europe, Calgary University
14. ^ English Language Expert
15. ^ History of English, Chapter 5 "From Old to Middle English"
16. ^ Curtis, Andy. Color, Race, And English Language Teaching: Shades of Meaning. 2006, page
192.
17. ^ Ethnologue, 1999
18. ^ CIA World Factbook, Field Listing - Languages (World).
19. ^ Mair, Victor H. (1991). "What Is a Chinese "Dialect/Topolect"? Reflections on Some Key Sino-
English Linguistic Terms". Sino-Platonic Papers. http://sino-
platonic.org/complete/spp029_chinese_dialect.pdf.
20. ^ "English language". Columbia University Press (2005). Retrieved on 2007-03-26.
21. ^ 20,000 Teaching
22. ^ Not the Queen's English, Newsweek International, 7 March edition, 2007.
23. ^ "U.S. Census Bureau, Statistical Abstract of the United States: 2003, Section 1 Population" (pdf)
(in English) 59 pages. U.S. Census Bureau. Table 47 gives the figure of 214,809,000 for those five
years old and over who speak exclusively English at home. Based on the American Community
Survey, these results exclude those living communally (such as college dormitories, institutions,
and group homes), and by definition exclude native English speakers who speak more than one
language at home.
24. ^ a b The Cambridge Encyclopedia of the English Language, Second Edition, Crystal, David;
Cambridge, UK: Cambridge University Press, [1995 (2003-08-03).]
25. ^ Population by mother tongue and age groups, 2006 counts, for Canada, provinces and territories
– 20% sample data, Census 2006, Statistics Canada.
26. ^ Census Data from Australian Bureau of Statistics Main Language Spoken at Home. The figure
is the number of people who only speak English at home.
27. ^ Census in Brief, page 15 (Table 2.5), 2001 Census, Statistics South Africa.
28. ^ Languages spoken, 2006 Census, Statistics New Zealand. No figure is given for the number of
native speakers, but it would be somewhere between the number of people who spoke English
only (3,008,058) and the total number of English speakers (3,673,623), if one ignores the 197,187
people who did not provide a usable answer.
29. ^ Subcontinent Raises Its Voice, Crystal, David; Guardian Weekly: Friday 19 November 2004.
30. ^ Yong Zhao; Keith P. Campbell (1995). "English in China". World Englishes 14 (3): 377–390.
Hong Kong contributes an additional 2.5 million speakers (1996 by-census]).
31. ^ Census of India's Indian Census, Issue 10, 2003, pp 8-10, (Feature: Languages of West Bengal
in Census and Surveys, Bilingualism and Trilingualism).
32. ^ Tropf, Herbert S. 2004. India and its Languages. Siemens AG, Munich
33. ^ For the distinction between "English Speakers," and "English Users," please see: TESOL-India
(Teachers of English to Speakers of Other Languages)], India: World's Second Largest English-
Speaking Country. Their article explains the difference between the 350 million number
mentioned in a previous version of this Wikipedia article and a more plausible 90 million number:
"Wikipedia's India estimate of 350 million includes two categories - "English Speakers" and "English Users". The distinction between the Speakers and Users is that Users only know how to read English words while Speakers know how to read English, understand spoken English as well as form their own sentences to converse in English. The distinction becomes clear when you consider the China numbers. China has over 200~350 million users that can read English words but, as anyone can see on the streets of China, only handful of million who are English speakers."
34. ^ Australian Bureau of Statistics
35. ^ Nancy Morris (1995), Puerto Rico: Culture, Politics, and Identity, Praeger/Greenwood, pp. 62,
ISBN 0275952282, http://books.google.com/books?id=vyQDYqz2kFsC&pg=RA1-
PA62&lpg=RA1-PA62&dq=%22puerto+rico
%22+official+language+1993&source=web&ots=AZKLran6u3&sig=8fkQ9gwM0B0kwVYMNt
Xr-_9dnro
36. ^ Languages Spoken in the U.S., National Virtual Translation Center, 2006.
37. ^ U.S. English Foundation, Official Language Research – United Kingdom.
38. ^ U.S. ENGLISH,Inc
39. ^ Multilingualism in Israel, Language Policy Research Center
40. ^ The Official EU languages
41. ^ European Union
42. ^ Cox, Felicity (2006). "Australian English Pronunciation into the 21st century". Prospect 21: 3–
21. http://www.shlrc.mq.edu.au/~felicity/Papers/Prospect_Erratum_v1.pdf. Retrieved on 22 July
2007.
43. ^ For the processes and triggers of English vocabulary changes cf. English and General Historical
Lexicology (by Joachim Grzega and Marion Schöner)
44. ^ It went on to clarify,
"Hence we exclude all words that had become obsolete by 1150 [the end of the Old English era] ... Dialectal words and forms which occur since 1500 are not admitted, except when they continue the history of the word or sense once in general use, illustrate the history of a word, or have themselves a certain literary currency."
45. ^ Kister, Ken. "Dictionaries defined." Library Journal, 6/15/92, Vol. 117 Issue 11, p43, 4p, 2bw
46. ^ Finkenstaedt, Thomas; Dieter Wolff (1973). Ordered profusion; studies in dictionaries and the
English lexicon. C. Winter. ISBN 3-533-02253-6.
47. ^ Joseph M. Williams, Origins of the English Language at Amazon.com
48. ^ Old English Online
49. ^ Abbott, M. (2000). Identifying reliable generalizations for spelling words: The importance of
multilevel analysis. The Elementary School Journal 101(2), 233-245.
50. ^ Moats, L. M. (2001). Speech to print: Language essentials for teachers. Baltimore, MD: Paul H.
Brookes Company.
51. ^ Diane McGuinness, Why Our Children Can’t Read (New York: Touchstone, 1997) pp. 156-169
52. ^ Ziegler, J. C., & Goswami, U. (2005). Reading acquisition, developmental dyslexia, and skilled
reading across languages. Psychological Bulletin, 131(1), 3-29.
References
• Baugh, Albert C.; Thomas Cable (2002). A history of the English language (5th ed.). Routledge. ISBN 0-415-28099-0.
• Bragg, Melvyn (2004). The Adventure of English: The Biography of a Language. Arcade
Publishing. ISBN 1-55970-710-0.
• Crystal, David (1997). English as a Global Language. Cambridge: Cambridge University Press.
ISBN 0-521-53032-6.
• Crystal, David (2004). The Stories of English. Allen Lane. ISBN 0-7139-9752-4.
• Crystal, David (2003). The Cambridge encyclopedia of the English language (2nd ed.). Cambridge University Press. ISBN 0-521-53033-4.
• Halliday, MAK (1994). An introduction to functional grammar (2nd ed.). London: Edward Arnold. ISBN 0-340-55782-6.
• Hayford, Harrison; Howard P. Vincent (1954). Reader and Writer. Houghton Mifflin Company.
[2]
• McArthur, T. (ed.) (1992). The Oxford Companion to the English Language. Oxford University
Press. ISBN 0-19-214183-X.
• Robinson, Orrin (1992). Old English and Its Closest Relatives. Stanford Univ. Press. ISBN 0-
8047-2221-8.
• Kenyon, John Samuel and Knott, Thomas Albert, A Pronouncing Dictionary of American English,
G & C Merriam Company, Springfield, Mass, USA,1953.

External links


• English language at Ethnologue
• National Clearinghouse for English Language Acquisition
• Accents of English from Around the World Hear and compare how the same 110 words are
pronounced in 50 English accents from around the world - instantaneous playback online
• The Global English Survey Project A survey tracking how non-native speakers around the world
use English
• 6000 English words recorded by a native speaker
• More than 20000 English words recorded by a native speaker

Dictionaries

• Merriam-Webster's online dictionary
• Oxford's online dictionary
• dict.org
• English language word roots, prefixes and suffixes (affixes) dictionary
• Collection of English bilingual dictionaries
Retrieved from "http://en.wikipedia.org/wiki/English_language"
History of the English language
From Wikipedia, the free encyclopedia


English is a West Germanic language which originated from the Anglo-Frisian dialects brought to Britain
by Germanic settlers and Roman auxiliary troops from various parts of what is now northwest Germany
and the northern Netherlands. Initially, Old English was a diverse group of dialects, reflecting the varied
origins of the Anglo-Saxon Kingdoms of England. One of these dialects, Late West Saxon, eventually
came to dominate. The original Old English language was then influenced by two waves of invasion: the
first by speakers of the Scandinavian branch of the Germanic language family, who conquered and
colonized parts of Britain in the 8th and 9th centuries; the second by the Normans in the 11th century,
who spoke Old Norman and ultimately developed an English variety of this called Anglo-Norman. These
two invasions caused English to become "mixed" to some degree, though it was never a truly mixed
language in the strict linguistic sense of the word, as mixed languages arise from the cohabitation of
speakers of different languages, who develop a hybrid tongue for basic communication.

Cohabitation with the Scandinavians resulted in a significant grammatical simplification and lexical
enrichment of the Anglo-Frisian core of English; the later Norman occupation led to the grafting onto that
Germanic core of a more elaborate layer of words from the Romance languages. This Norman influence
entered English largely through the courts and government. Thus, English developed into a "borrowing"
language of great flexibility, resulting in an enormous and varied vocabulary.

Contents

• 1 Proto-English
• 2 Old English
• 3 Middle English
• 4 Early Modern English
• 5 Historic English text samples
o 5.1 Old English
o 5.2 Middle English
o 5.3 Early Modern English
o 5.4 Modern English
• 6 See also

• 7 References

Proto-English
The Germanic tribes that gave rise to the English language (the Angles, Saxons, Frisians, Jutes and
perhaps even the Franks), both traded and fought with the Latin-speaking Roman Empire in the centuries-
long process of the Germanic peoples' expansion into Western Europe. Many Latin words for common
objects entered the vocabulary of these Germanic peoples before any of their tribes reached Britain;
examples include camp, cheese, cook, fork, inch, kettle, kitchen, linen, mile, mill, mint (coin), noon,
pillow, pin, pound, punt (boat), street and wall. The Romans also gave the English language words which
they had themselves borrowed from other languages: anchor, butter, chest, devil, dish, sack and wine.

Our main source for the culture of the Germanic peoples (the ancestors of the English) in ancient times is
Tacitus' Germania. While remaining quite conversant with Roman civilisation and its economy, including
serving in the Roman military, they retained political independence. We can be certain that Germanic
settlement in Britain was not intensified until the time of Hengist and Horsa in the fifth century, since had
the English arrived en masse under Roman rule, they would have been thoroughly Christianised as a
matter of course. As it was, the Angles, Saxons and Jutes arrived as pagans, independent of Roman
control.

According to the Anglo-Saxon Chronicle, around the year 449, Vortigern (or Gwrtheyrn from the Welsh
tradition), King of the Britons, invited the "Angle kin" (Angles led by Hengest and Horsa) to help him in
conflicts with the Picts. In return, the Angles were granted lands in the southeast of England. Further aid
was sought, and in response "came men of Ald Seaxum of Anglum of Iotum" (Saxons, Angles and Jutes).
The Chronicle talks of a subsequent influx of settlers who eventually established seven kingdoms, known
as the heptarchy. Modern scholarship considers most of this story to be legendary and politically
motivated, and the identification of the tribes with the Angles, Saxons and Jutes is no longer accepted as
an accurate description (Myres, 1986, p. 46ff), especially since the Anglo-Saxon language is more similar
to the Frisian languages than to any of the others.

Old English
The first page of the Beowulf manuscript
Main article: Old English language

The invaders' Germanic language displaced the indigenous Brythonic languages of what became England.
The original Celtic languages remained in Scotland, Wales and Cornwall. The dialects spoken by the
Anglo-Saxons formed what is now called Old English. Later, it was strongly influenced by the North
Germanic language Norse, spoken by the Vikings who invaded and settled mainly in the north-east of
England (see Jórvík and Danelaw). The new and the earlier settlers spoke languages from different
branches of the Germanic family; many of their lexical roots were the same or similar, although their
grammars were more distinct, including the prefix, suffix and inflection patterns for many words. The
Germanic language of these Old English-speaking inhabitants was influenced by contact with Norse
invaders, which might have been responsible for some of the morphological simplification of Old
English, including the loss of grammatical gender and explicitly marked case (with the notable exception
of the pronouns). The most famous surviving work from the Old English period is a fragment of the epic
poem "Beowulf" composed by an unknown poet; it is thought to have been substantially modified,
probably by Christian clerics long after its composition.

The period when England was ruled by Anglo-Saxon kings, with the assistance of their clergy, was an era
in which the Old English language was not only alive, but thriving. Since it was used for legal, political,
religious and other intellectual purposes, Old English is thought to have coined new words from native
Anglo-Saxon roots, rather than to have "borrowed" foreign words. (This point is made in a standard text,
The History of the English Language, by Baugh).

The introduction of Christianity added another wave of Latin and some Greek words.

The Old English period formally ended with the Norman Conquest, when the language was influenced to
an even greater extent by the Old Norman spoken by the new rulers.

The use of Anglo-Saxon to describe a merging of Anglian and Saxon languages and cultures is a
relatively modern development. According to Lois Fundis (Stumpers-L, Fri, 14 Dec 2001), "The first
citation for the second definition of 'Anglo-Saxon', referring to early English language or a certain dialect
thereof, comes during the reign of Elizabeth I, from a historian named Camden, who seems to be the
person most responsible for the term becoming well-known in modern times".

Middle English
Main article: Middle English

For about 300 years following the Norman Conquest in 1066, the Norman kings and their high nobility
spoke only one of the langues d'oïl called Anglo-Norman, whilst English continued to be the language of
the common people. Various contemporary sources suggest that within fifty years of the invasion, most of
the Normans outside the royal court spoke English[citation needed], with French remaining the prestige language
of government and law, largely out of social inertia. For example, Orderic Vitalis, a historian born in 1075
and the son of a Norman knight, said that he learned French only as a second language[citation needed]. A
tendency for French-derived words to have more formal connotations has continued to the present day;
most modern English speakers would consider a "cordial reception" (from French) to be more formal than
a "hearty welcome" (Germanic). Another example is the very unusual construction of the words for
animals being separate from the words for their food products e.g. beef and pork (from the French boeuf
and porc) being the products of the Germanically-named animals 'cow' and 'pig'.

While the Anglo-Saxon Chronicle continued until 1154, most other literature from this period was in Old
Norman or Latin. A large number of Norman words were taken into Old English, with many doubling for
Old English words. The Norman influence is the hallmark of the linguistic shifts in English over the
period of time following the invasion, producing what is now referred to as Middle English. English was
also influenced by the Celtic languages it was displacing, most notably with the introduction of the
continuous aspect, a feature found in many modern languages, but developed earlier and more thoroughly
in English.[1][2] English spelling was also influenced by Norman in this period, with the /θ/ and /ð/ sounds
being spelled th rather than with the Old English letters þ (thorn) and ð (eth), which did not exist in
Norman. The most famous writer of the Middle English period was Geoffrey Chaucer, and of his works
The Canterbury Tales is the best known.
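The Norman-era respelling of þ and ð as th can be illustrated with a short sketch (a hypothetical helper written for this article, not an established tool):

```python
# Sketch: transliterate the Old English letters thorn (þ) and eth (ð)
# into the Norman-influenced spelling "th", as described above.
def modernize_spelling(text: str) -> str:
    # str.maketrans maps each single character to its replacement string.
    table = str.maketrans({"þ": "th", "ð": "th", "Þ": "Th", "Ð": "Th"})
    return text.translate(table)

print(modernize_spelling("þæt wæs gōd cyning"))  # thæt wæs gōd cyning
```

Applied to the Beowulf excerpt below, such a substitution yields spellings much closer to the modern th convention, while the rest of the orthography stays Old English.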

English literature started to reappear around 1200, when a changing political climate and the decline in
Anglo-Norman made it more respectable. The Provisions of Oxford, released in 1258, were the first
English government document to be published in the English language since the Conquest.[3] Edward III
became the first king to address Parliament in English when he did so in 1362.[4] By the end of that
century, even the royal court had switched to English. Anglo-Norman remained in use in limited circles
somewhat longer, but it had ceased to be a living language.

Early Modern English
Main article: Early Modern English

Modern English is often dated from the Great Vowel Shift, which took place mainly during the 15th
century. English was further transformed by the spread of a standardised London-based dialect in
government and administration and by the standardising effect of printing. By the time of William
Shakespeare (mid-late 16th century), the language had become clearly recognizable as Modern English.

English has continuously adopted foreign words, especially from Latin and Greek, since the Renaissance.
(In the 17th century, Latin words were often used with their original inflections, but these eventually
disappeared.) Because English has absorbed words from many languages and its spelling is variable, the
risk of mispronunciation is high; remnants of the older forms nevertheless remain in a few regional
dialects, most notably in the West Country.

In 1755, Samuel Johnson published the first significant English dictionary, his Dictionary of the English
Language.

Historic English text samples
Old English

Beowulf lines 1 to 11, approximately AD 900

Hwæt! Wē Gār-Dena in geārdagum,
þēodcyninga, þrym gefrūnon,
hū ðā æþelingas ellen fremedon.
Oft Scyld Scēfing sceaþena þrēatum,
monegum mǣgþum, meodosetla oftēah,
egsode eorlas. Syððan ǣrest wearð
fēasceaft funden, hē þæs frōfre gebād,
wēox under wolcnum, weorðmyndum þāh,
oðþæt him ǣghwylc þāra ymbsittendra
ofer hronrāde hȳran scolde,
gomban gyldan. þæt wæs gōd cyning!

Which, as translated by Francis Gummere, means:

Lo, praise of the prowess of people-kings
of spear-armed Danes, in days long sped,
we have heard, and what honor the athelings won!
Oft Scyld the Scefing from squadroned foes,
from many a tribe, the mead-bench tore,
awing the earls. Since erst he lay
friendless, a foundling, fate repaid him:
for he waxed under welkin, in wealth he throve,
till before him the folk, both far and near,
who house by the whale-path, heard his mandate,
gave him gifts: a good king he!

Here is a sample prose text, the beginning of The Voyages of Ohthere and Wulfstan. The full text can be
found at The Voyages of Ohthere and Wulfstan, at Wikisource.

Ōhthere sǣde his hlāforde, Ælfrēde cyninge, ðæt hē ealra Norðmonna norþmest būde. Hē cwæð þæt hē būde on
þǣm lande norþweardum wiþ þā Westsǣ. Hē sǣde þēah þæt þæt land sīe swīþe lang norþ þonan; ac hit is eal
wēste, būton on fēawum stōwum styccemǣlum wīciað Finnas, on huntoðe on wintra, ond on sumera on fiscaþe be
þǣre sǣ. Hē sǣde þæt hē æt sumum cirre wolde fandian hū longe þæt land norþryhte lǣge, oþþe hwæðer ǣnig
mon be norðan þǣm wēstenne būde. Þā fōr hē norþryhte be þǣm lande: lēt him ealne weg þæt wēste land on ðæt
stēorbord, ond þā wīdsǣ on ðæt bæcbord þrīe dagas. Þā wæs hē swā feor norþ swā þā hwælhuntan firrest faraþ. Þā
fōr hē þā giet norþryhte swā feor swā hē meahte on þǣm ōþrum þrīm dagum gesiglau. Þā bēag þæt land, þǣr
ēastryhte, oþþe sēo sǣ in on ðæt lond, hē nysse hwæðer, būton hē wisse ðæt hē ðǣr bād westanwindes ond hwōn
norþan, ond siglde ðā ēast be lande swā swā hē meahte on fēower dagum gesiglan. Þā sceolde hē ðǣr bīdan
ryhtnorþanwindes, for ðǣm þæt land bēag þǣr sūþryhte, oþþe sēo sǣ in on ðæt land, hē nysse hwæþer. Þā siglde
hē þonan sūðryhte be lande swā swā hē meahte on fīf dagum gesiglan. Ðā læg þǣr ān micel ēa ūp on þæt land. Ðā
cirdon hīe ūp in on ðā ēa for þǣm hīe ne dorston forþ bī þǣre ēa siglan for unfriþe; for þǣm ðæt land wæs eall
gebūn on ōþre healfe þǣre ēas. Ne mētte hē ǣr nān gebūn land, siþþan hē from his āgnum hām fōr; ac him wæs
ealne weg wēste land on þæt stēorbord, būtan fiscerum ond fugelerum ond huntum, ond þæt wǣron eall Finnas;
ond him wæs āwīdsǣ on þæt bæcbord. Þā Boermas heafdon sīþe wel gebūd hira land: ac hīe ne dorston þǣr on
cuman. Ac þāra Terfinna land wæs eal wēste, būton ðǣr huntan gewīcodon, oþþe fisceras, oþþe fugeleras.

This may be translated as:

Ohthere said to his lord, King Alfred, that he of all Norsemen lived north-most. He quoth that he lived in the land
northward along the North Sea. He said though that the land was very long from there, but it is all wasteland,
except that in a few places here and there Finns [i.e. Sami] encamp, hunting in winter and in summer fishing by the
sea. He said that at some time he wanted to find out how long the land lay northward or whether any man lived
north of the wasteland. Then he traveled north by the land. All the way he kept the waste land on his starboard and
the wide sea on his port three days. Then he was as far north as whale hunters furthest travel. Then he traveled still
north as far as he might sail in another three days. Then the land bowed east (or the sea into the land — he did not
know which). But he knew that he waited there for west winds (and somewhat north), and sailed east by the land so
as he might sail in four days. Then he had to wait for due-north winds, because the land bowed south (or the sea
into the land — he did not know which). Then he sailed from there south by the land so as he might sail in five
days. Then a large river lay there up into the land. Then they turned up into the river, because they dared not sail
forth past the river for hostility, because the land was all settled on the other side of the river. He had not
encountered earlier any settled land since he travelled from his own home, but all the way waste land was on his
starboard (except fishers, fowlers and hunters, who were all Finns). And the wide sea was always on his port. The
Bjarmians had cultivated their land very well, but they did not dare go in there. But the Terfinns' land was all
waste except where hunters encamped, or fishers or fowlers.

Middle English

From The Canterbury Tales by Geoffrey Chaucer, 14th century

Whan that Aprill, with his shoures soote
The droghte of March hath perced to the roote
And bathed every veyne in swich licour,
Of which vertu engendred is the flour;
Whan Zephirus eek with his sweete breeth
Inspired hath in every holt and heeth
The tendre croppes, and the yonge sonne
Hath in the Ram his halfe cours yronne,
And smale foweles maken melodye,
That slepen al the nyght with open eye
(So priketh hem Nature in hir corages);
Thanne longen folk to goon on pilgrimages

Glossary:

• soote: sweet
• swich licour: such liquid
• Zephirus: the west wind (Zephyrus)
• eek: also (Dutch ook; German auch)
• holt: wood (German Holz)
• the Ram: Aries, the first sign of the Zodiac
• yronne: run
• priketh hem Nature: Nature pricks them
• hir corages: their hearts

Early Modern English
From Paradise Lost by John Milton, 1667

Of man's first disobedience, and the fruit
Of that forbidden tree, whose mortal taste
Brought death into the world, and all our woe,
With loss of Eden, till one greater Man
Restore us, and regain the blissful seat,
Sing, Heavenly Muse, that on the secret top
Of Oreb, or of Sinai, didst inspire
That shepherd, who first taught the chosen seed,
In the beginning how the Heavens and Earth
Rose out of chaos: or if Sion hill
Delight thee more, and Siloa's brook that flowed
Fast by the oracle of God, I thence
Invoke thy aid to my adventurous song,
That with no middle Flight intends to soar
Above the Aonian mount, while it pursues
Things unattempted yet in prose or rhyme.

Modern English

Taken from Oliver Twist, 1838, by Charles Dickens

The evening arrived; the boys took their places. The master, in his cook's uniform, stationed himself at the copper;
his pauper assistants ranged themselves behind him; the gruel was served out; and a long grace was said over the
short commons. The gruel disappeared; the boys whispered each other, and winked at Oliver; while his next
neighbours nudged him. Child as he was, he was desperate with hunger, and reckless with misery. He rose from the
table; and advancing to the master, basin and spoon in hand, said: somewhat alarmed at his own temerity:

'Please, sir, I want some more'.

The master was a fat, healthy man; but he turned very pale. He gazed in stupefied astonishment on the small rebel
for some seconds, and then clung for support to the copper. The assistants were paralysed with wonder; the boys
with fear. 'What!' said the master at length, in a faint voice.

'Please, sir', replied Oliver, 'I want some more'.

The master aimed a blow at Oliver's head with the ladle; pinioned him in his arm; and shrieked aloud for the beadle.

See also
• Phonological history of the English language
• American and British English differences
• English phonology
• English studies
• List of dialects of the English language
• List of Germanic and Latinate equivalents
• Lists of English words of international origin
• Languages in the United Kingdom
• Middle English creole hypothesis
• History of the Scots language
• Changes to Old English vocabulary
References
Vietnamese language
From Wikipedia, the free encyclopedia


Vietnamese
Tiếng Việt

Pronunciation: tiə̯ŋ˧˥vḭə̯t˨˩ (Northern)
tiə̯ŋ˧˥jiə̯k˨˩˨ (Southern)

Spoken in: Vietnam

Region: Southeast Asia
Total speakers: 70-73 million native (includes 3
million overseas)
80+ million total

Ranking: 13–17 (native); in a near tie with
Korean, Telugu, Marathi and Tamil

Language family: Austro-Asiatic
Mon-Khmer
Vietic
Viet-Muong
Vietnamese

Writing system: Latin alphabet (quốc ngữ)

Official status

Official language in: Vietnam

Regulated by: No official regulation

Language codes

ISO 639-1: vi

ISO 639-2: vie

ISO 639-3: vie

Major Vietnamese-speaking communities


Vietnamese (tiếng Việt, or less commonly Việt ngữ[1]), formerly known under French colonization as
Annamese (see Annam), is the national and official language of Vietnam. It is the mother tongue of the
Vietnamese people (người Việt or người Kinh), who constitute 86% of Vietnam's population, and of about
three million overseas Vietnamese, most of whom live in the United States. It is also spoken as a second
language by some ethnic minorities of Vietnam. It is part of the Austroasiatic language family, of which it
has the most speakers by a significant margin (several times larger than the other Austroasiatic languages
put together). Much vocabulary has been borrowed from Chinese, especially words that denote abstract
ideas, in the same way that European languages borrow from Latin and Greek. The language was formerly
written with the Chinese writing system, albeit in a modified format, and pronounced in the Vietnamese way.
The Vietnamese writing system in use today is an adapted version of the Latin alphabet, with additional
diacritics for tones and certain letters.
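The layering of tone and vowel diacritics on base Latin letters can be seen directly in Unicode, where each mark decomposes into its own combining character. A minimal sketch using Python's standard unicodedata module (the example word is from the article; everything else is illustrative):

```python
import unicodedata

# In NFD (decomposed) form, each Vietnamese diacritic becomes a separate
# combining character: ế = e + circumflex + acute, and ệ = e + dot below
# + circumflex (canonical ordering puts the dot below first).
word = "tiếng Việt"
decomposed = unicodedata.normalize("NFD", word)
marks = [unicodedata.name(c) for c in decomposed if unicodedata.combining(c)]
print(marks)  # four combining marks: two circumflexes, one acute, one dot below
```

This is why a single Vietnamese vowel letter can carry two stacked diacritics: one marking vowel quality (e.g. the circumflex) and one marking tone (e.g. the acute or the dot below).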

Contents

• 1 Geographic distribution
• 2 Genealogical classification
• 3 Language policy
• 4 History
• 5 Language variation
• 6 Vocabulary
• 7 Sounds
o 7.1 Vowels
o 7.2 Tones
o 7.3 Consonants
• 8 Grammar
• 9 Writing system
• 10 Pragmatics and ethnography of communication
o 10.1 Word play
o 10.2 Computer support
• 11 Examples
• 12 See also
• 13 Notes
• 14 Bibliography
o 14.1 General
o 14.2 Sound system
o 14.3 Pragmatics/Language variation
o 14.4 Historical/Comparative
o 14.5 Orthography
o 14.6 Pedagogical
• 15 External links
o 15.1 Dictionaries
o 15.2 Software resources
o 15.3 Vietnamese pedagogy

o 15.4 Other resources

Geographic distribution
As the national language of the majority ethnic group, Vietnamese is spoken throughout Vietnam by the
Vietnamese people, as well as by ethnic minorities. It is also spoken in overseas Vietnamese communities,
most notably in the United States, where it has more than one million speakers and is the seventh most-
spoken language (it is 3rd in Texas, 4th in Arkansas and Louisiana, and 5th in California[2]). In Australia,
it is the sixth most-spoken language.

According to the Ethnologue, Vietnamese is also spoken by substantial numbers of people in Australia,
Bulgaria, Cambodia, Canada, China, Côte d'Ivoire, Cuba, Czech Republic, Finland, France, French
Guiana, Germany, Laos, Martinique, the Netherlands, New Caledonia, Norway, the Philippines, Poland,
Russia, Denmark, Sweden, Senegal, Thailand, the United Kingdom, the United States, Japan, South
Korea, Vanuatu and Taiwan.

Genealogical classification
Vietnamese was identified more than 150 years ago[3] as part of the Mon-Khmer branch of the
Austroasiatic language family (a family that also includes Khmer, spoken in Cambodia, as well as various
tribal and regional languages, such as the Munda languages spoken in eastern India, and others in
southern China). Later, Mường was found to be more closely related to Vietnamese than other Mon-
Khmer languages, and a Việt-Mường sub-grouping was established. As data on more Mon-Khmer
languages was acquired, other minority languages (such as Thavưng, Chứt languages, Hung, etc.) were
found to share Việt-Mường characteristics, and the Việt-Mường term was renamed to Vietic. The older
term Việt-Mường now refers to a lower sub-grouping (within an eastern Vietic branch) consisting of
Vietnamese dialects, Mường dialects, and Nguồn (of Quảng Bình Province).[4]

Language policy
While Vietnamese has been spoken by the Vietnamese people for millennia, written Vietnamese did not
become the official administrative language of Vietnam until the 20th century. For most of its history, the entity now known
as Vietnam used written classical Chinese for governing purposes, while written Vietnamese in the form
of Chữ nôm was used for poetry and literature. It was also used for administrative purposes during the
brief Ho and Tay Son Dynasties. During French colonialism, French superseded Chinese in
administration. It was not until independence from France that Vietnamese was used officially. It is the
language of instruction in schools and universities and is the language for official business.

History
It seems likely that in the distant past, Vietnamese shared more characteristics common to other languages
in the Austroasiatic family, such as an inflectional morphology and a richer set of consonant clusters,
which have subsequently disappeared from the language. However, Vietnamese appears to have been
heavily influenced by its location in the Southeast Asian sprachbund, with the result that it has acquired or
converged toward characteristics such as isolating morphology and tonogenesis. These characteristics,
which may or may not have been part of proto-Austroasiatic, nonetheless have become part of many of
the phylogenetically unrelated languages of Southeast Asia; for example, Thai (one of the Tai-Kadai
languages), Tsat (a member of the Malayo-Polynesian group within Austronesian), and Vietnamese each
developed tones as a phonemic feature, although their respective ancestral languages were not originally
tonal.[citation needed] The Vietnamese language has strong similarities with Cantonese with regard to the
specific intonations and unreleased plosive consonant endings.

The ancestor of the Vietnamese language was originally based in the area of the Red River in what is now
northern Vietnam, and during the subsequent expansion of the Vietnamese language and people into what
is now central and southern Vietnam (through conquest of the ancient nation of Champa and the Khmer
people of the Mekong Delta in the vicinity of present-day Ho Chi Minh City (Saigon)), characteristic
tonal variations have emerged.

Vietnamese was linguistically influenced primarily by Chinese, which came to predominate politically in
the 2nd century B.C.E. With the rise of Chinese political dominance came radical importation of Chinese
vocabulary and grammatical influence. As Chinese was, for a prolonged period, the only medium of
literature and government, as well as the primary written language of the ruling class in Vietnam, much of
the Vietnamese lexicon in all realms consists of Hán Việt (Sino-Vietnamese) words. In fact, as the
vernacular language of Vietnam gradually grew in prestige toward the beginning of the second
millennium, the Vietnamese language was written using Chinese characters (using both the original
Chinese characters, called Hán tự, as well as a system of newly created and modified characters called
Chữ nôm) adapted to write Vietnamese, in a similar pattern as used in Japan (kanji), Korea (hanja), and
other countries in the Sinosphere. The Nôm writing reached its zenith in the 18th century when many
Vietnamese writers and poets composed their works in Chữ Nôm, most notably Nguyễn Du and Hồ Xuân
Hương (dubbed "the Queen of Nôm poetry").

As contact with the West grew, the Quốc Ngữ system of Romanized writing was developed in the 17th
century by Portuguese and other Europeans involved in proselytizing and trade in Vietnam. When France
invaded Vietnam in the late 19th century, French gradually replaced Chinese as the official language in
education and government. Vietnamese adopted many French terms, such as đầm (dame, from madame),
ga (train station, from gare), sơ mi (shirt, from chemise), and búp bê (doll, from poupée). In addition,
many Sino-Vietnamese terms were devised for Western ideas imported through the French. However, the
Romanized script did not come to predominate until the beginning of the 20th century, when education
became widespread and a simpler writing system was found more expedient for teaching and
communication with the general population.

Language variation
There are various mutually intelligible regional varieties (or dialects), the main four being:[5]

Dialect region | Localities | Names under French colonization
Northern Vietnamese | Hanoi, Haiphong, and various provincial forms | Tonkinese
North-central (or Area IV) Vietnamese | Nghệ An (Vinh, Thanh Chương), Thanh Hoá, Quảng Bình, Hà Tĩnh | High Annamese
Central Vietnamese | Huế, Quảng Nam | High Annamese
Southern Vietnamese | Saigon, Mekong (Far West) | Cochinchinese

Audio samples: the first article of the Universal Declaration of Human Rights read by Nghiem Mai Phuong, a
native speaker of a northern variety, and Ho Chi Minh reading his Declaration of Independence; Ho Chi Minh
was from Nghe An Province and spoke a north-central variety.

Vietnamese has traditionally been divided into three dialect regions: North, Central, and South. However,
Michel Fergus and Nguyễn Tài Cẩn offer evidence for considering a North-Central region separate from
Central. The term Haut-Annam refers to dialects spoken from northern Nghệ An Province to southern
(former) Thừa Thiên Province that preserve archaic features (like consonant clusters and undiphthongized
vowels) that have been lost in other modern dialects.

These dialect regions differ mostly in their sound systems (see below), but also in vocabulary (including
basic vocabulary, non-basic vocabulary, and grammatical words) and grammar.[6] The North-central and
Central regional varieties, which have a significant amount of vocabulary differences, are generally less
mutually intelligible to Northern and Southern speakers. There is less internal variation within the
Southern region than the other regions due to its relatively late settlement by Vietnamese speakers (in
around the end of the 15th century). The North-central region is particularly conservative. Along the
coastal areas, regional variation has been neutralized to a certain extent while more mountainous regions
preserve more variation. As for sociolinguistic attitudes, the North-central varieties are often felt to be
"peculiar" or "difficult to understand" by speakers of other dialects.

The large movements of people between North and South beginning in the mid-20th century and
continuing to this day have resulted in a significant number of Southern residents
speaking in the Northern accent/dialect and to a lesser extent, Northern residents speaking in the Southern
accent/dialect. Following the Geneva Accords of 1954 that called for the "temporary" division of the
country, almost a million Northern speakers (mainly from Hanoi and the surrounding Red River Delta
areas) moved South (mainly to Saigon, now Ho Chi Minh City, and the surrounding areas.) About a third
of that number of people made the move in the reverse direction.

Following the reunification of Vietnam in 1975-76, Northern and North-Central speakers from the
densely populated Red River Delta and the traditionally poorer provinces of Nghe An, Ha Tinh and
Quang Binh have continued to move South to look for better economic opportunities. Additionally,
government and military personnel are posted to various locations throughout the country, often away
from their home regions. More recently, the growth of the free market system has resulted in business
people and tourists traveling to distant parts of Vietnam. These movements have resulted in some small
blending of the dialects but more significantly, have made the Northern dialect more easily understood in
the South and vice versa. It is also interesting to note that most Southerners, when singing
modern/popular Vietnamese songs, would do so in the Northern accent. This is true in Vietnam as well as
in the overseas Vietnamese communities.

Regional variation in grammatical words[7]

Northern        Central                Southern    English gloss
này             ni                     nầy         "this"
thế này         ri                     vầy         "thus, this way"
ấy              nớ, tê                 đó          "that"
thế, thế ấy     rứa, rứa tê            vậy đó      "thus, so, that way"
kia             tê                     đó          "that yonder"
kìa             tề                     đó          "that yonder (far away)"
đâu             mô                     đâu         "where"
nào             mô                     nào         "which"
sao, thế nào    răng                   sao         "how"
tôi             tui                    tui         "I, me (polite)"
tao             tau                    tao, qua    "I, me (arrogant, familiar)"
chúng tôi       bầy tui                tụi tui     "we, us (but not you, polite)"
chúng tao       bầy choa               tụi tao     "we, us (but not you, arrogant, familiar)"
mày             mi                     mầy         "you (thou) (arrogant, familiar)"
chúng mày       bây, bọn bây           tụi mầy     "you guys, y'all (arrogant, familiar)"
nó              hắn, nghỉ              nó          "he/him, she/her, it (arrogant, familiar)"
chúng nó        bọn hắn                tụi nó      "they/them (arrogant, familiar)"
ông ấy          ôông nớ                ổng         "he/him, that gentleman, sir"
bà ấy           mệ nớ, mụ nớ, bà nớ    bả          "she/her, that lady, madam"
cô ấy           o nớ                   cổ          "she/her, that unmarried young lady"
chị ấy          ả nớ                   chỉ         "she/her, that young lady"
anh ấy          eng nớ                 ảnh         "he/him, that young man (of equal status)"

The syllable-initial ch and tr digraphs are pronounced distinctly in North-central, Central, and Southern
varieties, but are merged in Northern varieties (i.e. they are both pronounced the same way). The North-
central varieties preserve three distinct pronunciations for d, gi, and r whereas the North has a three-way
merger and the Central and South have a merger of d and gi while keeping r distinct. At the end of
syllables, palatals ch and nh have merged with alveolars t and n, which, in turn, have also partially
merged with velars c and ng in Central and Southern varieties.

Regional consonant correspondences

Syllable-initial consonants:

Orthography[8]    Northern    North-central    Central    Southern
x                 [s]         [s]              [s]        [s]
s                 [s]         [ʂ]              [ʂ]        [ʂ]
ch                [ʨ]         [ʨ]              [ʨ]        [ʨ]
tr                [ʨ]         [ʈʂ]             [ʈʂ]       [ʈʂ]
r                 [z]         [ɹ]              [ɹ]        [ɹ]
d                 [z]         [ɟ]              [j]        [j]
gi                [z]         [j]              [j]        [j]
v                 [v]         [v]              [v]        [j][9]

Syllable-final consonants:

Orthography       Northern    North-central    Central    Southern
c                 [k]         [k]              [k]        [k]
t                 [t]         [t]              [k]        [k]
t after e         [t]         [t]              [k, t]     [k, t]
t after ê         [t]         [t]              [k, t]     [k, t]
t after i         [t]         [t]              [t]        [t]
ch                [c]         [c]              [t]        [t]
ng                [ŋ]         [ŋ]              [ŋ]        [ŋ]
n                 [n]         [n]              [ŋ]        [ŋ]
n after i, ê      [n]         [n]              [n]        [n]
nh                [ɲ]         [ɲ]              [n]        [n]

In addition to the regional variation described above, there is also a merger of l and n in certain rural
varieties:

l, n variation

Orthography    "Mainstream" varieties    Rural varieties
n              [n]                       [n]
l              [l]                       [n]

Variation between l and n can be found even in mainstream Vietnamese in certain words. For example,
the numeral "five" appears as năm by itself and in compound numerals like năm mươi "fifty" but appears
as lăm in mười lăm "fifteen". (See Vietnamese syntax: Cardinal numerals.) In some northern varieties,
this numeral appears with an initial nh instead of l: hai mươi nhăm "twenty-five" vs. mainstream hai mươi
lăm.[10]
The consonant clusters that were originally present in Middle Vietnamese (of the 17th century) have been
lost in almost all modern Vietnamese varieties (but retained in other closely related Vietic languages).
However, some speech communities have preserved some of these archaic clusters: "sky" is blời with a
cluster in Hảo Nho (Yên Mô prefecture, Ninh Binh Province) but trời in Southern Vietnamese and giời in
Hanoi Vietnamese (initial single consonants /ʈʂ, z/, respectively).

Generally, the Northern varieties have six tones while those in other regions have five tones. The hỏi and
ngã tones are distinct in North and some North-central varieties (although often with different pitch
contours) but have merged in Central, Southern, and some North-central varieties (also with different
pitch contours). Some North-central varieties (such as Hà Tĩnh Vietnamese) have a merger of the ngã and
nặng tones while keeping the hỏi tone distinct. Still other North-central varieties have a three-way merger
of hỏi, ngã, and nặng resulting in a four-tone system. In addition, there are several phonetic differences
(mostly in pitch contour and phonation type) in the tones among dialects.

Regional tone correspondences

                     ............ North-central ............
Tone     Northern    Vinh      Thanh Chương    Hà Tĩnh       Central    Southern
ngang    33          35        35              35, 353       35         33
huyền    21̤           33        33              33            33         21
sắc      35          11        11, 13̰           13̰             13̰          35
hỏi      313̰           31        31              31̰ʔ            312        214
ngã      3ʔ5         13̰         31              22̰              312        214
nặng     21̰ʔ           22        22̰               22̰              212        212

The table above shows the pitch contour of each tone using Chao tone number notation (where 1 = lowest
pitch, 5 = highest pitch); glottalization (creaky, stiff, harsh) is indicated with a tilde below (◌̰); breathy
voice with ◌̤; glottal stop with ʔ; sub-dialectal variants are separated with commas. (See also the
tone section below.)

Vocabulary
As a result of a thousand years of Chinese occupation, much of the Vietnamese lexicon relating to science
and politics is derived from Chinese. As much as 70% of the vocabulary has Chinese roots, and many
compound words are Sino-Vietnamese, composed of native Vietnamese words combined with Chinese
borrowings. One can usually distinguish a native Vietnamese word from a Chinese borrowing if it can be
reduplicated or if its meaning does not change when the tone is shifted. As a result of French colonization,
Vietnamese also has words borrowed from French, for example cà phê (from French café). More recently,
many words have been borrowed from English, for example TV (pronounced tivi) and phông for "font".
Sometimes these borrowings are calques literally translated into Vietnamese (e.g. phần mềm for
"software", lit. "soft part").

Sounds
Main article: Vietnamese phonology

Vowels

Like other Southeast Asian languages, Vietnamese has a comparatively large number of vowels. Below is
a vowel chart of Hanoi Vietnamese.

             Front    Central           Back
High         i [i]    ư [ɨ]             u [u]
Upper Mid    ê [e]    â [ə] / ơ [əː]    ô [o]
Lower Mid    e [ɛ]                      o [ɔ]
Low                   ă [a] / a [aː]

Front, central, and low vowels (i, ê, e, ư, â, ơ, ă, a) are unrounded, whereas the back vowels (u, ô, o) are
rounded. The vowels â [ə] and ă [a] are pronounced very short, much shorter than the other vowels.
Thus, ơ and â are basically pronounced the same except that ơ [əː][11] is long while â [ə] is short — the
same applies to the low vowels long a [aː] and short ă [a].[12]

In addition to single vowels (or monophthongs), Vietnamese has diphthongs[13] and triphthongs. The
diphthongs consist of a main vowel component followed by a shorter semivowel offglide to a high front
position [ɪ], a high back position [ʊ], or a central position [ə].[14]

Vowel      Diphthong with    Diphthong with    Diphthong with        Triphthong with    Triphthong with
nucleus    front offglide    back offglide     centering offglide    front offglide     back offglide
i          –                 iu~yu [iʊ̯]        ia~iê~yê~ya [iə̯]      –                  iêu [iə̯ʊ̯]
ê          –                 êu [eʊ̯]           –                     –                  –
e          –                 eo [ɛʊ̯]           –                     –                  –
ư          ưi [ɨɪ̯]           ưu [ɨʊ̯]           ưa~ươ [ɨə̯]            ươi [ɨə̯ɪ̯]          ươu [ɨə̯ʊ̯]
â          ây [əɪ̯]           âu [əʊ̯]           –                     –                  –
ơ          ơi [əːɪ̯]          –                 –                     –                  –
ă          ay [aɪ̯]           au [aʊ̯]           –                     –                  –
a          ai [aːɪ̯]          ao [aːʊ̯]          –                     –                  –
u          ui [uɪ̯]           –                 ua~uô [uə̯]            uôi [uə̯ɪ̯]          –
ô          ôi [oɪ̯]           –                 –                     –                  –
o          oi [ɔɪ̯]           –                 –                     –                  –

The centering diphthongs are formed with only the three high vowels (i, ư, u) as the main vowel. They are
generally spelled as ia, ưa, ua when they end a word and are spelled iê, ươ, uô, respectively, when they
are followed by a consonant. There are also restrictions on the high offglides: the high front offglide
cannot occur after a front vowel (i, ê, e) nucleus and the high back offglide cannot occur after a back
vowel (u, ô, o) nucleus[15].

The correspondence between the orthography and pronunciation is complicated. For example, the offglide
[ɪ̯] is usually written as i; however, it may also be represented with y. In addition, in the diphthongs [aɪ̯]
and [aːɪ̯] the letters y and i also indicate the pronunciation of the main vowel: ay = ă + [ɪ̯], ai = a + [ɪ̯].
Thus, tay "hand" is [taɪ̯] while tai "ear" is [taːɪ̯]. Similarly, u and o indicate different pronunciations of
the main vowel: au = ă + [ʊ̯], ao = a + [ʊ̯]. Thus, thau "brass" is [taʊ̯] while thao "raw silk" is [taːʊ̯].
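The ay/ai and au/ao contrasts above amount to a small lookup table: the letter after the nucleus encodes both the offglide and the nucleus length. The following Python sketch is purely illustrative (the `transcribe` helper is an assumption of this example, not a real romanization-to-IPA converter; it leaves the onset letters as spelled):

```python
# Spelling of an a-nucleus plus high offglide also encodes nucleus length:
# "y"/"u" after the vowel signal short ă [a], "i"/"o" signal long a [aː].
RIME_IPA = {
    "ay": "aɪ̯",   # short nucleus + front offglide
    "ai": "aːɪ̯",  # long nucleus + front offglide
    "au": "aʊ̯",   # short nucleus + back offglide
    "ao": "aːʊ̯",  # long nucleus + back offglide
}

def transcribe(word):
    """Replace a final a-rime with its IPA value; the onset is left as spelled."""
    for rime, ipa in RIME_IPA.items():
        if word.endswith(rime):
            return word[:-len(rime)] + ipa
    return word

print(transcribe("tay"), transcribe("tai"))  # taɪ̯ taːɪ̯
```

Run on the pair from the text, tay "hand" and tai "ear" come out with short and long nuclei respectively, matching the transcriptions above.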

The four triphthongs are formed by adding front and back offglides to the centering diphthongs. Similarly
to the restrictions involving diphthongs, a triphthong with front nucleus cannot have a front offglide (after
the centering glide) and a triphthong with a back nucleus cannot have a back offglide.
With regard to the front and back offglides [ɪ̯, ʊ̯], many phonological descriptions analyze them as
consonant glides /j, w/. Thus, a word such as đâu "where", phonetically [ɗəʊ̯], would be phonemicized
as /ɗəw/.

Tones

Pitch contours and duration of the six Northern Vietnamese tones (but not Hanoi) as uttered by a male
speaker. Fundamental frequency is plotted over time. From Nguyễn & Edmondson (1998).

Vietnamese vowels are all pronounced with an inherent tone[16]. Tones differ in:

• length (duration)
• pitch contour (i.e. pitch melody)
• pitch height
• phonation

Tone is indicated by diacritics written above or below the vowel (most of the tone diacritics appear above
the vowel; however, the nặng tone dot diacritic goes below the vowel).[17] The six tones in the northern
varieties (including Hanoi) are:

Name               Description                               Diacritic           Example                              Sample vowel
ngang 'level'      mid level                                 (no mark)           ma 'ghost'                           a
huyền 'hanging'    low falling (often breathy)               ` (grave accent)    mà 'but'                             à
sắc 'sharp'        high rising                               ´ (acute accent)    má 'cheek, mother (southern)'        á
hỏi 'asking'       mid dipping-rising                        ̉ (hook)             mả 'tomb, grave'                     ả
ngã 'tumbling'     high breaking-rising                      ˜ (tilde)           mã 'horse (Sino-Vietnamese), code'   ã
nặng 'heavy'       low falling constricted (short length)    ̣ (dot below)        mạ 'rice seedling'                   ạ

Other dialects of Vietnamese have fewer tones (typically only five). See the language variation section
above for a brief survey of tonal differences among dialects.
In Vietnamese poetry, tones are classed into two groups:

Tone group Tones within tone group

bằng "level, flat" ngang and huyền

trắc "oblique, sharp" sắc, hỏi, ngã, and nặng

Words with tones belonging to a particular tone group must occur in certain positions within the poetic verse.
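Because the tone is carried by a single combining diacritic in Unicode (with ngang unmarked), the bằng/trắc classification can be computed mechanically. A minimal sketch, assuming precomposed NFC input that is decomposed with NFD (the function name `tone_group` is this example's own):

```python
import unicodedata

HUYEN = "\u0300"                                       # grave accent (huyền)
TRAC_MARKS = {"\u0301", "\u0309", "\u0303", "\u0323"}  # sắc, hỏi, ngã, nặng

def tone_group(syllable):
    """Classify a syllable as bằng (ngang/huyền) or trắc (sắc/hỏi/ngã/nặng)."""
    # NFD separates the tone diacritic from the vowel letter.
    marks = [c for c in unicodedata.normalize("NFD", syllable)
             if c in TRAC_MARKS or c == HUYEN]
    if marks and marks[0] in TRAC_MARKS:
        return "trắc"
    return "bằng"  # no tone mark (ngang) or grave accent (huyền)

for w in ("ma", "mà", "má", "mả", "mã", "mạ"):
    print(w, tone_group(w))
```

Note that vowel-quality diacritics (circumflex, breve, horn) are also combining characters under NFD but are deliberately not in either set, so a word like "mã" and a word like "mấ..."-type spellings are kept apart from quality marks.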

Consonants

The consonants that occur in Vietnamese are listed below in the Vietnamese orthography with the
phonetic pronunciation to the right.

                           Labial     Alveolar    Retroflex    Palatal      Velar         Glottal
Stop         voiceless     p [p]      t [t]       tr [ʈʂ~ʈ]    ch [c~tɕ]    c/k [k]
             aspirated                th [tʰ]
             voiced        b [ɓ]      đ [ɗ]                    d [ɟ]
Fricative    voiceless     ph [f]     x [s]       s [ʂ]                     kh [x]        h [h]
             voiced        v [v]      gi [z]      r [ʐ~ɹ]                   g/gh [ɣ]
Nasal                      m [m]      n [n]                    nh [ɲ]       ng/ngh [ŋ]
Approximant                u/o [w]    l [l]                    y/i [j]
Some consonant sounds are written with a single letter (like "p"); others are written with a two-letter
digraph (like "ph"); and some are written in more than one way (the velar stop is written variously as "c",
"k", or "q").

Not all dialects of Vietnamese have the same consonant in a given word (although all dialects use the
same spelling in the written language). See the language variation section above for further elaboration.

Syllable-final orthographic ch and nh in Hanoi Vietnamese have been analyzed in different ways. One
analysis has final ch, nh as phonemes /c, ɲ/ contrasting with syllable-final t, c /t, k/ and n, ng
/n, ŋ/, and identifies final ch with syllable-initial ch /c/. The other analysis has final ch and nh as
predictable allophonic variants of the velar phonemes /k/ and /ŋ/ that occur after the upper front vowels i
/i/ and ê /e/. (See Vietnamese phonology: Analysis of final ch, nh for further details.)

Grammar
Main articles: Vietnamese syntax and Vietnamese morphology

Vietnamese, like many languages in Southeast Asia, is an analytic (or isolating) language. Vietnamese
does not use morphological marking of case, gender, number or tense (and, as a result, has no
finite/nonfinite distinction).[18] Also like other languages in the region, Vietnamese syntax conforms to
Subject Verb Object word order, is head-initial (displaying modified-modifier ordering), and has a noun
classifier system. Additionally, it is pro-drop, wh-in-situ, and allows verb serialization.

Some Vietnamese sentences with English word glosses and translations are provided below.

Mai là sinh viên.
Mai be student
"Mai is a student."
Giáp rất cao.
Giap very tall
"Giap is very tall."
Người đó là anh nó.
person that be brother he
"That person is his brother."
Con chó này chẳng bao giờ sủa cả.
CLASSIFIER dog this not ever bark at.all
"This dog never barks at all."
Nó chỉ ăn cơm Việt Nam thôi.
he only eat food Vietnam only
"He only eats Vietnamese food."
Cái thằng chồng em nó chẳng ra gì.
FOCUS CLASSIFIER husband I (as wife) he not turn.out what
"That husband of mine, he is good for nothing."
Tôi thích cái con ngựa đen.
I (generic) like FOCUS CLASSIFIER horse black
"I like the black horse."

Writing system
Main article: Vietnamese alphabet

Currently, the written language uses the Vietnamese alphabet (quốc ngữ or "national script", literally
"national language"), based on the Latin alphabet. Originally a Romanization of Vietnamese, it was
codified in the 17th century by a French Jesuit missionary named Alexandre de Rhodes (1591–1660),
based on works of earlier Portuguese missionaries (Gaspar do Amaral and António Barbosa). The use of
the script was gradually extended from its initial domain in Christian writing to become more popular
among the general public.

Under French colonial rule, the script became official and required for all public documents in 1910 by
issue of a decree by the French Résident Supérieur of the protectorate of Tonkin. By the end of the first
half of the 20th century, virtually all writing was done in quốc ngữ.

Changes in the script were made by French scholars and administrators and by conferences held after
independence during 1954–1974. The script now reflects a so-called Middle Vietnamese dialect that has
vowels and final consonants most similar to northern dialects and initial consonants most similar to
southern dialects (Nguyễn 1996). This Middle Vietnamese is presumably close to the Hanoi variety as
spoken sometime after 1600 but before the present.

Before French rule, the first two Vietnamese writing systems were based on Chinese script:

• the standard Chinese character set called chữ nho (scholar's characters, 字儒): used to write
Literary Chinese
• a complicated variant form known as chữ nôm (southern/vernacular characters, 字喃) with
characters not found in the Chinese character set; this system was better adapted to the unique
phonetic aspects of Vietnamese which differed from Chinese

The authentic Chinese writing, chữ nho, was in more common usage, whereas chữ nôm was used by
members of the educated elite (one needs to be able to read chữ nho in order to read chữ nôm). Both
scripts have fallen out of common usage in modern Vietnam, and chữ nôm is nearly extinct.

Pragmatics and ethnography of communication

• ethnography of communication
• politeness (see Sophana (2004, 2005))
• pragmatics
• sociolinguistics
• speech acts

Word play

A language game known as nói lái is used by Vietnamese speakers and is often considered clever. Nói lái
involves switching the tones in a pair of words and also the order of the two words or the first consonant
and rime of each word; the resulting nói lái pair preserves the original sequence of tones. Some examples:

Original phrase                            Phrase after nói lái transformation          Structural change
đái dầm "(child) wet their pants"        → dấm đài (nonsense words)                     word order and tone switch
chửa hoang "pregnancy out of wedlock"    → hoảng chưa "aren't you scared?"              word order and tone switch
bầy tôi "all the king's subjects"        → bồi tây "servant in a French household"      initial consonant, rime, and tone switch
bí mật "secrets"                         → bật mí "revealing secrets"                   initial consonant and rime switch

The resulting transformed phrase often has a different meaning but sometimes may just be a nonsensical
word pair. Nói lái can be used to obscure the original meaning and thus soften the discussion of a socially
sensitive issue, as with dấm đài and hoảng chưa (above) or, when implied (and not overtly spoken), to
deliver a hidden subtextual message, as with bồi tây[19]. Naturally, nói lái can be used for a humorous
effect.[20]
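The word-order-and-tone switch in the first two rows above can be sketched with Unicode decomposition: strip the tone marks, swap the two bases, and reattach the tone marks in their original sequence. This is an illustrative toy, not a full implementation of nói lái: the tone-placement rule (after the first vowel and its quality marks) is a simplification that fails for some spellings such as hoảng, and the consonant-and-rime variants of the game are not covered. All function names here are this example's own:

```python
import unicodedata

TONE_MARKS = {"\u0300", "\u0301", "\u0303", "\u0309", "\u0323"}  # huyền, sắc, ngã, hỏi, nặng

def split_tone(syllable):
    """Separate a syllable into a toneless base and its tone mark (via NFD)."""
    base, tone = [], ""
    for ch in unicodedata.normalize("NFD", syllable):
        if ch in TONE_MARKS:
            tone = ch
        else:
            base.append(ch)
    return "".join(base), tone

def with_tone(base, tone):
    """Reattach a tone mark after the first vowel letter and any quality marks
    (circumflex, breve, horn). Simplified placement rule; see lead-in caveat."""
    chars = list(unicodedata.normalize("NFD", base))
    for i, ch in enumerate(chars):
        if ch in "aeiouy":
            j = i + 1
            while j < len(chars) and unicodedata.combining(chars[j]):
                j += 1
            if tone:
                chars.insert(j, tone)
            break
    return unicodedata.normalize("NFC", "".join(chars))

def noi_lai(first, second):
    """Swap the two words while keeping the original tone sequence in place."""
    base1, tone1 = split_tone(first)
    base2, tone2 = split_tone(second)
    return with_tone(base2, tone1), with_tone(base1, tone2)

print(noi_lai("đái", "dầm"))  # -> ('dấm', 'đài')
```

The result preserves the tone sequence sắc-huyền while exchanging the bases, matching the đái dầm → dấm đài row of the table.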

Another word game, somewhat reminiscent of Pig Latin, is played by children. Here a nonsense syllable
(chosen by the child) is prefixed onto each of a target word's syllables; the initial consonants and rimes of
the two syllables are then switched, with the tone of the original word remaining on the new switched rime.

Nonsense syllable    Target word                            Intermediate form with prefixed syllable    Resulting "secret" word
la                   phở "beef or chicken noodle soup"    → la phở                                    → lơ phả
la                   ăn "to eat"                          → la ăn                                     → lăn a
la                   hoàn cảnh "environment"              → la hoàn la cảnh                           → loan hà lanh cả
chim                 hoàn cảnh "environment"              → chim hoàn chim cảnh                       → choan hìm chanh kỉm

This language game is often used as a "secret" or "coded" language useful for obscuring messages from
adult comprehension.
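The prefix-and-switch steps can also be sketched with Unicode decomposition. This is an illustrative toy under simplifying assumptions: the onset list, the tone-placement rule (after the first vowel and its quality marks), and the function names are all ad hoc to this example, and a multi-syllable word would be handled one syllable at a time:

```python
import unicodedata

TONE_MARKS = {"\u0300", "\u0301", "\u0303", "\u0309", "\u0323"}
# Onset spellings, longest first, so "ph" is not misread as "p" + "h".
ONSETS = sorted(["ngh", "ch", "gh", "gi", "kh", "ng", "nh", "ph", "qu", "th",
                 "tr", "b", "c", "d", "đ", "g", "h", "k", "l", "m", "n", "p",
                 "q", "r", "s", "t", "v", "x"], key=len, reverse=True)

def split_tone(syllable):
    """Strip the tone mark from a syllable (working on the NFD form)."""
    base, tone = [], ""
    for ch in unicodedata.normalize("NFD", syllable):
        if ch in TONE_MARKS:
            tone = ch
        else:
            base.append(ch)
    return "".join(base), tone

def with_tone(base, tone):
    """Put a tone mark after the first vowel letter and its quality marks."""
    chars = list(unicodedata.normalize("NFD", base))
    for i, ch in enumerate(chars):
        if ch in "aeiouy":
            j = i + 1
            while j < len(chars) and unicodedata.combining(chars[j]):
                j += 1
            if tone:
                chars.insert(j, tone)
            break
    return unicodedata.normalize("NFC", "".join(chars))

def split_onset_rime(base):
    """Split a toneless base into onset spelling and rime."""
    for onset in ONSETS:
        if base.startswith(onset):
            return onset, base[len(onset):]
    return "", base

def encode(nonsense, syllable):
    """Swap onsets/rimes of the nonsense syllable and one target syllable;
    the target word's tone stays on the second (switched) rime."""
    n_base, _ = split_tone(nonsense)
    w_base, w_tone = split_tone(syllable)
    n_on, n_rime = split_onset_rime(n_base)
    w_on, w_rime = split_onset_rime(w_base)
    return with_tone(n_on + w_rime, ""), with_tone(w_on + n_rime, w_tone)

print(encode("la", "phở"))  # -> ('lơ', 'phả')
```

Applied to the first table row, la + phở yields lơ phả with the hỏi tone landing on the new rime, as described above.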
Computer support

The Unicode character set contains all Vietnamese characters and the Vietnamese currency symbol. On
systems that do not support Unicode, many 8-bit Vietnamese code pages are available such as VISCII or
CP1258. Where ASCII must be used, Vietnamese letters are often typed using the VIQR convention,
though this is largely unnecessary nowadays, with the increasing ubiquity of Unicode. There are many
software tools that help type true Vietnamese text on US keyboards such as WinVNKey, Unikey on
Windows, or MacVNKey on Macintosh.
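The VIQR convention mentioned above spells diacritics with ASCII punctuation after the base letter (for example Vie^.t for Việt). A minimal, hedged decoder sketch follows; it covers only the core mnemonics and dd for đ, and it omits VIQR's escape mechanism, so literal punctuation such as a sentence-final period would be wrongly interpreted as a tone mark:

```python
import unicodedata

# Core VIQR mnemonic -> Unicode combining character (partial mapping).
VIQR_MARKS = {
    "^": "\u0302",  # circumflex: a^ -> â
    "(": "\u0306",  # breve:      a( -> ă
    "+": "\u031B",  # horn:       o+ -> ơ
    "'": "\u0301",  # sắc:        a' -> á
    "`": "\u0300",  # huyền:      a` -> à
    "?": "\u0309",  # hỏi:        a? -> ả
    "~": "\u0303",  # ngã:        a~ -> ã
    ".": "\u0323",  # nặng:       a. -> ạ
}

def viqr_to_unicode(text):
    out, i = [], 0
    while i < len(text):
        pair = text[i:i + 2]
        if pair in ("dd", "Dd", "DD"):  # dd -> đ, Dd/DD -> Đ
            out.append("\u0111" if pair == "dd" else "\u0110")
            i += 2
        elif text[i] in VIQR_MARKS and out and (
                out[-1].isalpha() or unicodedata.combining(out[-1])):
            out.append(VIQR_MARKS[text[i]])  # attach mark to previous letter
            i += 1
        else:
            out.append(text[i])
            i += 1
    # NFC reorders the combining marks canonically and composes them.
    return unicodedata.normalize("NFC", "".join(out))

print(viqr_to_unicode("Vie^.t Nam"))  # -> Việt Nam
```

The final NFC step is what turns e + circumflex + dot-below into the single precomposed letter ệ, which is why the same input works regardless of the order in which the marks are typed.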

Examples
See "The Tale of Kieu" for an extract of the first six lines of Truyện Kiều, an epic narrative poem by the
celebrated poet Nguyễn Du, 阮攸), which is often considered the most significant work of Vietnamese
literature. It was originally written in Nôm (titled Đoạn Trường Tân Thanh 斷腸新聲) and is widely
taught in Vietnam today.

See also
• Chữ nho
• Chữ nôm
• Sino-Tibetan languages
• Sino-Vietnamese vocabulary
• Vietic languages
• Vietnamese alphabet
• Vietnamese literature
• Vietnamese morphology
• Vietnamese phonology
• Vietnamese syntax

Notes
1. ^ Another variant, tiếng Việt Nam, is rarely used by native speakers and is likely a neologism from
translating literally from a foreign language. It is most often used by non-native speakers and mostly found
in documents translated from another language.
2. ^ "Detailed List of Languages Spoken at Home for the Population 5 Years and Over by State: 2000"
(PDF). 2000 United States Census. United States Census Bureau (2003). Retrieved on April 11, 2006.
3. ^ "Mon-Khmer languages: The Vietic branch". SEAlang Projects. Retrieved on November 8.
4. ^ Even though this is supported by etymological comparison, some linguists still believe that Viet-
Muong is a separate family, genealogically unrelated to Mon-Khmer languages.
5. ^ Sources on Vietnamese variation include: Alves (forthcoming), Alves & Nguyễn (2007), Emeneau
(1947), Hoàng (1989), Honda (2006), Nguyễn, Đ.-H. (1995), Pham (2005), Thompson (1991[1965]), Vũ
(1982), Vương (1981).
6. ^ Some differences in grammatical words are noted in Vietnamese grammar: Demonstratives, Vietnamese
grammar: Pronouns.
7. ^ Table data from Hoàng (1989).
8. ^ As can be seen from the correspondences in the table, no Vietnamese dialect has preserved all of the
contrasts implied by the current writing system.
9. ^ In southern dialects, v is reported to have a spelling pronunciation (i.e., the spelling influences
pronunciation) of [vj] or [bj] among educated speakers. However, educated speakers revert to usual [j] in
more relaxed speech. Less educated speakers have [j] more consistently throughout their speech. See:
Thompson (1959), Thompson (1965: 85, 89, 93, 97-98).
10. ^ Gregerson (1981) notes that this variation was present in de Rhodes's time in some initial consonant
clusters: mlẽ ~ mnhẽ "reason" (cf. modern Vietnamese lẽ "reason").
11. ^ The symbol ː represents long vowel length.
12. ^ There are different descriptions of Hanoi vowels. Another common description is that of Thompson
(1965):

             Front    Central    Back unrounded    Back rounded
High         i [i]               ư [ɯ]             u [u]
Upper Mid    ê [e]               ơ [ɤ]             ô [o]
Lower Mid    e [ɛ]               â [ʌ]             o [ɔ]
Low                   a [a]      ă [ɐ]

This description distinguishes four degrees of vowel height and a rounding contrast (rounded vs.
unrounded) among the back vowels. The relative shortness of ă [ɐ] and â [ʌ] would, then, be a secondary
feature. Thompson describes the vowel ă [ɐ] as being slightly higher (upper low) than a [aː].
13. ^ In Vietnamese, diphthongs are called âm đôi.
14. ^ The diphthongs and triphthongs as described by Thompson can be compared with the description above:

Thompson's diphthongs:

Vowel nucleus    Front offglide    Back offglide    Centering diphthong
i                –                 iu~yu [iʊ̯]       ia ~ iê [iə̯]
ê                –                 êu [eʊ̯]          –
e                –                 eo [ɛʊ̯]          –
ư                ưi [ɯɪ̯]           ưu [ɯʊ̯]          ưa ~ ươ [ɯə̯]
â                ây [ʌɪ̯]           âu [ʌʊ̯]          –
ơ                ơi [ɤɪ̯]           –                –
ă                ay [ɐɪ̯]           au [ɐʊ̯]          –
a                ai [aɪ̯]           ao [aʊ̯]          –
u                ui [uɪ̯]           –                ua ~ uô [uə̯]
ô                ôi [oɪ̯]           –                –
o                oi [ɔɪ̯]           –                –

Thompson's triphthongs:

Centering diphthong    With front offglide    With back offglide
ia ~ iê                –                      iêu [iə̯ʊ̯]
ưa ~ ươ                ươi [ɯə̯ɪ̯]              ươu [ɯə̯ʊ̯]
ua ~ uô                uôi [uə̯ɪ̯]              –

15. ^ The lack of a diphthong consisting of ơ + back offglide (i.e., [əːʊ̯]) is an apparent gap.
16. ^ Called thanh điệu in Vietnamese.
17. ^ Note that the name of each tone has the corresponding tonal diacritic on the vowel.
18. ^ As such, Vietnamese grammar relies on word order and sentence structure rather than morphology (in
which words change through inflection). Whereas European languages tend to use morphology to express
tense, Vietnamese uses grammatical particles or syntactic constructions.
19. ^ Nguyễn Đ.-H. (1997: 29) gives the following context: "... a collaborator under the French administration
was presented with a congratulatory panel featuring the two Chinese characters quần thần. This Sino-
Vietnamese expression could be defined as bầy tôi meaning 'all the king's subjects'. But those two
syllables, when undergoing commutation of rhyme and tone, would generate bồi tây meaning 'servant in a
French household'."
20. ^ See www.users.bigpond.com/doanviettrung/noilai.html, Language Log's
itre.cis.upenn.edu/~myl/languagelog/archives/001788.html, and tphcm.blogspot.com/2005/01/ni-li.html for
more examples.

Bibliography
General

• Dương, Quảng-Hàm. (1941). Việt-nam văn-học sử-yếu [Outline history of Vietnamese literature].
Saigon: Bộ Quốc gia Giáo dục.
• Emeneau, M. B. (1947). Homonyms and puns in Annamese. Language, 23 (3), 239-244.
• Emeneau, M. B. (1951). Studies in Vietnamese (Annamese) grammar. University of California
publications in linguistics (Vol. 8). Berkeley: University of California Press.
• Hashimoto, Mantaro. (1978). The current state of Sino-Vietnamese studies. Journal of Chinese
Linguistics, 6, 1-26.
• Nguyễn, Đình-Hoà. (1995). NTC's Vietnamese-English dictionary (updated ed.). NTC language
dictionaries. Lincolnwood, IL: NTC Pub. Press. ISBN; ISBN
• Nguyễn, Đình-Hoà. (1997). Vietnamese: Tiếng Việt không son phấn. Amsterdam: John Benjamins
Publishing Company.
• Rhodes, Alexandre de. (1991). Từ điển Annam-Lusitan-Latinh [original: Dictionarium
Annamiticum Lusitanum et Latinum]. (L. Thanh, X. V. Hoàng, & Q. C. Đỗ, Trans.). Hanoi: Khoa
học Xã hội. (Original work published 1651).
• Thompson, Laurence E. (1991). A Vietnamese reference grammar. Seattle: University of
Washington Press. Honolulu: University of Hawaii Press. (Original work published 1965). (Online
version: www.sealang.net/archives/mks/THOMPSONLaurenceC.htm.)
• Uỷ ban Khoa học Xã hội Việt Nam. (1983). Ngữ-pháp tiếng Việt [Vietnamese grammar]. Hanoi:
Khoa học Xã hội.

Sound system

• Michaud, Alexis. (2004). Final consonants and glottalization: New perspectives from Hanoi
Vietnamese. Phonetica, 61, 119-146. Preprint version
• Nguyễn, Văn Lợi; & Edmondson, Jerold A. (1998). Tones and voice quality in modern northern
Vietnamese: Instrumental case studies. Mon-Khmer Studies, 28, 1-18. (Online version:
www.sealang.net/archives/mks/NGUYNVnLoi.htm).
• Thompson, Laurence E. (1959). Saigon phonemics. Language, 35 (3), 454-476.

Pragmatics/Language variation

• Alves, Mark J. (forthcoming). A look at North-Central Vietnamese. In Papers from the Thirteenth
Annual Meeting of the Southeast Asian Linguistics Society. Arizona State University Press. Pre-
publication electronic version:
http://www.geocities.com/malves98/Alves_Vietnamese_Northcentral.pdf.
• Alves, Mark J.; & Nguyễn, Duy Hương. (2007). Notes on Thanh-Chương Vietnamese in Nghệ-An
province. In M. Alves, M. Sidwell, & D. Gil (Eds.), SEALS VIII: Papers from the 8th annual
meeting of the Southeast Asian Linguistics Society 1998 (pp. 1-9). Canberra: Pacific Linguistics,
The Australian National University, Research School of Pacific and Asian Studies. Electronic
version: http://pacling.anu.edu.au/catalogue/SEALSVIII_final.pdf.
• Hoàng, Thị Châu. (1989). Tiếng Việt trên các miền đất nước: Phương ngữ học [Vietnamese in
different areas of the country: Dialectology]. Hà Nội: Khoa học xã hội.
• Honda, Koichi. (2006). F0 and phonation types in Nghe Tinh Vietnamese tones. In P. Warren &
C. I. Watson (Eds.), Proceedings of the 11th Australasian International Conference on Speech
Science and Technology (pp. 454-459). Auckland, New Zealand: University of Auckland.
Electronic version: http://www.assta.org/sst/2006/sst2006-119.pdf.
• Luong, Hy Van. (1987). Plural markers and personal pronouns in Vietnamese person reference:
An analysis of pragmatic ambiguity and negative models. Anthropological Linguistics, 29 (1), 49-
70.
• Pham, Andrea Hoa. (2005). Vietnamese tonal system in Nghi Loc: A preliminary report. In C.
Frigeni, M. Hirayama, & S. Mackenzie (Eds.), Toronto working papers in linguistics: Special
issue on similarity in phonology (Vol. 24, pp. 183-459). Toronto: University of Toronto.
Electronic version: http://r1.chass.utoronto.ca/twpl/pdfs/twpl24/Pham_TWPL24.pdf.
• Sophana, Srichampa. (2004). Politeness strategies in Hanoi Vietnamese speech. Mon-Khmer
Studies, 34, 137-157. (Online version:
www.sealang.net/archives/mks/SOPHANASrichampa.htm).
• Sophana, Srichampa. (2005). Comparison of greetings in the Vietnamese dialects of Ha Noi and
Ho Chi Minh City. Mon-Khmer Studies, 35, 83-99. (Online version:
www.sealang.net/archives/mks/SOPHANASrichampa.htm).
• Vũ, Thang Phương. (1982). Phonetic properties of Vietnamese tones across dialects. In D. Bradley
(Ed.), Papers in Southeast Asian linguistics: Tonation (Vol. 8, pp. 55-75). Sydney: Pacific
Linguistics, The Australian National University.
• Vương, Hữu Lễ. (1981). Vái nhận xét về đặc diểm của vần trong thổ âm Quảng Nam ở Hội An
[Some notes on special qualities of the rhyme in local Quang Nam speech in Hoi An]. In Một Số
Vấn Ðề Ngôn Ngữ Học Việt Nam [Some linguistics issues in Vietnam] (pp. 311-320). Hà Nội: Nhà
Xuất Bản Ðại Học và Trung Học Chuyên Nghiệp.
Historical/Comparative

• Alves, Mark. (1999). "What's so Chinese about Vietnamese?", in Papers from the Ninth Annual
Meeting of the Southeast Asian Linguistics Society. University of California, Berkeley. PDF
• Cooke, Joseph R. (1968). Pronominal reference in Thai, Burmese, and Vietnamese. University of
California publications in linguistics (No. 52). Berkeley: University of California Press.
• Gregerson, Kenneth J. (1969). A study of Middle Vietnamese phonology. Bulletin de la Société
des Etudes Indochinoises, 44, 135-193. (Reprinted in 1981).
• Nguyễn, Đình-Hoà. (1986). Alexandre de Rhodes' dictionary. Papers in Linguistics, 19, 1-18.
• Shorto, Harry L. (2006). A Mon-Khmer comparative dictionary (P. Sidwell, D. Cooper, & C.
Bauer, Eds.). Canberra: Pacific Linguistics, Australian National University.
ISBN
• Thompson, Laurence E. (1967). The history of Vietnamese finals. Language, 43 (1), 362-371.

Orthography

• Haudricourt, André-Georges. (1949). Origine des particularités de l'alphabet vietnamien. Dân
Việt-Nam, 3, 61-68.
• Nguyễn, Đình-Hoà. (1955). Quốc-ngữ: The modern writing system in Vietnam. Washington, D.
C.: Author.
• Nguyễn, Đình-Hoà. (1990). Graphemic borrowing from Chinese: The case of chữ nôm, Vietnam's
demotic script. Bulletin of the Institute of History and Philology, Academia Sinica, 61, 383-432.
• Nguyễn, Đình-Hoà. (1996). Vietnamese. In P. T. Daniels, & W. Bright (Eds.), The world's writing
systems, (pp. 691-699). New York: Oxford University Press. ISBN.

Pedagogical

• Nguyen, Bich Thuan. (1997). Contemporary Vietnamese: An intermediate text. Southeast Asian
language series. Northern Illinois University, Center for Southeast Asian Studies.
• Healy, Dana. (2004). Teach yourself Vietnamese. Teach yourself. Chicago: McGraw-Hill. ISBN
• Hoang, Thinh; Nguyen, Xuan Thu; & Trinh, Quynh-Tram. (2000). Vietnamese phrasebook (3rd
ed.). Hawthorn, Vic.: Lonely Planet. ISBN
• Moore, John. (1994). Colloquial Vietnamese: A complete language course. London: Routledge.
ISBN; ISBN (w/ CD); ISBN (w/ cassettes);
• Nguyễn, Đình-Hoà. (1967). Read Vietnamese: A graded course in written Vietnamese. Rutland,
VT: C.E. Tuttle.
• Lâm, Lý-duc; Emeneau, M. B.; & Steinen, Diether von den. (1944). An Annamese reader.
Berkeley: University of California, Berkeley.
• Nguyễn, Đang Liêm. (1970). Vietnamese pronunciation. PALI language texts: Southeast Asia.
Honolulu: University of Hawaii Press. ISBN -X

External links

Vietnamese language edition of Wikipedia, the free encyclopedia
Wikibooks has a book on the topic of Vietnamese

Vietnamese language edition of Wiktionary, the free dictionary/thesaurus
Wikimedia Commons has media related to: Vietnamese language

• Sound System in Vietnamese
• Translating Vietnamese poetry
• Versification of Vietnamese Riddles
• Vietnamese Online Grammar Project

Dictionaries

• Lexicon of Vietnamese words borrowed from French by Jubinell
• Nom look-up
• The Free Vietnamese Dictionary Project
• VDict: Vietnamese online dictionaries

Software resources

• A Comprehensive Unicode Browser Test Page for Vietnamese / Quốc Ngữ.
• A Comprehensive Unicode Font Test Page for Vietnamese / Quốc Ngữ.
• Online Keyboard for Vietnamese

Vietnamese pedagogy

• 20 lessons
• Omniglot
• Online Vietnamese Pronunciation and Spelling Practice (ASU)
• Online Vietnamese Reading Program (ASU)
• The right place of the Vietnamese accent a simple rule for learners, on where to put the tonal
accent

Other resources

• Ethnologue report for Vietnamese
• Wikibooks in Vietnamese
• Wikisource in Vietnamese
• Wiktionary in Vietnamese

Retrieved from "http://en.wikipedia.org/wiki/Vietnamese_language"
Categories: Vietnamese language | Viet-Muong languages | Languages of Vietnam | Tonal languages
Missile
From Wikipedia, the free encyclopedia

For other uses, see Missile (disambiguation).
For the record label, see Guided Missile.

The RAF's Brimstone missile is a modern fire and forget anti-tank missile.
Exocet missile in flight

A guided missile (see also pronunciation differences) is a self-propelled projectile used as a weapon.
Missiles are typically propelled by rockets or jet engines. Missiles generally have an explosive warhead,
although other weapon types may also be used.

Contents

• 1 Etymology
• 2 Technology
o 2.1 Guidance Systems
o 2.2 Targeting Systems
o 2.3 Flight System
o 2.4 Engine
o 2.5 Warhead
• 3 Early Development
• 4 Basic roles
o 4.1 Surface to Surface/Air to Surface
 4.1.1 Ballistic missiles
 4.1.2 Cruise missiles
 4.1.3 Anti-shipping
 4.1.4 Anti-tank
o 4.2 Surface to Air
 4.2.1 Anti-Aircraft
 4.2.2 Anti-ballistic
o 4.3 Air-to-air
o 4.4 Anti-satellite weapon (ASAT)
o 4.5 Guidance systems

• 5 See also

Etymology
The word missile comes from the Latin verb mittere, literally meaning "to send".

In common military parlance, the word missile describes a powered, guided munition, whilst the word
"rocket" describes a powered, unguided munition. Unpowered, guided munitions are known as guided
bombs. A common further sub-division is to consider ballistic missile to mean a munition that follows a
ballistic trajectory and cruise missile to describe a munition that generates lift.
Technology
Guided missiles have a number of different system components:

• targeting and/or guidance
• flight system
• engine
• warhead

Guidance Systems

Missiles may be targeted in a number of ways. The most common method is to use some form of
radiation, such as infra-red, lasers or radio waves, to guide the missile onto its target. This radiation may
emanate from the target (such as the heat of an engine or the radio waves from an enemy radar), it may be
provided by the missile itself (such as a radar) or it may be provided by a friendly third party (such as the
radar of the launch vehicle/platform, or a laser designator operated by friendly infantry). The first two are
often known as fire and forget as they need no further support or control from the launch vehicle/platform
in order to function. Another method is to use a TV camera - using either visible light or infra-red - in
order to see the target. The picture may be used either by a human operator who steers the missile onto its
target, or by a computer doing much the same job. Many missiles use a combination of two or more of the
above methods, to improve accuracy and the chances of a successful engagement.

Targeting Systems

Another method is to target the missile by knowing the location of the target, and using a guidance system
such as INS, TERCOM or GPS. This guidance system guides the missile by knowing the missile's current
position and the position of the target, and then calculating a course between them. This job can also be
performed somewhat crudely by a human operator who can see the target and the missile, and guides it
using either cable or radio based remote-control.
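As a concrete illustration of "calculating a course between them": given two latitude/longitude fixes, the initial great-circle bearing from current position to target can be computed with the standard navigation formula below. This is a generic textbook calculation, not the algorithm of any particular guidance system:

```python
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing, in degrees, from point 1 toward point 2.

    Inputs are latitude/longitude in degrees. Standard navigation formula;
    illustrative only.
    """
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

# A target due east along the equator lies on a bearing of 90 degrees.
print(round(initial_bearing(0.0, 0.0, 0.0, 10.0)))  # 90
```

A guidance computer would repeat such a computation continuously as the missile's position estimate is updated by INS or GPS.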

Flight System

Whether a guided missile uses a targeting system, a guidance system or both, it needs a flight system. The
flight system uses the data from the targeting or guidance system to maneuver the missile in flight,
allowing it to counter inaccuracies in the missile or to follow a moving target. There are two main
systems: vectored thrust (for missiles that are powered throughout the guidance phase of their flight) and
aerodynamic maneuvering (wings, fins, canards, etc).

Engine

Missiles are powered by an engine, generally either a type of rocket or jet engine. Rockets are generally of
the solid fuel type for ease of maintenance and fast deployment, although some larger ballistic missiles
use liquid fuel rockets. Jet engines are generally used in cruise missiles, most commonly of the turbojet
type, due to its relative simplicity and low frontal area. Ramjets are the only other common form of jet
engine propulsion, although any type of jet engine could theoretically be used. Missiles often have
multiple engine stages, particularly in those launched from the ground - these stages may all be of similar
types or may include a mix of engine types.

Warhead
The warhead or warheads of a missile provide its primary destructive power (many missiles have
extensive secondary destructive power due to the high kinetic energy of the weapon and unburnt fuel that
may be onboard). Warheads are most commonly of the high explosive type, often employing shaped
charges to exploit the accuracy of a guided weapon to destroy hardened targets. Other warhead types
include submunitions, incendiaries, nuclear weapons, chemical, biological or radiological weapons or
kinetic energy penetrators.

Early Development
The first missiles to be used operationally were a series of German missiles of WW2. Most famous of
these are the V1 and V2, both of which used a simple mechanical autopilot to keep the missile flying
along a pre-chosen route. Less well known were a series of anti-shipping and anti-aircraft missiles,
typically based on a simple radio control system directed by the operator.

Basic roles
Missiles are generally categorized by their launch platform and intended target - in broadest terms these
will either be surface (ground or water) and air, and then sub-categorized by range and the exact target
type (such as anti-tank or anti-ship). Many weapons are designed to be launched from either the surface or the
air, and a few are designed to attack either surface or air targets (such as the ADATS missile). Most
weapons require some modification in order to be launched from the air or ground, such as adding
boosters to the ground launched version.

Surface to Surface/Air to Surface

Ballistic missiles

After the boost stage, ballistic missiles follow a trajectory mainly determined by ballistics; guidance
corrects only relatively small deviations from that trajectory.

Ballistic missiles are largely used for land attack missions. Although normally associated with nuclear
weapons, some conventionally armed ballistic missiles are in service, such as ATACMS. The V2 had
demonstrated that a ballistic missile could deliver a warhead to a target city with no possibility of
interception, and the introduction of nuclear weapons meant it could do useful damage when it arrived.
The accuracy of these systems was fairly poor, but post-war development by most military forces
improved the basic inertial platform concept to the point where it could be used as the guidance system on
ICBMs flying thousands of miles. Today the ballistic missile represents the only strategic deterrent in
most military forces; the USAF's continued support of manned bombers is considered by some to be
entirely political in nature.[citation needed] Ballistic missiles are primarily surface launched, with air launch
being theoretically possible using a weapon such as the canceled Skybolt missile.

Cruise missiles

The V1 had been successfully intercepted during the war, but this did not make the cruise missile concept
entirely useless. After the war, the US deployed a small number of nuclear-armed cruise missiles in
Germany, but these were considered to be of limited usefulness. Continued research into much longer
ranged and faster versions led to the US's Navaho missile, and its Soviet counterparts, the Burya and
Buran cruise missile. However, these were rendered largely obsolete by the ICBM, and none was used
operationally. Shorter-range developments have become widely used as highly accurate attack systems,
such as the US Tomahawk missile or the German Taurus missile.
Cruise missiles are generally associated with land attack operations, but also have an important role as
anti shipping weapons. They are primarily launched from air or sea platforms in both roles, although land
based launchers also exist.

Anti-shipping

Another major German missile development project was the anti-shipping class (such as the Fritz X and
Henschel Hs 293), intended to stop any attempt at a cross-channel invasion. However the British were
able to render their systems useless by jamming their radios, and missiles with wire guidance were not
ready by D-Day. After the war the anti-shipping class slowly developed, and became a major class in the
1960s with the introduction of the low-flying turbojet powered cruise missiles known as "sea-skimmers".
These became famous during the Falklands War when an Argentine Exocet missile sank a Royal Navy
destroyer.

A number of anti-submarine missiles also exist; these generally use the missile in order to deliver another
weapon system such as a torpedo or depth charge to the location of the submarine, at which point the
other weapon will conduct the underwater phase of the mission.

Anti-tank

PARS 3 LR, a modern anti-tank fire-and-forget missile of the German Army

By the end of WWII all forces had widely introduced unguided rockets using HEAT warheads as their
major anti-tank weapon (see Panzerfaust, Bazooka). However, these had a limited useful range of 100 m
or so, and the Germans were looking to extend this with the use of a missile using wire guidance, the X-7.
After the war this became a major design class in the later 1950s, and by the 1960s had developed into
practically the only non-tank anti-tank system in general use. During the 1973 Yom Kippur War between
Israel and Egypt, the 9M14 Malyutka (aka "Sagger") man-portable anti-tank missile proved potent against
Israeli tanks. While other guidance systems have been tried, the basic reliability of wire-guidance means
this will remain the primary means of controlling anti-tank missiles in the near future. Anti-tank missiles
may be launched from aircraft, vehicles or by ground troops in the case of smaller weapons.

Surface to Air

Anti-Aircraft
The Stinger shoulder-launched surface-to-air missile system.

By 1944 US and British air forces were sending huge air fleets over occupied Europe, increasing the
pressure on the Luftwaffe day and night fighter forces. The Germans were keen to get some sort of useful
ground-based anti-aircraft system into operation. Several systems were under development, but none had
reached operational status before the war's end. The US Navy also started missile research to deal with the
Kamikaze threat. By 1950 systems based on this early research started to reach operational service,
including the US Army's Nike Ajax, the Navy's "3T's" (Talos, Terrier, Tartar), and soon followed by the
Soviet S-25 Berkut and S-75 Dvina and French and British systems. Anti-aircraft weapons exist for
virtually every possible launch platform, with surface launched systems ranging from huge, self propelled
or ship mounted launchers to man portable systems.

Anti-ballistic

Like most missiles, anti-ballistic missiles such as the Arrow missile and the MIM-104 Patriot, which defend
against short-range missiles, carry explosive warheads.

However, where the closing speed is very large, a projectile without explosives may be used; the collision
alone is sufficient to destroy the target. See Missile Defense Agency for the following systems being developed:

• Kinetic Energy Interceptor (KEI)
• Aegis Ballistic Missile Defense System (Aegis BMD) - a SM-3 missile with Lightweight Exo-
Atmospheric Projectile (LEAP) Kinetic Warhead (KW)

Air-to-air

A modern IRIS-T air-to-air missile of the German Luftwaffe.

Soviet RS-82 rockets were successfully tested in combat at the Battle of Khalkhin Gol in 1939.

German experience in WWII demonstrated that destroying a large aircraft was quite difficult, and they
had invested considerable effort into air-to-air missile systems to do this. Their Me 262 jets often carried
R4M rockets, and other types of "bomber destroyer" aircraft had unguided rockets as well. In the post-war
period the R4M served as the pattern for a number of similar systems, used by almost all interceptor
aircraft during the 1940s and '50s. Lacking guidance systems, such rockets had to be carefully aimed at
relatively close range to successfully hit the target. The US Navy and USAF began deploying guided
missiles in the early 1950s, most famous being the US Navy's AIM-9 Sidewinder and USAF's AIM-4
Falcon. These systems have continued to advance, and modern air warfare consists almost entirely of
missile firing. In the Falklands War technically inferior British Harriers were able to defeat faster
Argentinian opponents using AIM-9G missiles provided by the United States as the conflict began. The
latest heat-seeking designs can lock onto a target from various angles, not just from behind, where the
heat signature from the engines is strongest. Other types rely on radar guidance (either on-board or
"painted" by the launching aircraft). Air to Air missiles also have a wide range of sizes, ranging from
helicopter launched self defense weapons with a range of a few miles, to long range weapons designed for
interceptor aircraft such as the Phoenix missile.

Anti-satellite weapon (ASAT)

The proposed Brilliant Pebbles defense system would use kinetic energy collisions without explosives.
Anti-satellite weapons may be launched either by an aircraft or a surface platform, depending on the
design.

Guidance systems

Missile guidance systems generally fall into a number of basic classes, each one associated with a
particular role. Modern electronics has allowed systems to be mixed on a single airframe, dramatically
increasing the capabilities of the missiles.

See the main article at Missile guidance for details of the types of missile guidance systems.
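Proportional navigation, listed under "See also" below, is one classic class of homing guidance law: the missile commands lateral acceleration proportional to the closing speed times the rotation rate of the line of sight, which drives that rotation rate toward zero and settles the missile onto a collision course. A toy two-dimensional sketch follows; all speeds, positions and the gain N are invented for illustration and do not describe any real weapon:

```python
import math

def min_miss(N=4.0, dt=0.01, steps=6000):
    """Toy 2-D proportional-navigation run; returns closest approach in metres.

    Guidance law: a_cmd = N * Vc * lambda_dot (lateral acceleration equals
    gain times closing speed times line-of-sight rotation rate).
    """
    px, py, v, hdg = 0.0, 0.0, 300.0, 0.0            # pursuer: position, speed, heading
    tx, ty, tvx, tvy = 5000.0, 2000.0, -100.0, 0.0   # target: position, velocity
    lam_prev = math.atan2(ty - py, tx - px)
    min_r = math.hypot(tx - px, ty - py)
    for _ in range(steps):
        px += v * math.cos(hdg) * dt
        py += v * math.sin(hdg) * dt
        tx += tvx * dt
        ty += tvy * dt
        rx, ry = tx - px, ty - py
        r = max(math.hypot(rx, ry), 1e-6)
        min_r = min(min_r, r)
        lam = math.atan2(ry, rx)
        lam_dot = (lam - lam_prev) / dt              # line-of-sight rotation rate
        lam_prev = lam
        # Closing speed: negative of the range rate along the line of sight.
        vc = -(rx * (tvx - v * math.cos(hdg)) + ry * (tvy - v * math.sin(hdg))) / r
        a_cmd = N * vc * lam_dot                     # commanded lateral acceleration
        hdg += (a_cmd / v) * dt                      # turn rate = a / v
    return min_r

print(min_miss() < 100.0)  # a modest gain steers the pursuer to a close pass
```

With the gain set to zero the pursuer flies straight and misses by kilometres, which is the whole point of the guidance law.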

See also

The USS Lake Erie (CG-70) fires a missile at USA 193
Wikimedia Commons has media related to: Missile
Look up missile in
Wiktionary, the free dictionary.

• List of missiles
• List of missiles by nation
• Timeline of rocket and missile technology
• V-1 flying bomb
• V-2 rocket
• Redstone missile
• List of World War II guided missiles of Germany
• Shoulder-launched missile weapon
• Fire-and-forget
• Scramjet
• Missile designation
• Pursuit guidance
• Aeroprediction
• Trajectory optimization
• Proportional navigation
• GPS/INS
• Skid-to-turn
• Center of pressure
Fixed-wing aircraft
From Wikipedia, the free encyclopedia


"Airplane" and "Aeroplane" redirect here. For other uses, see Airplane (disambiguation).

Fixed-wing aircraft
A Jet2.com Boeing 737-300, a modern passenger
airliner

Part of a series on
Categories of Aircraft

Lighter than air (aerostats)

Unpowered Powered

• Balloon • Airship

Hybrid Lighter-than-air/Heavier-than-air

Unpowered Powered

• Hybrid airship

Heavier than air (aerodynes)

Unpowered Powered

Flexible-wing Flexible-wing
• Hang glider • Powered hang glider
• Paraglider • Powered paraglider

Fixed-wing Fixed-wing
• Glider • Powered
airplane/aeroplane

Hybrid fixed/rotary wing
• Tiltwing
• Tiltrotor
• Coleopter

Rotary-wing Rotary-wing
• Rotor kite • Autogyro
• Gyrodyne
("Heliplane")
• Helicopter

Other means of lift
• Ornithopter
• Flettner airplane

see also
• Ground-effect vehicle
• Hovercraft
• Flying Bedstead
• Avrocar

A fixed-wing aircraft is a heavier-than-air craft whose lift is generated not by wing motion relative to the
aircraft, but by forward motion through the air. The term is used to distinguish from rotary-wing aircraft
or ornithopters, where the movement of the wing surfaces relative to the aircraft generates lift. In the US
and Canada, the term airplane is used, though around the rest of the English-speaking world, including
Ireland and Commonwealth nations, the spelling aeroplane is more common. These terms refer to any
fixed wing aircraft powered by propellers or jet engines. The word derives from the Greek αέρας (aéras-)
("air") and -plane.[1] The spelling "aeroplane" is the older of the two, dating back to the mid-late 19th
century.[2] Some fixed-wing aircraft may be remotely or robot controlled.

Contents

• 1 Overview
• 2 Structure
• 3 Controls
o 3.1 Control duplication
• 4 Aircraft instruments
• 5 Propulsion
o 5.1 Unpowered aircraft
o 5.2 Propeller aircraft
o 5.3 Jet aircraft
 5.3.1 Supersonic jet aircraft
o 5.4 Unmanned Aircraft
o 5.5 Rocket-powered aircraft
o 5.6 Ramjet aircraft
o 5.7 Scramjet aircraft
• 6 History
• 7 Designing and constructing an aircraft
o 7.1 Industrialized production
• 8 Safety
o 8.1 Comparisons
o 8.2 Causes
• 9 Environmental impact
• 10 See also
• 11 Notes
• 12 References
• 13 External links

Overview
Fixed-wing aircraft range from small training and recreational aircraft to wide-body aircraft and military
cargo aircraft. The word also embraces aircraft with folding or removable wings that are intended to fold
when on the ground. This is usually to ease storage or facilitate transport on, for example, a vehicle trailer
or the powered lift connecting the hangar deck of an aircraft carrier to its flight deck. It also embraces
aircraft with "variable-sweep wings", such as the General Dynamics F-111, Grumman F-14 Tomcat and
the Panavia Tornado, which can vary the sweep angle of their wings during flight. There are also rare
examples of aircraft which can vary the angle of incidence of their wings in flight, such as the F-8 Crusader,
which are also considered to be "fixed-wing".

A Cessna 177 propeller-driven general aviation aircraft

The two necessities for fixed-wing aircraft are air flow over the wings for lifting of the aircraft, and an
area for landing. The majority of aircraft, however, also need an airport with the infrastructure to receive
maintenance, restocking, refueling and for the loading and unloading of crew, cargo and passengers.
Some aircraft are capable of taking off and landing on ice, aircraft carriers, snow, or calm water.

The aircraft is the second fastest method of transport, after the rocket. Commercial jet aircraft can reach
up to 1000 km/h. Certified single-engined, piston-driven aircraft are capable of reaching up to 435 km/h,
while Experimental (modified WW II fighters) piston singles reach over 815 km/h at the Reno Air Races.
Supersonic aircraft (military, research and a few private aircraft) can reach speeds faster than sound. The
speed record for a plane powered by an air-breathing engine is held by the experimental NASA X-43,
which reached nearly ten times the speed of sound.

The biggest aircraft built is the Antonov An-225, while the fastest still in production is the Mikoyan MiG-
31. The biggest supersonic jet ever produced is the Tupolev Tu-160.

Structure

The P-38 Lightning, a twin-engine fixed-wing aircraft with a twin-boom configuration.
An F-16 Fighting Falcon, an American military fixed-wing aircraft

The Mexican unmanned aerial vehicle S4 Ehécatl at take-off

The structure of a fixed-wing aircraft consists of the following major parts:

• A long narrow often cylindrical form, called a fuselage, usually with tapered or rounded ends to
make its shape aerodynamically smooth. The fuselage carries the human flight crew if the aircraft
is piloted, the passengers if the aircraft is a passenger aircraft, other cargo or payload, and engines
and/or fuel if the aircraft is so equipped. The pilots operate the aircraft from a cockpit located at
the front or top of the fuselage and equipped with windows, controls, and instruments. Passengers
and cargo occupy the remaining available space in the fuselage. Some aircraft may have two
fuselages, or additional pods or booms.

• A wing (or wings in a multiplane) with an airfoil cross-section shape, used to generate
aerodynamic lifting force to support the aircraft in flight by deflecting air downward as the aircraft
moves forward. The wing halves are typically symmetrical about the plane of symmetry (for
symmetrical aircraft). The wing also stabilizes the aircraft about its roll axis and the ailerons
control rotation about that axis.

• At least one control surface (or surfaces) mounted vertically usually above the rear of the fuselage,
called a vertical stabilizer. The vertical stabilizer is used to stabilize the aircraft about its yaw axis
(the axis in which the aircraft turns from side to side) and to control its rotation along that axis.
Some aircraft have multiple vertical stabilizers.

• At least one horizontal surface at the front or back of the fuselage used to stabilize the aircraft
about its pitch axis (the axis around which the aircraft tilts upward or downward). The horizontal
stabilizer (also known as tailplane) is usually mounted near the rear of the fuselage, or at the top of
the vertical stabilizer, or sometimes a canard is mounted near the front of the fuselage for the same
purpose.

• On powered aircraft, one or more aircraft engines are propulsion units that provide thrust to push
the aircraft forward through the air. The engine is optional in the case of gliders that are not motor
gliders. The most common propulsion units are propellers, powered by reciprocating or turbine
engines, and jet engines, which provide thrust directly from the engine and usually also from a
large fan mounted within the engine. When the number of engines is even, they are distributed
symmetrically about the roll axis of the aircraft, which lies along the plane of symmetry (for
symmetrical aircraft); when the number is odd, the odd engine is usually mounted along the
centerline of the fuselage.

• Landing gear, a set of wheels, skids, or floats that support the aircraft while it is on the surface.

Some varieties of aircraft, such as flying wing aircraft, may lack a discernible fuselage structure and
horizontal or vertical stabilizers.
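The aerodynamic lifting force described above for the wing is conventionally quantified by the lift equation L = ½ρv²SC_L, where ρ is air density, v airspeed, S wing area and C_L the dimensionless lift coefficient. A minimal sketch; the numbers plugged in are illustrative and not taken from any specific aircraft:

```python
def lift_newtons(rho, v, wing_area, cl):
    """Standard lift equation: L = 0.5 * rho * v^2 * S * C_L (newtons)."""
    return 0.5 * rho * v**2 * wing_area * cl

# Illustrative values: sea-level air density 1.225 kg/m^3, 60 m/s airspeed,
# 16 m^2 wing area, lift coefficient 0.4.
print(round(lift_newtons(1.225, 60.0, 16.0, 0.4)))  # 14112
```

In level flight this lift must equal the aircraft's weight, so the equation also shows why slower flight (smaller v) requires a larger C_L, which is what flaps provide.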

Controls
Main article: Aircraft flight control systems

A number of controls allow pilots to direct aircraft in the air. The controls found in a typical fixed-wing
aircraft are as follows:

• A yoke or joystick, which controls rotation of the aircraft about the pitch and roll axes. A yoke
resembles a kind of steering wheel, and a control stick is just a simple rod with a handgrip. The
pilot can pitch the aircraft downward by pushing on the yoke or stick, and pitch the aircraft
upward by pulling on it. Rolling the aircraft is accomplished by turning the yoke in the direction
of the desired roll, or by tilting the control stick in that direction. Pitch changes are used to adjust
the altitude and speed of the aircraft; roll changes are used to make the aircraft turn. Control sticks
and yokes are usually positioned between the pilot's legs; however, a sidestick is a type of control
stick that is positioned on either side of the pilot (usually the left side for the pilot in the left seat,
and vice versa, if there are two pilot seats).

• Rudder pedals, which control rotation of the aircraft about the yaw axis. There are two pedals that
pivot so that when one is pressed forward the other moves backward, and vice versa. The pilot
presses on the right rudder pedal to make the aircraft yaw to the right, and on the left pedal to
make it yaw to the left. The rudder is used mainly to balance the aircraft in turns, or to compensate
for winds or other effects that tend to turn the aircraft about the yaw axis.

• A throttle, which adjusts the thrust produced by the aircraft's engines. The pilot uses the throttle to
increase or decrease the speed of the aircraft, and to adjust the aircraft's altitude (higher speeds
cause the aircraft to climb, lower speeds cause it to descend). In some aircraft the throttle is a
single lever that controls thrust; in others, adjusting the throttle means adjusting a number of
different engine controls simultaneously. Aircraft with multiple engines usually have individual
throttle controls for each engine.

• Brakes, used to slow and stop the aircraft on the ground, and sometimes for turns on the ground.

Other possible controls include:

• Flap levers, which are used to control the position of flaps on the wings.

• Spoiler levers, which are used to control the position of spoilers on the wings, and to arm their
automatic deployment in aircraft designed to deploy them upon landing.

• Trim controls, which usually take the form of knobs or wheels and are used to adjust pitch, roll, or
yaw trim.
• A tiller, a small wheel or lever used to steer the aircraft on the ground (in conjunction with or
instead of the rudder pedals).

• A parking brake, used to prevent the aircraft from rolling when it is parked on the ground.

The controls may allow full or partial automation of flight, such as an autopilot, a wing leveler, or a flight
management system. Pilots adjust these controls to select a specific attitude or mode of flight, and then
the associated automation maintains that attitude or mode until the pilot disables the automation or
changes the settings. In general, the larger and/or more complex the aircraft, the greater the amount of
automation available to pilots.

Control duplication

On an aircraft with a pilot and copilot, or instructor and trainee, the aircraft is made capable of control
without the crew changing seats. The most common arrangement is two complete sets of controls, one for
each of two pilots sitting side by side, but in some aircraft (military fighter aircraft, some taildraggers and
aerobatic aircraft) the dual sets of controls are arranged one in front of the other. A few of the less
important controls may not be present in both positions, and one position is usually intended for the pilot
in command (e.g., the left "captain's seat" in jet airliners). Some small aircraft use controls that can be
moved from one position to another, such as a single yoke that can be swung into position in front of
either the left-seat pilot or the right-seat pilot (i.e. Beechcraft Bonanza).

Aircraft that require more than one pilot usually have controls intended to suit each pilot position, but still
with sufficient duplication so that all pilots can fly the aircraft alone in an emergency. For example, in jet
airliners, the controls on the left (captain's) side include both the basic controls and those normally
manipulated by the pilot in command, such as the tiller, whereas those of the right (first officer's) side
include the basic controls again and those normally manipulated by the copilot, such as flap levers. The
unduplicated controls that are required for flight are positioned so that they can be reached by either pilot,
but they are often designed to be more convenient to the pilot who manipulates them under normal
conditions.

Aircraft instruments
Instruments provide information to the pilot. They may operate mechanically from the pitot-static system,
or they may be electronic, requiring 12VDC, 24VDC, or 400 Hz power systems.[3] An aircraft that uses
computerized CRT or LCD displays almost exclusively is said to have a glass cockpit.

Basic instruments include:

• An airspeed indicator, which indicates the speed at which the aircraft is moving through the
surrounding air.
• An altimeter, which indicates the altitude of the aircraft above the ground or above mean sea level.

• A heading indicator (sometimes referred to as a "directional gyro" or DG), which indicates the
magnetic compass heading that the aircraft's fuselage is pointing towards. The actual direction the
airplane is flying towards is affected by the wind conditions.

• An attitude indicator, sometimes called an artificial horizon, which indicates the exact orientation
of the aircraft about its pitch and roll axes.

Other instruments might include:
• A turn coordinator, which helps the pilot maintain the aircraft in a coordinated attitude while
turning.
• A rate-of-climb indicator, which shows the rate at which the aircraft is climbing or descending.
• A horizontal situation indicator, which shows the position and movement of the aircraft as seen from
above with respect to the ground, including course/heading and other information.
• Instruments showing the status of each engine in the aircraft (operating speed, thrust, temperature,
and other variables).
• Combined display systems such as primary flight displays or navigation displays.
• Information displays such as on-board weather radar displays.
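Of the basic instruments above, the altimeter is the most directly tied to the pitot-static system: a barometric altimeter infers altitude from static pressure. A common International Standard Atmosphere approximation, valid in the troposphere, is sketched below; the constants are the usual ISA sea-level values and the formula is illustrative, not any avionics vendor's implementation:

```python
def pressure_altitude_m(p_pa, p0_pa=101325.0):
    """ISA troposphere approximation: altitude (m) from static pressure (Pa).

    p0_pa is standard sea-level pressure; 44330 and 0.1903 are the usual
    ISA-derived constants.
    """
    return 44330.0 * (1.0 - (p_pa / p0_pa) ** 0.1903)

print(round(pressure_altitude_m(101325.0)))  # 0 at standard sea-level pressure
```

A real altimeter has a setting knob precisely because p0 varies with the weather; dialing in the local pressure shifts the reference so the instrument reads field elevation on the ground.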

Propulsion
Main article: Aircraft engine

Fixed-wing aircraft can be sub-divided according to the means of propulsion they use.

Unpowered aircraft

Main article: Unpowered aircraft

Aircraft that are primarily intended for unpowered flight include gliders (sometimes called sailplanes), hang
gliders and paragliders. These are mainly used for recreation. After launch, the energy for sustained
gliding flight is obtained through the skilful exploitation of rising air in the atmosphere. Gliders that are
used for the sport of gliding have high aerodynamic efficiency. The highest lift-to-drag ratio is 70:1,
though 50:1 is more common. Glider flights of thousands of kilometers at average speeds over 200 km/h
have been achieved. The glider is most commonly launched by a tow-plane or by a winch. Some gliders,
called motor gliders, are equipped with engines (often retractable) and some are capable of self-launching.
The most numerous unpowered aircraft are hang gliders and paragliders. These are foot-launched and are
generally slower, less massive, and less expensive than sailplanes. Hang gliders most often have flexible
wings which are given shape by a frame, though some have rigid wings. This is in contrast to paragliders
which have no frames in their wings. Military gliders have been used in war to deliver assault troops, and
specialized gliders have been used in atmospheric and aerodynamic research. Experimental aircraft and
winged spacecraft have also made unpowered landings.
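The lift-to-drag ratios quoted above translate directly into glide range: in still air, the horizontal distance covered is roughly the L/D ratio times the height lost. A one-line sketch:

```python
def glide_distance_km(altitude_m, lift_to_drag):
    """Still-air glide range: horizontal distance = (L/D) * height lost."""
    return lift_to_drag * altitude_m / 1000.0

# From 1,000 m, a 70:1 sailplane glides about 70 km; a 50:1 one about 50 km.
print(glide_distance_km(1000, 70))  # 70.0
print(glide_distance_km(1000, 50))  # 50.0
```

Rising air extends this range, which is why cross-country flights of thousands of kilometres are possible.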

Propeller aircraft

Aquila AT01

Smaller and older propeller aircraft make use of reciprocating internal combustion engines that turn a
propeller to create thrust. They are quieter than jet aircraft, but they fly at lower speeds, and have lower
load capacity compared to similar sized jet powered aircraft. However, they are significantly cheaper and
much more economical than jets, and are generally the best option for people who need to transport a few
passengers and/or small amounts of cargo. They are also the aircraft of choice for pilots who wish to own
an aircraft.

Turboprop aircraft are a halfway point between propeller and jet: they use a turbine engine similar to a jet
to turn propellers. These aircraft are popular with commuter and regional airlines, as they tend to be more
economical on shorter journeys.

Jet aircraft

Jet aircraft make use of turbines for the creation of thrust. These engines are much more powerful than a
reciprocating engine. As a consequence, they have greater weight capacity and fly faster than propeller
driven aircraft. One drawback, however, is that they are noisy; this makes jet aircraft a source of noise
pollution. However, turbofan jet engines are quieter, and they have seen widespread usage partly for that
reason.

The jet aircraft was first developed in Germany in the 1930s. The first jet was the Heinkel He 178, which was
tested at Germany's Marienehe Airfield in 1939. In 1943 the Messerschmitt Me 262, the first jet fighter
aircraft, went into service in the German Luftwaffe. In the early 1950s, only a few years after the first jet
was produced in large numbers, the de Havilland Comet became the world's first jet airliner. However,
the early Comets were beset by structural problems discovered after numerous pressurization and
depressurization cycles, leading to extensive redesigns.

Most wide-body aircraft can carry hundreds of passengers and several tons of cargo, and are able to travel
for distances up to 17,000 km. Aircraft in this category are the Boeing 747, Boeing 767, Boeing 777, the
upcoming Boeing 787, Airbus A300/A310, Airbus A330, Airbus A340, Airbus A380, Lockheed L-1011
TriStar, McDonnell Douglas DC-10, McDonnell Douglas MD-11, Ilyushin Il-86, and Ilyushin Il-96.

Jet aircraft possess high cruising speeds (700 to 900 km/h, or 400 to 550 mph) and high takeoff and
landing speeds (150 to 250 km/h). Due to the speed needed for takeoff and landing, jet aircraft use
flaps and leading-edge devices to control lift and speed, as well as thrust reversers to direct the
airflow forward, slowing the aircraft upon landing.

[edit] Supersonic jet aircraft

Supersonic aircraft, such as military fighters and bombers, Concorde, and others, make use of special
turbines (often utilizing afterburners) that generate the enormous power required for flight faster than the
speed of sound. Flight at supersonic speed creates more noise than flight at subsonic speeds, due to the
phenomenon of sonic booms. This limits supersonic flights to areas of low population density or open
ocean. When approaching an area of denser population, supersonic aircraft are obliged to fly at
subsonic speed.

Due to high costs, limited areas of use, and low demand, there are no longer any supersonic aircraft in
use by any major airline. The last Concorde flight was on 26 November 2003. It appears that supersonic
aircraft will remain in use almost exclusively by militaries around the world for the foreseeable future,
though research into new civilian designs continues.

[edit] Unmanned aircraft

Main article: Unmanned aerial vehicle

An aircraft is said to be 'unmanned' when there is no pilot aboard. Instead, the aircraft is flown by
remote control or by onboard electronic systems.

[edit] Rocket-powered aircraft

Bell X-1A in flight
Main article: Rocket-powered aircraft

Experimental rocket-powered aircraft were developed by the Germans as early as World War II (see Me
163 Komet), and about 29 were manufactured and deployed. The first fixed-wing aircraft to break the
sound barrier in level flight was a rocket plane, the Bell X-1. The later North American X-15 was another
important rocket plane that broke many speed and altitude records and laid much of the groundwork for
later aircraft and spacecraft design. Rocket aircraft are not in common use today, although rocket-
assisted takeoffs are used for some military aircraft. SpaceShipOne is the most famous current rocket
aircraft, serving as the testbed for a planned commercial sub-orbital passenger service; another rocket plane
is the XCOR EZ-Rocket; and there is, of course, the Space Shuttle.

[edit] Ramjet aircraft

USAF Lockheed SR-71 Blackbird trainer

A ramjet is a form of jet engine that contains no major moving parts and can be particularly useful in
applications requiring a small and simple engine for high-speed use, such as missiles. The D-21 Tagboard
was an unmanned Mach 3+ reconnaissance drone that was put into production in 1969 for spying, but due
to the development of better spy satellites, it was cancelled in 1971. The SR-71's Pratt & Whitney J58
engines ran 80% as ramjets at high speeds (Mach 3.2). The SR-71 was retired at the end of the Cold
War, then brought back during the 1990s; it was also used in the Gulf War. The last SR-71 flight was
in October 2001.

[edit] Scramjet aircraft
The X-43A, shortly after booster ignition

Scramjet aircraft are in the experimental stage. The Boeing X-43 is an experimental scramjet that holds the
world speed record for a jet-powered aircraft: Mach 9.7, nearly 12,000 km/h (≈ 7,000 mph), at an altitude of
about 36,000 meters (≈ 110,000 ft). The X-43A set that record on 16 November 2004.

[edit] History
Main articles: Aviation history and First flying machine

The dream of flight goes back to prehistory. Many stories from antiquity involve flight, such
as the Greek legend of Icarus and Daedalus, and the Vimana in ancient Indian epics. Around 400 BC,
Archytas, the Ancient Greek philosopher, mathematician, astronomer, statesman, and strategist, was
reputed to have designed and built the first artificial, self-propelled flying device, a bird-shaped model
propelled by a jet of what was probably steam, said to have actually flown some 200 meters.[4][5] This
machine, which its inventor called The Pigeon (Greek: Περιστέρα "Peristera"), may have been suspended
on a wire or pivot for its flight.[6][7] Among the first recorded attempts at aviation were those made
by Yuan Huangtou in the 6th century and by Abbas Ibn Firnas in the 9th century. Leonardo da Vinci
studied the wing design of birds and designed a man-powered aircraft in his Codex on the Flight of
Birds (1502). In the 1630s, Lagari Hasan Çelebi is said to have flown in a rocket powered by gunpowder. In
the 18th century, François Pilâtre de Rozier and François d'Arlandes flew in an aircraft lighter than air, a
balloon. The biggest challenge then became to create craft capable of controlled flight.

Le Bris and his glider, Albatros II, photographed by Nadar, 1868

Sir George Cayley, the founder of the science of aerodynamics, was building and flying models of fixed-
wing aircraft as early as 1803, and he built a successful passenger-carrying glider in 1853.[8] In 1856,
Frenchman Jean-Marie Le Bris made a towed flight, having his glider "L'Albatros artificiel"
pulled by a horse on a beach; it reportedly rose higher than its point of departure. On 28 August 1883, the American John J. Montgomery made a controlled
flight in a glider. Other aviators who made similar flights at that time were Otto Lilienthal, Percy
Pilcher, and Octave Chanute.

The first self-powered model aircraft was built by an Englishman, John Stringfellow of Chard
in Somerset, whose model made its first successful flight in 1848.

Clément Ader designed and constructed a self-powered aircraft. On October 9, 1890, Ader attempted to
fly the Éole, which succeeded in taking off and flying uncontrolled a distance of approximately 50 meters
before witnesses. In August 1892 the Avion II flew for a distance of 200 meters, and on October 14, 1897,
the Avion III flew a distance of more than 300 meters. Richard Pearse made a poorly documented,
uncontrolled flight on March 31, 1903 in Waitohi, New Zealand, and on August 28, 1903 in Hanover, the
German Karl Jatho made his first flight.[citation needed]

The Wright Brothers made their first successful test flights on December 17, 1903. This flight is
recognized by the Fédération Aéronautique Internationale (FAI), the standard setting and record-keeping
body for aeronautics and astronautics, as "the first sustained and controlled heavier-than-air powered
flight".[9] By 1905, the Wright Flyer III was capable of fully controllable, stable flight for substantial
periods. Strictly speaking, the Flyer's wings were not completely fixed, as it depended for stability on a
flexing mechanism named wing warping. This was later superseded by the development of ailerons,
devices which performed a similar function but were attached to an otherwise rigid wing.

Alberto Santos-Dumont, a Brazilian living in France, built the first practical dirigible balloons at the end
of the nineteenth century. In 1906 he flew the first fixed-wing aircraft in Europe, the 14-bis, which was of
his and Gabriel Voisin's design. It was the first aircraft to take off, fly and land without the use of
catapults, high winds, or other external assistance.[10] A later design of his, the Demoiselle, introduced
ailerons and brought all-around pilot control during flight.[11]

World War I served as a testbed for the use of the aircraft as a weapon. Initially seen by the generals as a
"toy", aircraft demonstrated their potential as mobile observation platforms, then proved themselves to be
machines of war capable of inflicting casualties on the enemy. "Fighter aces" appeared and were described as
"knights of the air"; the greatest was the German Manfred von Richthofen, the Red Baron. On the side of
the Allies, the ace with the highest number of downed aircraft was René Fonck of France.

Following the war, aircraft technology continued to develop. Alcock and Brown crossed the Atlantic non-
stop for the first time in 1919, a feat first performed solo by Charles Lindbergh in 1927. The first
commercial flights took place between the United States and Canada in 1919. The turbine-powered jet
engine was under development in the 1930s; military jet aircraft began operating in the 1940s.

Aircraft played a primary role in the Second World War, having a presence in all the major battles of the
war, such as Pearl Harbor, the battles of the Pacific, and the Battle of Britain. They were an essential component of
the military strategies of the period, such as the German Blitzkrieg and the American and Japanese aircraft
carrier campaigns of the Pacific.

In October 1947, Chuck Yeager was the first person to exceed the speed of sound, flying the Bell X-1.

Aircraft in a civil military role continued to feed and supply Berlin in 1948, when rail and road access
to the city, completely surrounded by Soviet-occupied territory, was blocked by order of the Soviet
Union.

The first commercial jet, the de Havilland Comet, was introduced in 1952. A few Boeing 707s, the first
widely successful commercial jet, are still in service after nearly 50 years. The Boeing 727 was another
widely used passenger aircraft, and the Boeing 747 was the world's biggest commercial aircraft between
1970 and 2005, when it was surpassed by the Airbus A380.

[edit] Designing and constructing an aircraft
Small aircraft can be designed and constructed by amateurs as homebuilts, such as Chris Neil's Woody
Helicopter. Other amateur aviators build their aircraft from pre-manufactured kits,
assembling the parts into a complete aircraft.

Most aircraft are constructed by companies with the objective of producing them in quantity for
customers. The design and planning process, including safety tests, can last up to four years for small
turboprops, and up to 12 years for aircraft with the capacity of the A380.
During this process, the objectives and design specifications of the aircraft are established. First the
construction company uses drawings, equations, simulations, wind tunnel tests, and experience to
predict the behavior of the aircraft. Computers are used to draw, plan, and run initial
simulations of the aircraft. Small models and mockups of all or certain parts of the aircraft are then tested
in wind tunnels to verify its aerodynamics.

When the design has passed through these processes, the company constructs a limited number of the
aircraft for testing on the ground. Representatives from an aviation governing agency often make a first
flight. Flight tests continue until the aircraft has fulfilled all the requirements. Then the country's
governing public agency of aviation authorizes the company to begin production of the aircraft.

In the United States, this agency is the Federal Aviation Administration (FAA), and in the European
Union, Joint Aviation Authorities (JAA). In Canada, the public agency in charge and authorizing the mass
production of aircraft is Transport Canada.

For international sales of aircraft, a license from the aviation authority of the country where the
aircraft will be used is also necessary. For example, aircraft from Airbus need to
be certified by the FAA to be flown in the United States and, vice versa, aircraft from Boeing need to be
approved by the JAA to be flown in the European Union.

Quieter aircraft are increasingly needed due to the growth in air traffic, particularly over
urban areas, where noise pollution is a major concern. MIT and Cambridge University have been designing
delta-wing aircraft that are 25 times quieter (63 dB) than current craft and can be used for military
and commercial purposes. The project is called the Silent Aircraft Initiative, but production models will
not be available until around 2030.[3]

[edit] Industrialized production

Few companies produce aircraft on a large scale. However, producing an aircraft for
one company is a process that actually involves dozens, or even hundreds, of other companies and plants
that produce the parts that go into the aircraft. For example, one company can be responsible for the
production of the landing gear, while another is responsible for the radar. The production of such
parts is not limited to one city or country; in the case of large aircraft manufacturers, such
parts can come from all over the world.

The parts are sent to the main plant of the aircraft company, where the production line is located. In the
case of large aircraft, production lines dedicated to the assembly of certain parts of the aircraft can exist,
especially the wings and the fuselage.

When complete, an aircraft goes through a set of rigorous inspections to search for imperfections and
defects. After being approved by the inspectors, the aircraft is put through a flight test by a pilot in order
to assure that its controls are working properly. With this final test, the aircraft is ready to
receive the "final touchups" (internal configuration, painting, etc.), and is then ready for the customer.

[edit] Safety
Main article: Air safety

[edit] Comparisons

There are three main statistics which may be used to compare the safety of various forms of travel:[12]

Deaths per billion journeys
• Bus: 4.3
• Rail: 20
• Van: 20
• Car: 40
• Foot: 40
• Water: 90
• Air: 117
• Bicycle: 170
• Motorcycle: 1640

Deaths per billion hours
• Bus: 11.1
• Rail: 30
• Air: 30.8
• Water: 50
• Van: 60
• Car: 130
• Foot: 220
• Bicycle: 550
• Motorcycle: 4840

Deaths per billion kilometres
• Air: 0.05
• Bus: 0.4
• Rail: 0.6
• Van: 1.2
• Water: 2.6
• Car: 3.1
• Bicycle: 44.6
• Foot: 54.2
• Motorcycle: 108.9

The air industry's insurers base their calculations on the deaths-per-journey statistic, while the
industry itself generally uses deaths per kilometre in press releases.[13]
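One way to see why the choice of statistic matters is to rank the same figures under each metric. The short Python sketch below (using the numbers from the lists above; the dictionary layout is my own) shows that air travel ranks safest per kilometre but near the bottom per journey:

```python
# Safety figures from the section above, keyed by mode of travel.
# Each tuple holds deaths per billion (journeys, hours, kilometres).
stats = {
    "Bus":        (4.3,  11.1,  0.4),
    "Rail":       (20,   30,    0.6),
    "Van":        (20,   60,    1.2),
    "Car":        (40,   130,   3.1),
    "Foot":       (40,   220,   54.2),
    "Water":      (90,   50,    2.6),
    "Air":        (117,  30.8,  0.05),
    "Bicycle":    (170,  550,   44.6),
    "Motorcycle": (1640, 4840,  108.9),
}

def rank(metric_index):
    """Order travel modes from safest to most dangerous for one metric."""
    return sorted(stats, key=lambda mode: stats[mode][metric_index])

print(rank(0)[0])   # safest per journey: Bus
print(rank(2)[0])   # safest per kilometre: Air
print(rank(0)[-1])  # most dangerous on every metric: Motorcycle
```

Sorting by index 0 (journeys) puts buses first with air near the bottom; sorting by index 2 (kilometres) puts air first, which is why the two camps quote different numbers.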

[edit] Causes
The majority of aircraft accidents are the result of human error on the part of the pilot(s) or controller(s).
After human error, mechanical failure is the biggest cause of air accidents, which can also
involve a human component, e.g., negligence of the airline in carrying out proper maintenance. Adverse
weather is the third largest cause of accidents: icing, downbursts, and low visibility are often major
contributors to weather-related crashes. Bird strikes have been ranked as a major cause of large rotor bursts in
commercial turboprop engines, spurring extra safety measures to keep birds away. Technological
advances such as ice detectors also help pilots ensure the safety of their aircraft.

[edit] Environmental impact
Main article: Aviation and the environment

[edit] See also

• Aircraft
• Aircraft flight mechanics
• Aviation
• Aviation history
• List of altitude records reached by different aircraft types
• Rotorcraft
• Decalage

[edit] Notes
[edit] References
• In 1903 when the Wright brothers used the word "aeroplane" it meant wing, not the whole aircraft.
See text of their patent. U.S. Patent 821,393 — Wright brothers' patent for "Flying Machine"
• Blatner, David. The Flying Book : Everything You've Ever Wondered About Flying On Airplanes.
ISBN 0-8027-7691-4

[edit] External links
Wikimedia Commons has media related to: Aircraft

• Airliners.net
• Aerospaceweb.org
• How Airplanes Work - Howstuffworks.com

Retrieved from "http://en.wikipedia.org/wiki/Fixed-wing_aircraft"
Categories: Aeronautics | Aircraft configurations
Computer
From Wikipedia, the free encyclopedia

Jump to: navigation, search

This article is about the machine. For other uses, see Computer (disambiguation).
"Computer technology" redirects here. For the company, see Computer Technology Limited.

The NASA Columbia Supercomputer

A computer is a machine that manipulates data according to a list of instructions.

The first devices that resemble modern computers date to the mid-20th century (1940–1945), although the
computer concept and various machines similar to computers existed earlier. Early electronic computers
were the size of a large room, consuming as much power as several hundred modern personal computers
(PCs).[1] Modern computers are based on tiny integrated circuits and are millions to billions of times more
capable while occupying a fraction of the space.[2] Today, simple computers may be made small enough to
fit into a wristwatch and be powered from a watch battery. Personal computers, in various forms, are
icons of the Information Age and are what most people think of as "a computer"; however, the most
common form of computer in use today is the embedded computer. Embedded computers are small,
simple devices that are used to control other devices — for example, they may be found in machines
ranging from fighter aircraft to industrial robots, digital cameras, and children's toys.

The ability to store and execute lists of instructions called programs makes computers extremely versatile
and distinguishes them from calculators. The Church–Turing thesis is a mathematical statement of this
versatility: any computer with a certain minimum capability is, in principle, capable of performing the
same tasks that any other computer can perform. Therefore, computers with capability and complexity
ranging from that of a personal digital assistant to a supercomputer are all able to perform the same
computational tasks given enough time and storage capacity.

Contents
[hide]

• 1 History of computing
• 2 Stored program architecture
o 2.1 Programs
o 2.2 Example
• 3 How computers work
o 3.1 Control unit
o 3.2 Arithmetic/logic unit (ALU)
o 3.3 Memory
o 3.4 Input/output (I/O)
o 3.5 Multitasking
o 3.6 Multiprocessing
o 3.7 Networking and the Internet
• 4 Further topics
o 4.1 Hardware
o 4.2 Software
o 4.3 Programming languages
o 4.4 Professions and organizations
• 5 See also
• 6 External links
• 7 Notes

• 8 References

History of computing
Main article: History of computer hardware

The Jacquard loom was one of the first programmable devices.
It is difficult to identify any one device as the earliest computer, partly because the term "computer" has
been subject to varying interpretations over time. Originally, the term "computer" referred to a person
who performed numerical calculations (a human computer), often with the aid of a mechanical calculating
device.

The history of the modern computer begins with two separate technologies - that of automated calculation
and that of programmability.

Examples of early mechanical calculating devices included the abacus, the slide rule and arguably the
astrolabe and the Antikythera mechanism (which dates from about 150-100 BC). Hero of Alexandria (c.
10–70 AD) built a mechanical theater which performed a play lasting 10 minutes and was operated by a
complex system of ropes and drums that might be considered to be a means of deciding which parts of the
mechanism performed which actions and when.[3] This is the essence of programmability.

The "castle clock", an astronomical clock invented by Al-Jazari in 1206, is considered to be the earliest
programmable analog computer.[4] It displayed the zodiac, the solar and lunar orbits, a crescent moon-
shaped pointer travelling across a gateway causing automatic doors to open every hour,[5][6] and five
robotic musicians who play music when struck by levers operated by a camshaft attached to a water
wheel. The length of day and night could be re-programmed every day in order to account for the
changing lengths of day and night throughout the year.[4]

The end of the Middle Ages saw a re-invigoration of European mathematics and engineering, and
Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by
European engineers. However, none of those devices fit the modern definition of a computer because they
could not be programmed.

In 1801, Joseph Marie Jacquard made an improvement to the textile loom that used a series of punched
paper cards as a template to allow his loom to weave intricate patterns automatically. The resulting
Jacquard loom was an important step in the development of computers because the use of punched cards
to define woven patterns can be viewed as an early, albeit limited, form of programmability.

It was the fusion of automatic calculation with programmability that produced the first recognizable
computers. In 1837, Charles Babbage was the first to conceptualize and design a fully programmable
mechanical computer that he called "The Analytical Engine".[7] Due to limited finances, and an inability to
resist tinkering with the design, Babbage never actually built his Analytical Engine.

Large-scale automated data processing of punched cards was performed for the U.S. Census in 1890 by
tabulating machines designed by Herman Hollerith and manufactured by the Computing Tabulating
Recording Corporation, which later became IBM. By the end of the 19th century a number of
technologies that would later prove useful in the realization of practical computers had begun to appear:
the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.

During the first half of the 20th century, many scientific computing needs were met by increasingly
sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a
basis for computation. However, these were not programmable and generally lacked the versatility and
accuracy of modern digital computers.

Defining characteristics of some early digital computers of the 1940s (in the history of computing
hardware):

Name | First operational | Numeral system | Computing mechanism | Programming | Turing complete
Zuse Z3 (Germany) | May 1941 | Binary | Electro-mechanical | Program-controlled by punched film stock | Yes (1998)
Atanasoff–Berry Computer (US) | mid-1941 | Binary | Electronic | Not programmable (single purpose) | No
Colossus (UK) | January 1944 | Binary | Electronic | Program-controlled by patch cables and switches | No
Harvard Mark I – IBM ASCC (US) | 1944 | Decimal | Electro-mechanical | Program-controlled by 24-channel punched paper tape (but no conditional branch) | No
ENIAC (US) | November 1945 | Decimal | Electronic | Program-controlled by patch cables and switches | Yes
Manchester Small-Scale Experimental Machine (UK) | June 1948 | Binary | Electronic | Stored-program in Williams cathode ray tube memory | Yes
Modified ENIAC (US) | September 1948 | Decimal | Electronic | Program-controlled by patch cables and switches plus a primitive read-only stored programming mechanism using the Function Tables as program ROM | Yes
EDSAC (UK) | May 1949 | Binary | Electronic | Stored-program in mercury delay line memory | Yes
Manchester Mark 1 (UK) | October 1949 | Binary | Electronic | Stored-program in Williams cathode ray tube memory and magnetic drum memory | Yes
CSIRAC (Australia) | November 1949 | Binary | Electronic | Stored-program in mercury delay line memory | Yes
A succession of steadily more powerful and flexible computing devices was constructed in the 1930s and
1940s, gradually adding the key features that are seen in modern computers. The use of digital electronics
(largely invented by Claude Shannon in 1937) and more flexible programmability were vitally important
steps, but defining one point along this road as "the first digital electronic computer" is difficult (Shannon
1940). Notable achievements include:

EDSAC was one of the first computers to implement the stored program (von Neumann) architecture.

• Konrad Zuse's electromechanical "Z machines". The Z3 (1941) was the first working machine
featuring binary arithmetic, including floating point arithmetic, and a measure of programmability.
In 1998 the Z3 was proved to be Turing complete, making it, in retrospect, the world's first
operational programmable computer.
• The non-programmable Atanasoff–Berry Computer (1941) which used vacuum tube based
computation, binary numbers, and regenerative capacitor memory.
• The secret British Colossus computers (1943),[8] which had limited programmability but
demonstrated that a device using thousands of tubes could be reasonably reliable and
electronically reprogrammable. It was used for breaking German wartime codes.
• The Harvard Mark I (1944), a large-scale electromechanical computer with limited
programmability.
• The U.S. Army's Ballistics Research Laboratory ENIAC (1946), which used decimal arithmetic
and is sometimes called the first general purpose electronic computer (since Konrad Zuse's Z3 of
1941 used electromagnets instead of electronics). Initially, however, ENIAC had an inflexible
architecture which essentially required rewiring to change its programming.

Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and elegant design,
which came to be known as the "stored program architecture" or von Neumann architecture. This design
was first formally described by John von Neumann in the paper First Draft of a Report on the EDVAC,
distributed in 1945. A number of projects to develop computers based on the stored-program architecture
commenced around this time, the first of these being completed in Great Britain. The first to be
demonstrated working was the Manchester Small-Scale Experimental Machine (SSEM or "Baby"), while
the EDSAC, completed a year after SSEM, was the first practical implementation of the stored program
design. Shortly thereafter, the machine originally described by von Neumann's paper—EDVAC—was
completed but did not see full-time use for an additional two years.

Nearly all modern computers implement some form of the stored-program architecture, making it the
single trait by which the word "computer" is now defined. While the technologies used in computers have
changed dramatically since the first electronic, general-purpose computers of the 1940s, most still use the
von Neumann architecture.
Microprocessors are miniaturized devices that often implement stored program CPUs.

Computers that used vacuum tubes as their electronic elements were in use throughout the 1950s. Vacuum
tube electronics were largely replaced in the 1960s by transistor-based electronics, which are smaller,
faster, cheaper to produce, require less power, and are more reliable. In the 1970s, integrated circuit
technology and the subsequent creation of microprocessors, such as the Intel 4004, further decreased size
and cost and further increased speed and reliability of computers. By the 1980s, computers became
sufficiently small and cheap to replace simple mechanical controls in domestic appliances such as
washing machines. The 1980s also witnessed home computers and the now ubiquitous personal computer.
With the evolution of the Internet, personal computers are becoming as common as the television and the
telephone in the household.

Stored program architecture
Main articles: Computer program and Computer programming

The defining feature of modern computers which distinguishes them from all other machines is that they
can be programmed. That is to say that a list of instructions (the program) can be given to the computer
and it will store them and carry them out at some time in the future.

In most cases, computer instructions are simple: add one number to another, move some data from one
location to another, send a message to some external device, etc. These instructions are read from the
computer's memory and are generally carried out (executed) in the order they were given. However, there
are usually specialized instructions to tell the computer to jump ahead or backwards to some other place
in the program and to carry on executing from there. These are called "jump" instructions (or branches).
Furthermore, jump instructions may be made to happen conditionally so that different sequences of
instructions may be used depending on the result of some previous calculation or some external event.
Many computers directly support subroutines by providing a type of jump that "remembers" the location
it jumped from and another instruction to return to the instruction following that jump instruction.

Program execution might be likened to reading a book. While a person will normally read each word and
line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of
interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the
program over and over again until some internal condition is met. This is called the flow of control within
the program and it is what allows the computer to perform tasks repeatedly without human intervention.

Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding
two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would
take thousands of button presses and a lot of time—with a near certainty of making a mistake. On the
other hand, a computer may be programmed to do this with just a few simple instructions. For example:

mov #0,sum ; set sum to 0
mov #1,num ; set num to 1
loop: add num,sum ; add num to sum
add #1,num ; add 1 to num
cmp num,#1000 ; compare num to 1000
ble loop ; if num <= 1000, go back to 'loop'
halt ; end of program. stop running

Once told to run this program, the computer will perform the repetitive addition task without further
human intervention. It will almost never make a mistake and a modern PC can complete the task in about
a millionth of a second.[9]

However, computers cannot "think" for themselves in the sense that they only solve problems in exactly
the way they are programmed to. An intelligent human faced with the above addition task might soon
realize that instead of actually adding up all the numbers one can simply use the equation

1 + 2 + 3 + ... + 1,000 = (1,000 × 1,001) / 2

and arrive at the correct answer (500,500) with little work.[10] In other words, a computer programmed to
add up the numbers one by one as in the example above would do exactly that without regard to
efficiency or alternative solutions.
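To make the contrast concrete, here is a minimal Python version of both approaches: the brute-force loop mirrors the assembly-style program above, while the closed-form shortcut is the one a person might spot:

```python
# Brute-force approach, mirroring the assembly program above:
# repeatedly add num to a running total until num exceeds 1000.
total = 0
num = 1
while num <= 1000:
    total += num
    num += 1

# Closed-form shortcut a human might use: n * (n + 1) / 2.
shortcut = 1000 * 1001 // 2

print(total, shortcut)  # both are 500500
```

The computer happily performs the thousand additions; it takes a programmer to replace them with the one-line formula.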

Programs

A 1970s punched card containing one line from a FORTRAN program. The card reads: "Z(1) = Y +
W(1)" and is labelled "PROJ039" for identification purposes.

In practical terms, a computer program may run from just a few instructions to many millions of
instructions, as in a program for a word processor or a web browser. A typical modern computer can
execute billions of instructions per second (gigahertz or GHz) and rarely make a mistake over many years
of operation. Large computer programs comprising several million instructions may take teams of
programmers years to write; thus it is highly unlikely that the entire program is free of errors.

Errors in computer programs are called "bugs". Bugs may be benign and not affect the usefulness of the
program, or have only subtle effects. But in some cases they may cause the program to "hang" - become
unresponsive to input such as mouse clicks or keystrokes, or to completely fail or "crash". Otherwise
benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an
"exploit" - code designed to take advantage of a bug and disrupt a program's proper execution. Bugs are
usually not the fault of the computer. Since computers merely execute the instructions they are given,
bugs are nearly always the result of programmer error or an oversight made in the program's design.[11]

In most computers, individual instructions are stored as machine code with each instruction being given a
unique number (its operation code or opcode for short). The command to add two numbers together
would have one opcode, the command to multiply them would have a different opcode and so on. The
simplest computers are able to perform any of a handful of different instructions; the more complex
computers have several hundred to choose from—each with a unique numerical code. Since the
computer's memory is able to store numbers, it can also store the instruction codes. This leads to the
important fact that entire programs (which are just lists of instructions) can be represented as lists of
numbers and can themselves be manipulated inside the computer just as if they were numeric data. The
fundamental concept of storing programs in the computer's memory alongside the data they operate on is
the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store
some or all of its program in memory that is kept separate from the data it operates on. This is called the
Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some
traits of the Harvard architecture in their designs, such as in CPU caches.
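The fact that a program is just a list of numbers can be sketched with a toy stored-program machine; the opcodes below are invented for illustration and do not belong to any real instruction set:

```python
# A toy stored-program machine: memory holds both the program and its data.
# Opcodes (invented for this sketch): 1 = load the value at an address into
# the accumulator, 2 = add the value at an address to it, 0 = halt.
memory = [1, 6, 2, 7, 0, 0, 40, 2]  # instructions in cells 0-5, data in cells 6-7

acc = 0   # accumulator
pc = 0    # program counter
while True:
    opcode = memory[pc]
    if opcode == 0:              # halt
        break
    operand = memory[pc + 1]
    if opcode == 1:              # load
        acc = memory[operand]
    elif opcode == 2:            # add
        acc += memory[operand]
    pc += 2                      # each instruction occupies two cells

print(acc)  # 42
```

Because the program lives in the same `memory` list as its data, it could itself be read or overwritten by instructions, which is exactly the von Neumann property described above.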

While it is possible to write computer programs as long lists of numbers (machine language) and this
technique was used with many early computers,[12] it is extremely tedious to do so in practice, especially
for complicated programs. Instead, each basic instruction can be given a short name that is indicative of
its function and easy to remember—a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics
are collectively known as a computer's assembly language. Converting programs written in assembly
language into something the computer can actually understand (machine language) is usually done by a
computer program called an assembler. Machine languages and the assembly languages that represent
them (collectively termed low-level programming languages) tend to be unique to a particular type of
computer. For instance, an ARM architecture computer (such as may be found in a PDA or a hand-held
videogame) cannot understand the machine language of an Intel Pentium or an AMD Athlon 64
computer that might be in a PC.[13]
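The translation an assembler performs can be sketched as little more than a table lookup from mnemonics to numeric opcodes; the mnemonics and codes here are invented for illustration:

```python
# A toy assembler: translate mnemonic source lines into numeric machine code.
OPCODES = {"LOAD": 1, "ADD": 2, "HALT": 0}  # invented codes, for illustration only

def assemble(source):
    """Turn lines like 'ADD 7' into a flat list of numbers (machine code)."""
    machine_code = []
    for line in source.strip().splitlines():
        parts = line.split()
        mnemonic, operands = parts[0], parts[1:]
        machine_code.append(OPCODES[mnemonic])
        machine_code.extend(int(x) for x in operands)
    return machine_code

program = """
LOAD 6
ADD 7
HALT
"""
print(assemble(program))  # [1, 6, 2, 7, 0]
```

A real assembler also resolves symbolic labels and addresses, but the essential job is the same mechanical translation.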

Though considerably easier than in machine language, writing long programs in assembly language is
often difficult and error prone. Therefore, most complicated programs are written in more abstract high-
level programming languages that are able to express the needs of the computer programmer more
conveniently (and thereby help reduce programmer error). High level languages are usually "compiled"
into machine language (or sometimes into assembly language and then into machine language) using
another computer program called a compiler.[14] Since high level languages are more abstract than
assembly language, it is possible to use different compilers to translate the same high level language
program into the machine language of many different types of computer. This is part of the means by
which software like video games may be made available for different computer architectures such as
personal computers and various video game consoles.

The task of developing large software systems is an immense intellectual effort. Producing software with
an acceptably high reliability on a predictable schedule and budget has proved historically to be a great
challenge; the academic and professional discipline of software engineering concentrates specifically on
this problem.

Example

A traffic light showing red.

Suppose a computer is being employed to drive a traffic light. A simple stored program might say:

1. Turn off all of the lights
2. Turn on the red light
3. Wait for sixty seconds
4. Turn off the red light
5. Turn on the green light
6. Wait for sixty seconds
7. Turn off the green light
8. Turn on the yellow light
9. Wait for two seconds
10. Turn off the yellow light
11. Jump to instruction number (2)

With this set of instructions, the computer would cycle the light continually through red, green, yellow
and back to red again until told to stop running the program.

However, suppose there is a simple on/off switch connected to the computer that is intended to be used to
make the light flash red while some maintenance operation is being performed. The program might then
instruct the computer to:

1. Turn off all of the lights
2. Turn on the red light
3. Wait for sixty seconds
4. Turn off the red light
5. Turn on the green light
6. Wait for sixty seconds
7. Turn off the green light
8. Turn on the yellow light
9. Wait for two seconds
10. Turn off the yellow light
11. If the maintenance switch is NOT turned on then jump to instruction number 2
12. Turn on the red light
13. Wait for one second
14. Turn off the red light
15. Wait for one second
16. Jump to instruction number 11

In this manner, the computer is either running the instructions from number (2) to (11) over and over, or
it is running the instructions from (11) to (16) over and over, depending on the position of the switch.
[15]
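The second instruction list can be sketched in Python; `maintenance_switch_on` is a hypothetical stand-in for reading the switch, and the `sleep` and `log` parameters stand in for the timer and the lamp hardware:

```python
import time

def maintenance_switch_on():
    """Hypothetical: read the state of the on/off maintenance switch."""
    return False

def run_traffic_light(cycles, sleep=time.sleep, log=print):
    """One pass of the loop = instructions 1-11; flashing = instructions 12-16."""
    for _ in range(cycles):
        # Normal cycle: red, green, yellow (instructions 1-10).
        for colour, seconds in [("red", 60), ("green", 60), ("yellow", 2)]:
            log(f"{colour} on")
            sleep(seconds)
            log(f"{colour} off")
        # Instruction 11's test: flash red while the switch is on.
        while maintenance_switch_on():
            log("red on")
            sleep(1)
            log("red off")
            sleep(1)

# run_traffic_light(1) would step the lamps through one red/green/yellow
# cycle, taking roughly two minutes of real time.
```

Note that this sketch shares the bug described in note [15]: the switch is only tested once per cycle, after the yellow light, rather than during each wait.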

How computers work
Main articles: Central processing unit and Microprocessor

A general purpose computer has four main sections: the arithmetic and logic unit (ALU), the control unit,
the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by
busses, often made of groups of wires.

The control unit, ALU, registers, and basic I/O (and often other hardware closely linked with these) are
collectively known as a central processing unit (CPU). Early CPUs were composed of many separate
components but since the mid-1970s CPUs have typically been constructed on a single integrated circuit
called a microprocessor.
Control unit

Main articles: CPU design and Control unit

The control unit (often called a control system or central controller) directs the various components of a
computer. It reads and interprets (decodes) instructions in the program one by one. The control system
decodes each instruction and turns it into a series of control signals that operate the other parts of the
computer.[16] Control systems in advanced computers may change the order of some instructions so as to
improve performance.

A key component common to all CPUs is the program counter, a special memory cell (a register) that
keeps track of which location in memory the next instruction is to be read from.[17]

Diagram showing how a particular MIPS architecture instruction would be decoded by the control system.

The control system's function is as follows—note that this is a simplified description, and some of these
steps may be performed concurrently or in a different order depending on the type of CPU:

1. Read the code for the next instruction from the cell indicated by the program counter.
2. Decode the numerical code for the instruction into a set of commands or signals for each of the
other systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or perhaps from an input
device). The location of this required data is typically stored within the instruction code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to
perform the requested operation.
7. Write the result from the ALU back to a memory location or to a register or perhaps an output
device.
8. Jump back to step (1).

Since the program counter is (conceptually) just another set of memory cells, it can be changed by
calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be
read from a place 100 locations further down the program. Instructions that modify the program counter
are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often
conditional instruction execution (both examples of control flow).
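The eight steps above can be sketched as a fetch-decode-execute loop; the three-instruction machine below is invented for illustration, with each instruction stored as an (opcode, operand) pair:

```python
# A sketch of the control unit's fetch-decode-execute cycle for a made-up
# three-instruction machine.
LOAD, ADD, JUMP_IF_NONZERO = 0, 1, 2   # invented opcodes

def run(program, data):
    acc = 0
    pc = 0                               # program counter
    while pc < len(program):
        opcode, operand = program[pc]    # steps 1-2: fetch and decode
        pc += 1                          # step 3: increment the program counter
        if opcode == LOAD:               # steps 4-7: read data, compute, write back
            acc = data[operand]
        elif opcode == ADD:
            acc += data[operand]
        elif opcode == JUMP_IF_NONZERO:  # a "jump": rewrite the program counter
            if acc != 0:
                pc = operand
        # step 8: loop back and fetch the next instruction
    return acc

print(run([(LOAD, 0), (ADD, 1)], [2, 3]))  # 5
```

The `JUMP_IF_NONZERO` case shows why modifying the program counter gives loops and conditional execution: it is just an ordinary write to one more register.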

The sequence of operations that the control unit goes through to process an instruction is itself like a
short computer program; indeed, in some more complex CPU designs, there is another, smaller computer
called a microsequencer that runs a microcode program to cause all of these events to happen.

Arithmetic/logic unit (ALU)

Main article: Arithmetic logic unit

The ALU is capable of performing two classes of operations: arithmetic and logic.
The set of arithmetic operations that a particular ALU supports may be limited to adding and subtracting
or might include multiplying and dividing, trigonometry functions (sine, cosine, etc.) and square roots.
Some can only operate on whole numbers (integers) whilst others use floating point to represent real
numbers—albeit with limited precision. However, any computer that is capable of performing just the
simplest operations can be programmed to break down the more complex operations into simple steps that
it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—
although it will take more time to do so if its ALU does not directly support the operation. An ALU may
also compare numbers and return boolean truth values (true or false) depending on whether one is equal
to, greater than or less than the other ("is 64 greater than 65?").

Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful both for creating
complicated conditional statements and processing boolean logic.

Superscalar computers contain multiple ALUs so that they can process several instructions at the same
time. Graphics processors and computers with SIMD and MIMD features often provide ALUs that can
perform arithmetic on vectors and matrices.
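Both classes of ALU operation, together with the comparisons that return Boolean truth values, can be sketched as a lookup over Python's built-in operators (a model of the interface, not of any real circuit):

```python
def alu(op, a, b):
    """A toy ALU: arithmetic, comparison and bitwise logic on two inputs."""
    ops = {
        "ADD": a + b, "SUB": a - b,               # arithmetic
        "GT": a > b, "EQ": a == b,                # comparisons -> Boolean truth values
        "AND": a & b, "OR": a | b, "XOR": a ^ b,  # bitwise Boolean logic
    }
    return ops[op]

print(alu("GT", 64, 65))            # False: "is 64 greater than 65?"
print(alu("XOR", 0b1100, 0b1010))   # 6, i.e. 0b0110
```

In hardware each of these would be a separate circuit selected by the control signals, but the input/output behaviour is as shown.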

Memory

Main article: Computer storage

Magnetic core memory was popular main memory for computers through the 1960s until it was
completely replaced by semiconductor memory.

A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell
has a numbered "address" and can store a single number. The computer can be instructed to "put the
number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is
in cell 2468 and put the answer into cell 1595". The information stored in memory may represent
practically anything. Letters, numbers, even computer instructions can be placed into memory with equal
ease. Since the CPU does not differentiate between different types of information, it is up to the software
to give significance to what the memory sees as nothing but a series of numbers.

In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight
bits (called a byte). Each byte can represent 256 different numbers, either from 0 to 255 or from -128 to
+127. To store larger numbers, several consecutive bytes may be used (typically two, four or eight).
When negative numbers are required, they are usually stored in two's complement notation. Other
arrangements are possible, but are usually not seen outside of specialized applications or historical
contexts. A computer can store any kind of information in memory as long as it can be somehow
represented in numerical form. Modern computers have billions or even trillions of bytes of memory.
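Python's built-in byte conversions illustrate both readings of a byte and the use of several consecutive bytes for larger numbers:

```python
# One byte holds 256 bit patterns; "signed" selects the -128..+127 reading,
# which uses two's complement notation.
assert int.from_bytes(b"\xff", "big", signed=False) == 255
assert int.from_bytes(b"\xff", "big", signed=True) == -1    # two's complement

# Larger numbers occupy several consecutive bytes (here four).
assert (100_000).to_bytes(4, "big") == b"\x00\x01\x86\xa0"
print("all byte examples hold")
```

The same bit pattern thus means 255 or -1 depending entirely on how the software chooses to interpret it, which is the point made above about the CPU not differentiating between types of information.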

The CPU contains a special set of memory cells called registers that can be read and written to much more
rapidly than the main memory area. There are typically between two and one hundred registers depending
on the type of CPU. Registers are used for the most frequently needed data items to avoid having to
access main memory every time data is needed. Since data is constantly being worked on, reducing the
need to access main memory (which is often slow compared to the ALU and control units) greatly
increases the computer's speed.

Computer main memory comes in two principal varieties: random access memory or RAM and read-only
memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is pre-
loaded with data and software that never changes, so the CPU can only read from it. ROM is typically
used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when
the power to the computer is turned off, while ROM retains its data indefinitely. In a PC, the ROM
contains a specialized program called the BIOS that orchestrates loading the computer's operating system
from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers,
which frequently do not have disk drives, all of the software required to perform the task may be stored in
ROM. Software that is stored in ROM is often called firmware because it is notionally more like hardware
than software. Flash memory blurs the distinction between ROM and RAM by retaining data when turned
off but being rewritable like RAM. However, flash memory is typically much slower than conventional
ROM and RAM so its use is restricted to applications where high speeds are not required.[18]

In more sophisticated computers there may be one or more RAM cache memories which are slower than
registers but faster than main memory. Generally computers with this sort of cache are designed to move
frequently needed data into the cache automatically, often without the need for any intervention on the
programmer's part.

Input/output (I/O)

Main article: Input/output

Hard disks are common I/O devices used with computers.

I/O is the means by which a computer receives information from the outside world and sends results back.
Devices that provide input or output to the computer are called peripherals. On a typical personal
computer, peripherals include input devices like the keyboard and mouse, and output devices such as the
display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and
output devices. Computer networking is another form of I/O.

Often, I/O devices are complex computers in their own right with their own CPU and memory. A graphics
processing unit might contain fifty or more tiny computers that perform the calculations necessary to
display 3D graphics[citation needed]. Modern desktop computers contain many smaller computers that assist the
main CPU in performing I/O.

Multitasking

Main article: Computer multitasking

While a computer may be viewed as running one gigantic program stored in its main memory, in some
systems it is necessary to give the appearance of running several programs simultaneously. This is
achieved by having the computer switch rapidly between running each program in turn. One means by
which this is done is with a special signal called an interrupt which can periodically cause the computer to
stop executing instructions where it was and do something else instead. By remembering where it was
executing prior to the interrupt, the computer can return to that task later. If several programs are running
"at the same time", then the interrupt generator might be causing several hundred interrupts per second,
causing a program switch each time. Since modern computers typically execute instructions several orders
of magnitude faster than human perception, it may appear that many programs are running at the same
time even though only one is ever executing in any given instant. This method of multitasking is
sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.

Before the era of cheap computers, the principal use for multitasking was to allow many people to share
the same computer.

Multitasking might seem to make a computer that is switching between several programs run more
slowly, in direct proportion to the number of programs it is running. However, most programs spend
much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting
for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until
the event it is waiting for has occurred. This frees up time for other programs to execute so that many
programs may be run at the same time without unacceptable speed loss.
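Time-sharing can be sketched with Python generators: each generator stands in for a program, each yield for the moment an interrupt suspends it, and a small scheduler hands out slices in turn. This is a model of the idea only, not how a real operating system is implemented:

```python
# A sketch of time-sharing: each generator is a "program" that runs until it
# yields control (standing in for being interrupted), and the scheduler
# resumes each runnable program in turn.
def counter(name, limit, trace):
    for i in range(limit):
        trace.append(f"{name}:{i}")
        yield                      # give up the rest of the time slice

def scheduler(programs):
    while programs:
        program = programs.pop(0)
        try:
            next(program)              # run one time slice
            programs.append(program)   # still runnable: back of the queue
        except StopIteration:
            pass                       # program finished; drop it

trace = []
scheduler([counter("A", 2, trace), counter("B", 2, trace)])
print(trace)  # ['A:0', 'B:0', 'A:1', 'B:1']
```

The interleaved trace shows why the programs appear to run "at the same time" even though only one is ever executing in any given instant.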

Multiprocessing

Main article: Multiprocessing

Cray designed many supercomputers that used multiprocessing heavily.

Some computers may divide their work between two or more separate CPUs, creating a multiprocessing
configuration. Traditionally, this technique was utilized only in large and powerful computers such as
supercomputers, mainframe computers and servers. However, multiprocessor and multi-core (multiple
CPUs on a single integrated circuit) personal and laptop computers have become widely available and are
beginning to see increased usage in lower-end markets as a result.

Supercomputers in particular often have highly distinctive architectures that differ significantly from the
basic stored-program architecture and from general purpose computers.[19] They often feature thousands of
CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to
be useful only for specialized tasks due to the large scale of program organization required to successfully
utilize most of the available resources at once. Supercomputers usually see usage in large-scale
simulation, graphics rendering, and cryptography applications, as well as with other so-called
"embarrassingly parallel" tasks.
Networking and the Internet

Main articles: Computer networking and Internet

Visualization of a portion of the routes on the Internet.

Computers have been used to coordinate information between multiple locations since the 1950s. The
U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of
special-purpose commercial systems like Sabre.

In the 1970s, computer engineers at research institutions throughout the United States began to link their
computers together using telecommunications technology. This effort was funded by ARPA (now
DARPA), and the computer network that it produced was called the ARPANET. The technologies that
made the ARPANET possible spread and evolved. In time, the network spread beyond academic and military
institutions and became known as the Internet. The emergence of networking involved a redefinition of
the nature and boundaries of the computer. Computer operating systems and applications were modified
to include the ability to define and access the resources of other computers on the network, such as
peripheral devices, stored information, and the like, as extensions of the resources of an individual
computer. Initially these facilities were available primarily to people working in high-tech environments,
but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the
development of cheap, fast networking technologies like Ethernet and ADSL saw computer networking
become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally.
A very large proportion of personal computers regularly connect to the Internet to communicate and
receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking
is becoming increasingly ubiquitous even in mobile computing environments.

Further topics
Hardware

Main article: Computer hardware
The term hardware covers all of those parts of a computer that are tangible objects. Circuits, displays,
power supplies, cables, keyboards, printers and mice are all hardware.

History of computing hardware

First Generation (Mechanical/Electromechanical)
  Calculators: Antikythera mechanism, Difference Engine, Norden bombsight
  Programmable Devices: Jacquard loom, Analytical Engine, Harvard Mark I, Z3

Second Generation (Vacuum Tubes)
  Calculators: Atanasoff–Berry Computer, IBM 604, UNIVAC 60, UNIVAC 120
  Programmable Devices: Colossus, ENIAC, Manchester Small-Scale Experimental Machine, EDSAC,
  Manchester Mark 1, CSIRAC, EDVAC, UNIVAC I, IBM 701, IBM 702, IBM 650, Z22

Third Generation (Discrete transistors and SSI, MSI, LSI Integrated circuits)
  Mainframes: IBM 7090, IBM 7080, System/360, BUNCH
  Minicomputers: PDP-8, PDP-11, System/32, System/36

Fourth Generation (VLSI integrated circuits)
  Minicomputers: VAX, IBM System i
  4-bit microcomputers: Intel 4004, Intel 4040
  8-bit microcomputers: Intel 8008, Intel 8080, Motorola 6800, Motorola 6809, MOS Technology 6502,
  Zilog Z80
  16-bit microcomputers: Intel 8088, Zilog Z8000, WDC 65816/65802
  32-bit microcomputers: Intel 80386, Pentium, Motorola 68000, ARM architecture
  64-bit microcomputers[20]: Alpha, MIPS, PA-RISC, PowerPC, SPARC, x86-64
  Embedded computers: Intel 8048, Intel 8051
  Personal computers: Desktop computer, Home computer, Laptop computer, Personal digital assistant
  (PDA), Portable computer, Tablet computer, Wearable computer

Theoretical/experimental: Quantum computer, Chemical computer, DNA computing, Optical computer,
Spintronics based computer

Other Hardware Topics

Peripheral device (Input/output)
  Input: Mouse, Keyboard, Joystick, Image scanner
  Output: Monitor, Printer
  Both: Floppy disk drive, Hard disk, Optical disc drive, Teleprinter

Computer busses
  Short range: RS-232, SCSI, PCI, USB
  Long range (Computer networking): Ethernet, ATM, FDDI

Software

Main article: Computer software

Software refers to parts of the computer which do not have a material form, such as programs, data,
protocols, etc. When software is stored in hardware that cannot easily be modified (such as BIOS ROM in
an IBM PC compatible), it is sometimes called "firmware" to indicate that it falls into an uncertain area
somewhere between hardware and software.

Computer software

Operating system
  Unix/BSD: UNIX System V, AIX, HP-UX, Solaris (SunOS), IRIX, List of BSD operating systems
  GNU/Linux: List of Linux distributions, Comparison of Linux distributions
  Microsoft Windows: Windows 95, Windows 98, Windows NT, Windows 2000, Windows XP,
  Windows Vista, Windows CE
  DOS: 86-DOS (QDOS), PC-DOS, MS-DOS, FreeDOS
  Mac OS: Mac OS classic, Mac OS X
  Embedded and real-time: List of embedded operating systems
  Experimental: Amoeba, Oberon/Bluebottle, Plan 9 from Bell Labs

Library
  Multimedia: DirectX, OpenGL, OpenAL
  Programming library: C standard library, Standard Template Library

Data
  Protocol: TCP/IP, Kermit, FTP, HTTP, SMTP
  File format: HTML, XML, JPEG, MPEG, PNG

User interface
  Graphical user interface (WIMP): Microsoft Windows, GNOME, KDE, QNX Photon, CDE, GEM
  Text-based user interface: Command-line interface, Text user interface

Application
  Office suite: Word processing, Desktop publishing, Presentation program, Database management
  system, Scheduling & Time management, Spreadsheet, Accounting software
  Internet Access: Browser, E-mail client, Web server, Mail transfer agent, Instant messaging
  Design and manufacturing: Computer-aided design, Computer-aided manufacturing, Plant
  management, Robotic manufacturing, Supply chain management
  Graphics: Raster graphics editor, Vector graphics editor, 3D modeler, Animation editor, 3D computer
  graphics, Video editing, Image processing
  Audio: Digital audio editor, Audio playback, Mixing, Audio synthesis, Computer music
  Software Engineering: Compiler, Assembler, Interpreter, Debugger, Text Editor, Integrated
  development environment, Performance analysis, Revision control, Software configuration management
  Educational: Edutainment, Educational game, Serious game, Flight simulator
  Games: Strategy, Arcade, Puzzle, Simulation, First-person shooter, Platform, Massively multiplayer,
  Interactive fiction
  Misc: Artificial intelligence, Antivirus software, Malware scanner, Installer/Package management
  systems, File manager

Programming languages

Programming languages provide various ways of specifying programs for computers to run. Unlike
natural languages, programming languages are designed to permit no ambiguity and to be concise. They
are purely written languages and are often difficult to read aloud. They are generally either translated into
machine language by a compiler or an assembler before being run, or translated directly at run time by an
interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are
thousands of different programming languages—some intended to be general purpose, others useful only
for highly specialized applications.
Programming Languages

Lists of programming languages: Timeline of programming languages, Categorical list of programming
languages, Generational list of programming languages, Alphabetical list of programming languages,
Non-English-based programming languages

Commonly used Assembly languages: ARM, MIPS, x86

Commonly used High level languages: BASIC, C, C++, C#, COBOL, Fortran, Java, Lisp, Pascal

Commonly used Scripting languages: Bourne script, JavaScript, Python, Ruby, PHP, Perl

Professions and organizations

As the use of computers has spread throughout society, there are an increasing number of careers
involving computers. Following the theme of hardware, software and firmware, the brains of people who
work in the industry are sometimes known irreverently as wetware or "meatware".

Computer-related professions

Hardware-related: Electrical engineering, Electronics engineering, Computer engineering,
Telecommunications engineering, Optical engineering, Nanoscale engineering

Software-related: Computer science, Human-computer interaction, Information technology, Software
engineering, Scientific computing, Web design, Desktop publishing

The need for computers to work well together and to be able to exchange information has spawned the
need for many standards organizations, clubs and societies of both a formal and informal nature.

Organizations

Standards groups: ANSI, IEC, IEEE, IETF, ISO, W3C

Professional Societies: ACM, ACM Special Interest Groups, IET, IFIP

Free/Open source software groups: Free Software Foundation, Mozilla Foundation, Apache Software
Foundation

See also

• Computability theory
• Computer science
• Computing
• Computers in fiction
• Computer security and Computer insecurity
• Electronic waste
• List of computer term etymologies
• Virtualization

External links
• Computer mini-article

Notes
1. ^ In 1946, ENIAC consumed an estimated 174 kW. By comparison, a typical personal computer
may use around 400 W; over four hundred times less. (Kempf 1961)
2. ^ Early computers such as Colossus and ENIAC were able to process between 5 and 100
operations per second. A modern "commodity" microprocessor (as of 2007) can process billions of
operations per second, and many of these operations are more complicated and useful than early
computer operations.
3. ^ "Heron of Alexandria". Retrieved on 2008-01-15.
4. ^ a b Ancient Discoveries, Episode 11: Ancient Robots, History Channel,
http://www.youtube.com/watch?v=rxjbaQl0ad8, retrieved on 6 September 2008
5. ^ Howard R. Turner (1997), Science in Medieval Islam: An Illustrated Introduction, p. 184,
University of Texas Press, ISBN 0292781490
6. ^ Donald Routledge Hill, "Mechanical Engineering in the Medieval Near East", Scientific
American, May 1991, pp. 64-9 (cf. Donald Routledge Hill, Mechanical Engineering)
7. ^ The Analytical Engine should not be confused with Babbage's difference engine which was a
non-programmable mechanical calculator.
8. ^ B. Jack Copeland, ed., Colossus: The Secrets of Bletchley Park's Codebreaking Computers,
Oxford University Press, 2006
9. ^ This program was written similarly to those for the PDP-11 minicomputer and shows some
typical things a computer can do. All the text after the semicolons are comments for the benefit of
human readers. These have no significance to the computer and are ignored. (Digital Equipment
Corporation 1972)
10. ^ Attempts are often made to create programs that can overcome this fundamental limitation of
computers. Software that mimics learning and adaptation is part of artificial intelligence.
11. ^ It is not universally true that bugs are solely due to programmer oversight. Computer hardware
may fail or may itself have a fundamental problem that produces unexpected results in certain
situations. For instance, the Pentium FDIV bug caused some Intel microprocessors in the early
1990s to produce inaccurate results for certain floating point division operations. This was caused
by a flaw in the microprocessor design and resulted in a partial recall of the affected devices.
12. ^ Even some later computers were commonly programmed directly in machine code. Some
minicomputers like the DEC PDP-8 could be programmed directly from a panel of switches.
However, this method was usually used only as part of the booting process. Most modern
computers boot entirely automatically by reading a boot program from some non-volatile memory.
13. ^ However, there is sometimes some form of machine language compatibility between different
computers. An x86-64 compatible microprocessor like the AMD Athlon 64 is able to run most of
the same programs that an Intel Core 2 microprocessor can, as well as programs designed for
earlier microprocessors like the Intel Pentiums and Intel 80486. This contrasts with very early
commercial computers, which were often one-of-a-kind and totally incompatible with other
computers.
14. ^ High level languages are also often interpreted rather than compiled. Interpreted languages are
translated into machine code on the fly by another program called an interpreter.
15. ^ Although this is a simple program, it contains a software bug. If the traffic signal is showing red
when someone switches the "flash red" switch, it will cycle through green once more before
starting to flash red as instructed. This bug is quite easy to fix by changing the program to
repeatedly test the switch throughout each "wait" period—but writing large programs that have no
bugs is exceedingly difficult.
16. ^ The control unit's role in interpreting instructions has varied somewhat in the past. While the
control unit is solely responsible for instruction interpretation in most modern computers, this is
not always the case. Many computers include some instructions that may only be partially
interpreted by the control system and partially interpreted by another device. This is especially the
case with specialized computing hardware that may be partially self-contained. For example,
EDVAC, the first modern stored program computer to be designed, used a central control unit that
only interpreted four instructions. All of the arithmetic-related instructions were passed on to its
arithmetic unit and further decoded there.
17. ^ Instructions often occupy more than one memory address, so the program counter usually
increases by the number of memory locations required to store one instruction.
18. ^ Flash memory also may only be rewritten a limited number of times before wearing out, making
it less useful for heavy random access usage. (Verma 1988)
19. ^ However, it is also very common to construct supercomputers out of many pieces of cheap
commodity hardware; usually individual computers connected by networks. These so-called
computer clusters can often provide supercomputer performance at a much lower cost than
customized designs. While custom architectures are still used for most of the most powerful
supercomputers, there has been a proliferation of cluster computers in recent years. (TOP500
2006)
20. ^ Most major 64-bit instruction set architectures are extensions of earlier designs. All of the
architectures listed in this table, except for Alpha, existed in 32-bit forms before their 64-bit
incarnations were introduced.
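The polling fix described in note 16 above can be sketched in Python. This is an illustrative sketch only (the helper names `wait_checking_switch` and `flash_red_requested` are hypothetical, not from any cited program): the wait itself repeatedly tests the switch instead of only checking it between phases.

```python
import time

def wait_checking_switch(seconds, flash_red_requested, poll_interval=0.05):
    """Wait for `seconds`, but poll the flash-red switch throughout the
    period rather than only between waits. Returns True as soon as the
    switch is seen on, False if the wait completes undisturbed."""
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        if flash_red_requested():
            return True   # react immediately -- no extra green cycle
        time.sleep(poll_interval)
    return False
```

A traffic-light loop would call this in place of a blind sleep for each phase, checking the return value so a flipped switch cuts the current phase short.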

References
• Kempf, Karl (1961). "Historical Monograph: Electronic Computers Within the Ordnance Corps". Aberdeen Proving Ground (United States Army).
• Phillips, Tony (2000). "The Antikythera Mechanism I". American Mathematical Society. Retrieved on 2006-04-05.
• Shannon, Claude Elwood (1940). "A symbolic analysis of relay and switching circuits". Massachusetts Institute of Technology.
• Digital Equipment Corporation (1972). PDP-11/40 Processor Handbook (PDF). Maynard, MA: Digital Equipment Corporation. http://bitsavers.vt100.net/dec/www.computer.museum.uq.edu.au_mirror/D-09-30_PDP11-40_Processor_Handbook.pdf.
• Verma, G.; Mielke, N. (1988). "Reliability performance of ETOX based flash memories". IEEE International Reliability Physics Symposium.
• Meuer, Hans; Strohmaier, Erich; Simon, Horst; Dongarra, Jack (2006-11-13). "Architectures Share Over Time". TOP500. Retrieved on 2006-11-27.
• Stokes, Jon (2007). Inside the Machine: An Illustrated Introduction to Microprocessors and Computer Architecture. San Francisco: No Starch Press. ISBN 978-1-59327-104-6.


Microprocessor
From Wikipedia, the free encyclopedia


Microprocessor

• Date invented: Late 1960s/early 1970s (see article for explanation)
• Connects to: Printed circuit boards via sockets, soldering, or other methods
• Architectures: PowerPC, x86, x86-64, and many others (see below, and article)
• Common manufacturers: AMD, Analog Devices, Atmel, Cypress, Fairchild, Fujitsu, Hitachi, IBM, Infineon, Intel, Intersil, ITT, Maxim, Microchip, Mitsubishi, Mostek, Motorola, National, NEC, NXP, OKI, Renesas, Samsung, Sharp, Siemens, Signetics, STM, Synertek, Texas, Toshiba, TSMC, UMC, Winbond, Zilog, and others

A microprocessor incorporates most or all of the functions of a central processing unit (CPU) on a single
integrated circuit (IC).[1] The first microprocessors emerged in the early 1970s and were used for
electronic calculators, performing BCD arithmetic on 4-bit words. Other embedded uses of 4- and 8-bit
microprocessors, such as terminals, printers, and various kinds of automation, followed rather quickly.
Affordable 8-bit microprocessors with 16-bit addressing also led to the first general purpose
microcomputers in the mid-1970s.
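The BCD arithmetic mentioned above handles one decimal digit per 4-bit word. A minimal sketch of the decimal-correction step (illustrative only, not any particular chip's logic; function names are hypothetical):

```python
def bcd_add_digit(a, b, carry_in=0):
    """Add two BCD digits (0-9) the way a 4-bit ALU would: binary add,
    then correct the result and set the carry when the raw sum exceeds 9."""
    assert 0 <= a <= 9 and 0 <= b <= 9
    s = a + b + carry_in
    if s > 9:
        return s - 10, 1   # corrected digit, carry out
    return s, 0

def bcd_add(x, y):
    """Add two non-negative integers digit by digit, BCD-style."""
    result, carry, shift = 0, 0, 0
    while x or y or carry:
        d, carry = bcd_add_digit(x % 10, y % 10, carry)
        result += d * 10 ** shift
        x, y, shift = x // 10, y // 10, shift + 1
    return result
```

A calculator chip chains this digit-at-a-time step across the display's digits, which is why early 4-bit processors sufficed for the job.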

Computer processors were for a long period constructed out of small and medium-scale ICs containing
the equivalent of a few to a few hundred transistors. The integration of the whole CPU onto a single VLSI
chip therefore greatly reduced the cost of processing capacity. From their humble beginnings, continued
increases in microprocessor capacity have rendered other forms of computers almost completely obsolete
(see history of computing hardware), with one or more microprocessors serving as the processing element
in everything from the smallest embedded systems and handheld devices to the largest mainframes and
supercomputers.

Since the early 1970s, the increase in capacity of microprocessors has been known to generally follow
Moore's Law, which suggests that the complexity of an integrated circuit, with respect to minimum
component cost, doubles every 18 months. In the late 1990s, heat generation (TDP), due to switching
losses, static current leakage, and other factors, emerged as a leading developmental constraint.[2]
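The 18-month doubling rule can be expressed as a small worked example (illustrative arithmetic only; the function name is ours, not from the literature):

```python
def projected_complexity(initial, years, doubling_period_months=18):
    """Project IC complexity under Moore's-law-style doubling every
    `doubling_period_months` months."""
    doublings = years * 12 / doubling_period_months
    return initial * 2 ** doublings

# Three years at an 18-month doubling period is two doublings,
# so a design of N components is projected to reach 4N.
```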

Contents

• 1 History
o 1.1 First types
o 1.2 Notable 8-bit designs
o 1.3 16-bit designs
o 1.4 32-bit designs
o 1.5 64-bit designs in personal computers
o 1.6 Multicore designs
o 1.7 RISC
• 2 Special-purpose designs
• 3 Market statistics
• 4 Architectures
• 5 See also
o 5.1 Major designers
• 6 Notes
• 7 References
• 8 External links
o 8.1 General

o 8.2 Historical documents

History
Main article: History of general purpose CPUs

First types

The 4004 with cover removed (left) and as actually used (right).

Three projects arguably delivered a complete microprocessor at about the same time, namely Intel's 4004,
the Texas Instruments (TI) TMS 1000, and Garrett AiResearch's Central Air Data Computer (CADC).
In 1968, Garrett AiResearch, with designers Ray Holt and Steve Geller, was invited to produce a digital
computer to compete with electromechanical systems then under development for the main flight control
computer in the US Navy's new F-14 Tomcat fighter. The design was completed by 1970 and used a
MOS-based chipset as the core CPU. The design was significantly (approximately 20 times) smaller and
much more reliable than the mechanical systems it competed against, and was used in all of the early
Tomcat models. This system contained "a 20-bit, pipelined, parallel multi-microprocessor". However,
the system was considered so advanced that the Navy refused to allow publication of the design until
1997. For this reason the CADC, and the MP944 chipset it used, remain relatively unknown even today.
(See First Microprocessor Chip Set.) TI developed the 4-bit TMS 1000 and stressed pre-programmed
embedded applications, introducing a version called the TMS1802NC on September 17, 1971, which
implemented a calculator on a chip. The Intel chip was the 4-bit 4004, released on November 15, 1971;
it was developed by Federico Faggin and Marcian Hoff, and the design team was managed by Leslie L.
Vadász.

TI filed for the patent on the microprocessor. Gary Boone was awarded U.S. Patent 3,757,306 for the
single-chip microprocessor architecture on September 4, 1973. It may never be known which company
actually had the first working microprocessor running on the lab bench. In both 1971 and 1976, Intel and
TI entered into broad patent cross-licensing agreements, with Intel paying royalties to TI for the
microprocessor patent. A nice history of these events is contained in court documentation from a legal
dispute between Cyrix and Intel, with TI as intervenor and owner of the microprocessor patent.

A third party (Gilbert Hyatt) was awarded a patent which might cover the "microprocessor".
See a webpage claiming an invention pre-dating both TI and Intel, describing a "microcontroller".
According to a rebuttal and a commentary, the patent was later invalidated, but not before substantial
royalties were paid out.

A computer-on-a-chip is a variation of a microprocessor which combines the microprocessor core (CPU),
some memory, and I/O (input/output) lines, all on one chip. The computer-on-a-chip patent, called the
"microcomputer patent" at the time, U.S. Patent 4,074,351 , was awarded to Gary Boone and Michael J.
Cochran of TI. Aside from this patent, the standard meaning of microcomputer is a computer using one or
more microprocessors as its CPU(s), while the concept defined in the patent is perhaps more akin to a
microcontroller.

According to A History of Modern Computing, (MIT Press), pp. 220–21, Intel entered into a contract with
Computer Terminals Corporation, later called Datapoint, of San Antonio TX, for a chip for a terminal
they were designing. Datapoint later decided not to use the chip, and Intel marketed it as the 8008 in April,
1972. This was the world's first 8-bit microprocessor. It was the basis for the famous "Mark-8" computer
kit advertised in the magazine Radio-Electronics in 1974. The 8008 and its successor, the world-famous
8080, opened up the microprocessor component marketplace.

Notable 8-bit designs

The 4004 was later followed in 1972 by the 8008, the world's first 8-bit microprocessor. These processors
are the precursors to the very successful Intel 8080 (1974), Zilog Z80 (1976), and derivative Intel 8-bit
processors. The competing Motorola 6800 was released August 1974. Its architecture was cloned and
improved in the MOS Technology 6502 in 1975, rivaling the Z80 in popularity during the 1980s.

Both the Z80 and 6502 concentrated on low overall cost, by combining small packaging, simple computer
bus requirements, and including circuitry that normally must be provided in a separate chip (example: the
Z80 included a memory controller). It was these features that allowed the home computer "revolution" to
accelerate sharply in the early 1980s, eventually delivering such inexpensive machines as the Sinclair
ZX81, which sold for US$99.

The Western Design Center, Inc. (WDC) introduced the CMOS 65C02 in 1982 and licensed the design to
several firms. It became the core of the Apple IIc and IIe personal computers, implantable medical
devices such as pacemakers and defibrillators, and automotive, industrial, and consumer devices. WDC
pioneered the licensing of microprocessor technology, which was later followed by ARM and other
microprocessor Intellectual Property (IP) providers in the 1990s.

Motorola trumped the entire 8-bit market by introducing the MC6809 in 1978, arguably one of the most
powerful, orthogonal, and clean 8-bit microprocessor designs ever fielded – and also one of the most
complex hard-wired logic designs that ever made it into production for any microprocessor. Microcoding
replaced hardwired logic at about this time for all designs more powerful than the MC6809 – because the
design requirements were getting too complex for hardwired logic.

Another early 8-bit microprocessor was the Signetics 2650, which enjoyed a brief surge of interest due to
its innovative and powerful instruction set architecture.

A seminal microprocessor in the world of spaceflight was RCA's RCA 1802 (aka CDP1802, RCA
COSMAC) (introduced in 1976) which was used in NASA's Voyager and Viking spaceprobes of the
1970s, and onboard the Galileo probe to Jupiter (launched 1989, arrived 1995). The RCA COSMAC was
the first microprocessor to implement CMOS technology. The CDP1802 was used because it could be run at very low
power, and because its production process (Silicon on Sapphire) ensured much better protection against
cosmic radiation and electrostatic discharges than that of any other processor of the era. Thus, the 1802 is
said to be the first radiation-hardened microprocessor.

The RCA 1802 had what is called a static design, meaning that the clock frequency could be made
arbitrarily low, even to 0 Hz, a total stop condition. This let the Voyager/Viking/Galileo spacecraft use
minimum electric power for long uneventful stretches of a voyage. Timers or sensors would awaken or
speed up the processor in time for important tasks, such as navigation updates,
attitude control, data acquisition, and radio communication.

16-bit designs

The first multi-chip 16-bit microprocessor was the National Semiconductor IMP-16, introduced in early
1973. An 8-bit version of the chipset was introduced in 1974 as the IMP-8. During the same year,
National introduced the first 16-bit single-chip microprocessor, the National Semiconductor PACE, which
was later followed by an NMOS version, the INS8900.

Other early multi-chip 16-bit microprocessors include one used by Digital Equipment Corporation (DEC)
in the LSI-11 OEM board set and the packaged PDP 11/03 minicomputer, and the Fairchild
Semiconductor MicroFlame 9440, both of which were introduced in the 1975 to 1976 timeframe.

The first single-chip 16-bit microprocessor was TI's TMS 9900, which was also compatible with their TI-
990 line of minicomputers. The 9900 was used in the TI 990/4 minicomputer, the TI-99/4A home
computer, and the TM990 line of OEM microcomputer boards. The chip was packaged in a large ceramic
64-pin DIP package, while most 8-bit microprocessors such as the Intel 8080 used the more common,
smaller, and less expensive plastic 40-pin DIP. A follow-on chip, the TMS 9980, was designed to
compete with the Intel 8080, had the full TI 990 16-bit instruction set, used a plastic 40-pin package,
moved data 8 bits at a time, but could only address 16 KB. A third chip, the TMS 9995, was a new design.
The family later expanded to include the 99105 and 99110.

The Western Design Center, Inc. (WDC) introduced the CMOS 65816 16-bit upgrade of the WDC CMOS
65C02 in 1984. The 65816 16-bit microprocessor was the core of the Apple IIgs and later the Super
Nintendo Entertainment System, making it one of the most popular 16-bit designs of all time.

Intel followed a different path, having no minicomputers to emulate, and instead "upsized" their 8080
design into the 16-bit Intel 8086, the first member of the x86 family which powers most modern PC type
computers. Intel introduced the 8086 as a cost effective way of porting software from the 8080 lines, and
succeeded in winning much business on that premise. The 8088, a version of the 8086 that used an
external 8-bit data bus, was the microprocessor in the first IBM PC, the model 5150. Following up their
8086 and 8088, Intel released the 80186, 80286 and, in 1985, the 32-bit 80386, cementing their PC
market dominance with the processor family's backwards compatibility.

The integrated microprocessor memory management unit (MMU) was developed by Childs et al. of Intel,
and awarded US patent number 4,442,484.

32-bit designs

Upper interconnect layers on an Intel 80486DX2 die.

16-bit designs had only been on the market briefly when full 32-bit implementations started to appear.

The most significant of the 32-bit designs is the MC68000, introduced in 1979. The 68K, as it was widely
known, had 32-bit registers but used 16-bit internal data paths, and a 16-bit external data bus to reduce pin
count, and supported only 24-bit addresses. Motorola generally described it as a 16-bit processor, though
it clearly had a 32-bit architecture. The combination of high performance, large (16 megabytes, or 2^24
bytes) memory space, and fairly low cost made it the most popular CPU design of its class. The Apple Lisa and
Macintosh designs made use of the 68000, as did a host of other designs in the mid-1980s, including the
Atari ST and Commodore Amiga.

The world's first single-chip fully-32-bit microprocessor, with 32-bit data paths, 32-bit buses, and 32-bit
addresses, was the AT&T Bell Labs BELLMAC-32A, with first samples in 1980, and general production
in 1982 (See this bibliographic reference and this general reference). After the divestiture of AT&T in
1984, it was renamed the WE 32000 (WE for Western Electric), and had two follow-on generations, the
WE 32100 and WE 32200. These microprocessors were used in the AT&T 3B5 and 3B15 minicomputers;
in the 3B2, the world's first desktop supermicrocomputer; in the "Companion", the world's first 32-bit
laptop computer; and in "Alexander", the world's first book-sized supermicrocomputer, featuring ROM-
pack memory cartridges similar to today's gaming consoles. All these systems ran the UNIX System V
operating system.

Intel's first 32-bit microprocessor was the iAPX 432, which was introduced in 1981 but was not a
commercial success. It had an advanced capability-based object-oriented architecture, but poor
performance compared to other competing architectures such as the Motorola 68000.

Motorola's success with the 68000 led to the MC68010, which added virtual memory support. The
MC68020, introduced in 1985, added full 32-bit data and address buses. The 68020 became hugely
popular in the Unix supermicrocomputer market, and many small companies (e.g., Altos, Charles River
Data Systems) produced desktop-size systems. The MC68030 followed, integrating the MMU into the
chip, and the 68K family became the processor for virtually everything that wasn't running DOS. The
continued success led to the MC68040, which included an FPU for better math performance. A 68050
failed to achieve its performance goals and was not released, and the follow-up MC68060 was released
into a market saturated by much faster RISC designs. The 68K family faded from the desktop in the early
1990s.

Other large companies designed the 68020 and follow-ons into embedded equipment. At one point, there
were more 68020s in embedded equipment than there were Intel Pentiums in PCs (See this webpage for
this embedded usage information). The ColdFire processor cores are derivatives of the venerable 68020.

During this time (early to mid 1980s), National Semiconductor introduced a very similar 16-bit pinout,
32-bit internal microprocessor called the NS 16032 (later renamed 32016), the full 32-bit version named
the NS 32032, and a line of 32-bit industrial OEM microcomputers. By the mid-1980s, Sequent
introduced the first symmetric multiprocessor (SMP) server-class computer using the NS 32032. This was
one of the design's few wins, and it disappeared in the late 1980s.

The MIPS R2000 (1984) and R3000 (1989) were highly successful 32-bit RISC microprocessors. They
were used in high-end workstations and servers by SGI, among others.

Other designs included the interesting Zilog Z8000, which arrived too late to market to stand a chance and
disappeared quickly.

In the late 1980s, "microprocessor wars" started killing off some of the microprocessors. With only one
major design win (Sequent), the NS 32032 faded out of existence, and Sequent switched to Intel
microprocessors.

From 1985 to 2003, the 32-bit x86 architectures became increasingly dominant in desktop, laptop, and
server markets, and these microprocessors became faster and more capable. Intel had licensed early
versions of the architecture to other companies, but declined to license the Pentium, so AMD and Cyrix
built later versions of the architecture based on their own designs. During this span, these processors
increased in complexity (transistor count) and capability (instructions/second) by at least a factor of 1000.
Intel's Pentium line is probably the most famous and recognizable 32-bit processor model, at least with
the public at large.

64-bit designs in personal computers

While 64-bit microprocessor designs have been in use in several markets since the early 1990s, the early
2000s saw the introduction of 64-bit microchips targeted at the PC market.

With AMD's introduction of a 64-bit architecture backwards-compatible with x86, x86-64 (now called
AMD64), in September 2003, followed by Intel's fully compatible 64-bit extensions (first called IA-32e
or EM64T, later renamed Intel 64), the 64-bit desktop era began. Both versions can run 32-bit legacy
applications without any performance penalty, as well as new 64-bit software. With operating systems
such as Windows XP x64, Windows Vista x64, Linux, BSD, and Mac OS X running natively in 64 bits,
the software is also geared to fully utilize the capabilities of such processors. The move to 64 bits is more
than just an increase in register size from IA-32, as it also doubles the number of general-purpose registers.

The move to 64 bits by PowerPC processors had been intended since the processors' design in the early
1990s and was not a major cause of incompatibility. Existing integer registers are extended, as are all
related data pathways, but, as was the case with IA-32, both floating point and vector units had been
operating at or above 64 bits for several years. Unlike what happened when IA-32 was extended to
x86-64, no new general-purpose registers were added in 64-bit PowerPC, so any performance gained when
using the 64-bit mode for applications making no use of the larger address space is minimal.

Multicore designs
AMD Athlon 64 X2 3600 Dual core processor
Main article: Multi-core (computing)

A different approach to improving a computer's performance is to add extra processors, as in symmetric
multiprocessing designs which have been popular in servers and workstations since the early 1990s.
Keeping up with Moore's Law is becoming increasingly challenging as chip-making technologies
approach their physical limits.

In response, the microprocessor manufacturers look for other ways to improve performance, in order to
hold on to the momentum of constant upgrades in the market.

A multi-core processor is simply a single chip containing more than one microprocessor core, effectively
multiplying the potential performance by the number of cores (provided the operating system and
software are designed to take advantage of more than one processor core). Some components, such as the
bus interface and second-level cache, may be shared between cores. Because the cores are physically very
close, they can communicate at much higher clock rates than discrete multiprocessor systems, improving
overall system performance.
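The software-support caveat above can be illustrated with a small sketch using Python's multiprocessing module (the prime-counting workload is a hypothetical example, chosen only because it splits into independent CPU-bound chunks — the precondition for extra cores to help):

```python
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division -- a CPU-bound chunk."""
    lo, hi = bounds
    return sum(
        n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
        for n in range(lo, hi)
    )

if __name__ == "__main__":
    # Split the range into independent chunks, one task per chunk;
    # Pool starts one worker process per core by default, so the
    # chunks are counted in parallel and the partial sums combined.
    chunks = [(lo, lo + 50_000) for lo in range(0, 200_000, 50_000)]
    with Pool() as pool:
        print(sum(pool.map(count_primes, chunks)))
```

A program that cannot be decomposed this way runs on one core no matter how many are present, which is why multi-core chips multiply only *potential* performance.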

In 2005, the first mass-market dual-core processors were announced, and as of 2007 dual-core processors
are widely used in servers, workstations and PCs while quad-core processors are now available for high-
end applications in both the home and professional environments.

Sun Microsystems has released the Niagara and Niagara 2 chips, both of which feature an eight-core
design. The Niagara 2 supports more threads and operates at 1.6 GHz.

High-end Intel Xeon processors on the LGA771 socket are DP (dual processor) capable, as is the Intel
Core 2 Extreme QX9775, used in Apple's Mac Pro and the Intel Skulltrail motherboard.

RISC

In the mid-1980s to early-1990s, a crop of new high-performance RISC (reduced instruction set
computer) microprocessors appeared, which were initially used in special purpose machines and Unix
workstations, but then gained wide acceptance in other roles.

The first commercial RISC design was released by MIPS Technologies, the 32-bit R2000 (the R1000 was
not released). The R3000 made the design truly practical, and the R4000 introduced the world's first 64-bit
design. Competing projects would result in the IBM POWER and Sun SPARC systems.
Soon every major vendor was releasing a RISC design, including the AT&T CRISP, AMD 29000, Intel
i860 and Intel i960, Motorola 88000, DEC Alpha, and the HP-PA.

Market forces have "weeded out" many of these designs, with almost no desktop or laptop RISC
processors remaining and with SPARC being used in Sun designs only. MIPS is primarily used in
embedded systems, notably in Cisco routers. The rest of the original crop of designs has disappeared.
Other companies have attacked niches in the market, notably ARM, originally intended for home
computer use but since focused on the embedded processor market. Today RISC designs based on MIPS,
ARM, or PowerPC cores power the vast majority of computing devices.

As of 2007, two 64-bit RISC architectures are still produced in volume for non-embedded applications:
SPARC and Power Architecture. The RISC-like Itanium is produced in smaller quantities. The vast
majority of 64-bit microprocessors are now x86-64 CISC designs from AMD and Intel.

Special-purpose designs
Though the term "microprocessor" has traditionally referred to a single- or multi-chip CPU or system-on-
a-chip (SoC), several types of specialized processing devices have followed from the technology. The
most common examples are microcontrollers, digital signal processors (DSP) and graphics processing
units (GPU). Many examples of these are either not programmable, or have limited programming
facilities. For example, in general GPUs through the 1990s were mostly non-programmable and have only
recently gained limited facilities like programmable vertex shaders. There is no universal consensus on
what defines a "microprocessor", but it is usually safe to assume that the term refers to a general-purpose
CPU of some sort and not a special-purpose processor unless specifically noted.

Market statistics
In 2003, about $44 billion (USD) worth of microprocessors were manufactured and sold.[1] Although
about half of that money was spent on CPUs used in desktop or laptop personal computers, those account
for only about 0.2% of all CPUs sold.

Silicon Valley has an old saying: "The first chip costs a million dollars; the second one costs a nickel." In
other words, most of the cost is in the design and the manufacturing setup: once manufacturing is
underway, it costs almost nothing.[citation needed]

About 55% of all CPUs sold in the world are 8-bit microcontrollers. Over 2 billion 8-bit microcontrollers
were sold in 1997. [2]

Less than 10% of all the CPUs sold in the world are 32-bit or more. Of all the 32-bit CPUs sold, about 2%
are used in desktop or laptop personal computers. Most microprocessors are used in embedded control
applications such as household appliances, automobiles, and computer peripherals. "Taken as a whole, the
average price for a microprocessor, microcontroller, or DSP is just over $6." [3]

Architectures
• 65xx
o MOS Technology 6502
o Western Design Center 65xx
• ARM family
• Altera Nios, Nios II
• Atmel AVR architecture (purely microcontrollers)
• EISC
• RCA 1802 (aka RCA COSMAC, CDP1802)
• DEC Alpha
• IBM POWER
• Intel
o 4004, 4040
o 8080, 8085
o 8048, 8051
o iAPX 432
o i860, i960
o Itanium
• LatticeMico32
• M32R architecture
• MIPS architecture
• Motorola
o Motorola 6800
o Motorola 6809
o Motorola 68000 family, ColdFire
o Motorola G3, G4, G5
• NSC 320xx
• OpenCores OpenRISC architecture
• PA-RISC family
• National Semiconductor SC/MP ("scamp")
• Signetics 2650
• SPARC
• SuperH family
• Transmeta Crusoe, Efficeon (VLIW architectures, IA-32 32-bit Intel x86 emulator)
• INMOS Transputer
• x86 architecture
o Intel 8086, 8088, 80186, 80188 (16-bit real mode-only x86 architecture)
o Intel 80286 (16-bit real mode and protected mode x86 architecture)
o IA-32 32-bit x86 architecture
o x86-64 64-bit x86 architecture
• XAP processor from Cambridge Consultants
• Xilinx
o MicroBlaze soft processor
o PowerPC405 embedded hard processor in Virtex FPGAs
• Zilog
o Z80, Z180, eZ80
o Z8, eZ8
• and others

See also


• Central processing unit
• Computer architecture
• Addressing mode
• Digital signal processor
• List of microprocessors
• Microprocessor Chronology
• Arithmetic and logical unit
• CISC / RISC
• Clock rate
• Computer bus
• Computer engineering
• CPU cooling
• CPU core voltage
• CPU design
• CPU locking
• CPU power consumption
• Firmware
• Floating point unit
• Front side bus
• Instruction pipeline
• Instruction set
• Microarchitecture
• Microcode
• Microcontroller
• Microprocessor Chronicles (documentary film)
• Motherboard
• Pipeline
• Superscalar
• Superpipelined
• Wait state
• Scratchpad RAM
• Soft processor

Major designers

In 2007, the companies with the largest share of the microprocessor market were:[3]

• Renesas Technology (21 percent)
• Freescale Semiconductor (12 percent)
• NEC (10 percent)
• Infineon (6 percent)
• Microchip (6 percent)
• Fujitsu (5 percent)
• Matsushita (5 percent)
• STMicroelectronics (5 percent)
• Samsung (4 percent), and
• Texas Instruments Semiconductors (4 percent)

Other notable microprocessor design companies include:

• Intel
• Advanced Micro Devices (AMD)
• IBM Microelectronics
• AMCC
• ARM Holdings
• MIPS Technologies
• VIA Technologies
• Western Design Center
• Sun Microsystems
• CPU Tech

Notes
1. ^ Adam Osborne, An Introduction to Microcomputers Volume 1: Basic Concepts, 2nd Edition, Osborne-
McGraw Hill, Berkeley, California, 1980, ISBN 0-931988-34-9, p. 1-1
2. ^ Hodgin, Rick (2007-12-03). "Six fold reduction in semiconductor power loss, a faster, lower heat process
technology", TG Daily, TG Publishing network. Retrieved on 3 December 2007.
3. ^ "Renesas seeks control of controller arena" by Mark LaPedus 2008

References
• A. K. Ray & K. M. Bhurchandi, "Advanced Microprocessors and Peripherals: Architecture,
Programming and Interfacing", published in India by Tata McGraw-Hill Publishing Company Ltd.

External links
Wikimedia Commons has media related to: Microprocessors

General

• Great Microprocessors of the Past and Present – By John Bayko
• Microprocessor history – Hosted by IBM
• Microprocessor instruction set cards – By Jonathan Bowen
• CPU-Collection — An extensive archive of photographs and information, with hundreds of
microprocessors from 1974 to the present day
• CPU-World – Extensive CPU/MCU/FPU data
• Gecko's CPU Library – The Gecko's CPU/FPU collection from the 4004 to today: hundreds of pages of
pictures and information about processors, packages, sockets, etc.
• HowStuffWorks "How Microprocessors Work"
• IC Die Photography – A gallery of CPU die photographs

Historical documents

• TMS1802NC calculator chip press release – Texas Instruments, September 17, 1971
• 1973: TI Receives first patent on Single-Chip Microprocessor
• TI Awarded Basic Microcomputer Patent – TI, February 17, 1978 ("microcomputer" to be
understood as a single-chip computer; a simple µC)
• Important discoveries in microprocessors during 2004 – Hosted by IBM
• Pico and General Instrument's Single Chip Calculator processor – possibly pre-dating Intel and TI.
• 1974 speculation on the possible applications of the microprocessor
