
Childbirth

From Wikipedia, the free encyclopedia


Childbirth (also called labour, birth, partus or parturition) is the culmination of a
human pregnancy or gestation period with the delivery of one or more newborn infants
from a woman's uterus. The process of human childbirth is categorized in three stages of
labour: the shortening and dilation of the cervix, descent and delivery of the infant, and
delivery of the placenta.[1]

Contents

• 1 The mechanics of birth
• 2 The stages of normal human birth
o 2.1 Latent phase
o 2.2 First stage: contractions
o 2.3 Second stage: delivery
o 2.4 Third stage: placenta
• 3 After the birth
• 4 Pain
o 4.1 Descriptions
o 4.2 Non-medical pain control
o 4.3 Medical pain control
• 5 Complications of birth
o 5.1 Labor complications
o 5.2 Maternal complications
o 5.3 Fetal complications
• 6 Instrumental delivery (Forceps and Ventouse)
• 7 Twins and multiple births
• 8 Variations
o 8.1 Being born in the caul
o 8.2 Orgasm during childbirth
• 9 Professions associated with childbirth
• 10 Social and legal aspects
• 11 Psychological aspects
• 12 Partner and other support
• 13 See also
• 14 References
• 15 External links

The mechanics of birth
Because humans are bipedal with an erect stance and have, relative to the size of the
pelvis, the largest head and shoulders of any species, human fetuses are adapted to make
birth possible.

The erect posture causes the weight of the abdominal contents to thrust on the pelvic
floor, a complex structure which must not only support this weight but allow three
channels to pass through it: the urethra, the vagina and the rectum. The relatively large
head and shoulders require a specific sequence of manoeuvres to occur for the bony head
and shoulders to pass through the bony ring of the pelvis. If these manoeuvres fail, the
progress of labour is arrested. All changes in the soft tissues of the cervix and the birth
canal depend entirely on the successful completion of these six manoeuvres:

1. Engagement of the fetal head in the transverse position. The baby is looking
across the pelvis at one or other of the mother's hips.
2. Descent and flexion of the fetal head.
3. Internal rotation. The fetal head rotates 90 degrees to the occipito-anterior
position, so that the baby's face is towards the mother's rectum.
4. Delivery by extension. The fetal head passes out of the birth canal, tilted
backwards so that the forehead leads the way through the vagina.
5. Restitution. The fetal head turns through 45 degrees to restore its normal
relationship with the shoulders, which are still at an angle.
6. External rotation. The shoulders repeat the corkscrew movements of the head,
which can be seen in the final movements of the fetal head.

The stages of normal human birth
Latent phase

The latent phase of labour may last many days; its contractions are an intensification of
the Braxton Hicks contractions that start around 26 weeks of gestation. Cervical
effacement, the thinning and stretching of the cervix, occurs during the closing weeks of
pregnancy and is usually complete, or nearly complete, by the end of the latent phase.
The degree of cervical effacement may be felt during a vaginal examination. A 'long'
cervix implies that not much has been taken into the lower segment, and vice versa for a
'short' cervix. The latent phase ends with the onset of the active first stage, when the
cervix is about 3 cm dilated.

First stage: contractions
The first stage of labour starts classically when the effaced (thinned) cervix is 3 cm
dilated. There is variation in this point, as some women may have active contractions
before reaching it, while others may reach it without regular contractions. The onset of
actual labour is defined as the point when the cervix begins to dilate progressively.
Rupture of the membranes, or a blood-stained 'show', may or may not occur at around
this stage.

Uterine muscles form opposing spirals from the top of the upper segment of the uterus to
its junction with the lower segment. During effacement, the cervix becomes incorporated
into the lower segment of the uterus. During a contraction, these muscles contract causing
shortening of the upper segment and drawing upwards of the lower segment, in a gradual
expulsive motion. This draws the cervix up over the baby's head. Full dilatation is
reached when the cervix is the size of the baby's head, at around 10 cm of dilation for a
term baby.

The duration of labour varies widely, but active phase averages some 8 hours for women
giving birth to their first child ("primiparae") and 4 hours for women who have already
given birth ("multiparae").[2]

Second stage: delivery

This stage begins when the cervix is fully dilated, and ends when the baby is finally
delivered. At the beginning of the normal second stage, the head is fully engaged in the
pelvis; the widest diameter of the head has successfully passed through the pelvic brim.
Ideally it has also passed below the interspinous diameter, the narrowest part of the
pelvis. If these milestones have been accomplished, all that will remain is for
the fetal head to pass below the pubic arch and out through the introitus. This is assisted
by the additional maternal efforts of "bearing down". The fetal head is seen to 'crown' as
the labia part. At this point the woman may feel a burning or stinging sensation.

Delivery of the fetal head signals the successful completion of the fourth mechanism of
labour (delivery by extension), and is followed by the fifth and sixth mechanisms
(restitution and external rotation).

[Image: A newborn baby with the umbilical cord ready to be clamped.]

The second stage of labour will vary to some extent, depending on how successfully the
preceding tasks have been accomplished.

Third stage: placenta

In this stage, the uterus expels the placenta (afterbirth). The placenta is usually delivered
within 15-30 minutes of the baby being born. Maternal blood loss is limited by
contraction of the uterus following delivery of the placenta. Normal blood loss is less
than 600 mL.

[Image: Breastfeeding during and after the third stage; the placenta is visible in the bowl
to the right.]

The third stage can be managed either expectantly or actively. Expectant management
(also known as physiological management) allows the placenta to be expelled without
medical assistance. Breastfeeding soon after birth and massaging of the top of the uterus
(the fundus) causes uterine contractions that encourage delivery of the placenta. Active
management utilizes oxytocic agents and controlled cord traction. The oxytocic agents
augment uterine muscular contraction and the cord traction assists with rapid delivery of
the placenta.
A Cochrane database study[3] suggests that blood loss and the risk of postpartum bleeding
will be reduced in women offered active management of the third stage of labour.
However, the use of ergometrine for active management was associated with nausea or
vomiting and hypertension, and controlled cord traction requires the immediate clamping
of the umbilical cord.

After the birth
Medical professionals typically recommend breastfeeding of the first milk, colostrum, to
assist with uterine contraction to reduce postpartum bleeding/hemorrhage in the mother,
and to pass antibodies, immunities and other benefits to the baby[citation needed]. Many
cultures feature initiation rites for newborns, such as naming ceremonies, baptism, and
others.

Mothers are often allowed a period where they are relieved of their normal duties to
recover from childbirth. The length of this period varies. In China it is 30 days and is
referred to as "doing the month" or "sitting month" (see Postpartum period)[citation needed]. In
some other countries, taking time off from work to care for a newborn is called
"maternity leave" or "parental leave" and can vary from a few days to several months.

Pain
Pain levels reported by labouring women vary widely. Pain levels seem to be influenced
by fear and anxiety levels, experience with prior childbirth, cultural ideas of childbirth
and pain,[4][5] mobility during labour, and the support given during labour. One study
found that Middle Eastern women, especially those with a low educational background,
had more painful experiences during childbirth.[6]

Pain is only one factor of many influencing women's experience with the process of
childbirth. A systematic review of 137 studies found that personal expectations, the
amount of support from caregivers, quality of the caregiver-patient relationship, and
involvement in decision-making are more important to women's overall satisfaction with
the experience of childbirth than are other factors such as age, socioeconomic status,
ethnicity, preparation, physical environment, pain, immobility, or medical interventions.[7]

Descriptions

Pain in contractions has been described as feeling like a very strong menstrual cramp.
Midwives often encourage refraining from screaming but recommend moaning and
grunting to relieve some pain. Crowning will feel like intense stretching and burning.
Even women who show little reaction to labor pains often show a reaction to crowning.

Non-medical pain control
Some women prefer to avoid analgesic medication during childbirth. They can still try to
alleviate labour pain using psychological preparation, education, massage, hypnosis, or
water therapy in a tub or shower. Some women like to have someone to support them
during labor and birth, such as the father of the baby, the woman's mother, a sister, a
close friend, a partner or a doula. Some women deliver in a squatting or crawling position
in order to more effectively push during the second stage and so that gravity can aid the
descent of the baby through the birth canal.

The human body also has a chemical response to pain, by releasing endorphins.
Endorphins are present before, during, and immediately after childbirth.[8] Some
homebirth advocates believe that this hormone can induce feelings of pleasure and
euphoria during childbirth,[9] reducing the risk of maternal depression some weeks later.[8]

Water birth is an option chosen by some women for pain relief during labour and
childbirth, and some studies have shown waterbirth in an uncomplicated pregnancy to
reduce the need for analgesia, without evidence of increased risk to mother or newborn.[10]
Hot water tubs are available in many hospitals and birthing centres.

Meditation and mind-medicine techniques may also be used for pain control during
labour and delivery. These techniques are used in conjunction with progressive muscle
relaxation and many other forms of relaxation for the mind and body to aid in pain
control for women during childbirth. One such technique is the use of hypnosis in
childbirth.

Medical pain control

Different measures for pain control have varying degrees of success and varying side
effects for the woman and her baby. In some countries of Europe, doctors commonly prescribe
inhaled nitrous oxide gas for pain control; in the UK, midwives may use this gas without
a doctor's prescription. Pethidine (with or without promethazine) may be used early in
labour, as well as other opioids, but if given too close to birth there is a risk of respiratory
depression in the infant.

Popular medical pain-control methods in hospitals include the regional anaesthetic
techniques of epidural block and spinal anaesthesia. Epidural analgesia is a generally safe
and effective method of
relieving pain in labour, but is associated with longer labour, more operative intervention
(particularly instrument delivery), and increases in cost.[11] One study found that the
women receiving epidural analgesia had more fear before the administering of the
epidural than those who did not receive it, but that they did not necessarily have more
pain.[12] Medicine administered via epidural can cross the placenta and enter the
bloodstream of the fetus.[13] Epidural analgesia has no statistically significant impact on
the risk of caesarean section, and does not appear to have an immediate effect on neonatal
status as determined by Apgar scores.[14]

Complications of birth
Birthing complications may be maternal or fetal, and long-term or short-term.

Labor complications

The second stage of labor may be delayed or lengthy due to:

• malpresentation of the fetal head (breech birth, face-first, or other)
• failure of descent of the fetal head through the pelvic brim or the interspinous
diameter
• poor uterine contraction strength
• a big baby and a small pelvis
• shoulder dystocia

Secondary changes may be observed: swelling of the tissues, maternal exhaustion, fetal
heart rate abnormalities. Left untreated, severe complications include death of mother or
baby, and genitovaginal fistula. These complications are commonly seen in developing
countries, where births are often unattended or attended by poorly trained community
members.

Maternal complications

Vaginal birth injuries with visible tears or episiotomies are common. Internal tissue
tearing, as well as nerve damage to the pelvic structures, leads in a proportion of women
to problems with prolapse, incontinence of stool or urine, and sexual dysfunction. Fifteen
percent of women become incontinent of stool or urine, to some degree, after normal
delivery, and this number rises considerably after menopause. Vaginal birth injury is a
necessary, but not sufficient, cause of all non-hysterectomy-related prolapse in later life.
Risk factors for significant vaginal birth injury include:

• a baby weighing more than nine pounds
• the use of forceps or vacuum for delivery. These are more likely markers of other
abnormalities, as forceps and vacuum are not used in normal deliveries.
• the need to repair large tears after delivery

Pelvic girdle pain. Hormones and enzymes work together to produce ligamentous
relaxation and widening of the symphysis pubis during the last trimester of pregnancy.
Most girdle pain occurs before birthing, and is known as diastasis of the pubic
symphysis. Predisposing factors for girdle pain include maternal obesity.

Infection remains a major cause of mortality and morbidity in the developing world
today. The work of Ignaz Semmelweis was seminal in understanding the
pathophysiology and treatment of puerperal fever, and saved many lives.

Hemorrhage, or heavy blood loss, is still the leading cause of death of birthing mothers
in the world today, especially in the developing world. Heavy blood loss leads to
hypovolemic shock, insufficient perfusion of vital organs and death if not rapidly treated.
Blood transfusion may be life-saving. Rare sequelae include hypopituitarism (Sheehan's
syndrome). The maternal mortality rate (MMR) varies from 9 per 100,000 live births in
the US and Europe to 900 per 100,000 live births in Sub-Saharan Africa.[8]

Fetal complications

Mechanical fetal injury

Risk factors for fetal birth injury include fetal macrosomia (big baby), maternal obesity,
the need for instrumental delivery, and an inexperienced attendant. Specific situations
that can contribute to birth injury include breech presentation and shoulder dystocia.
Most fetal birth injuries resolve without long term harm, but brachial plexus injury may
lead to Erb's palsy.

Neonatal infection

Neonates are prone to infection in the first month of life. Some organisms, such as S.
agalactiae (Group B Streptococcus, or GBS), are more likely to cause these occasionally
fatal infections. Risk factors for GBS infection include:

• prematurity (birth prior to 37 weeks gestation)
• a sibling who has had a GBS infection
• prolonged labour or rupture of membranes

Neonatal death

Infant deaths (neonatal deaths from birth to 28 days, or perinatal deaths if including fetal
deaths at 28 weeks gestation and later) are around 1% in industrialized countries. The
"natural" mortality rate of childbirth, where nothing is done to avert maternal death, has
been estimated at between 1,000 and 1,500 deaths per 100,000 births.[15] (See main
articles: neonatal death, maternal death)

The most important factors affecting mortality in childbirth are adequate nutrition and
access to quality medical care ("access" is affected both by the cost of available care, and
distance from health services). "Medical care" in this context does not refer specifically
to treatment in hospitals, but simply routine prenatal care and the presence, at the birth, of
an attendant with birthing skills.

A 1983-1989 study by the Texas Department of Health highlighted the differences in
neonatal mortality (NMR) between high risk and low risk pregnancies. NMR was 0.57%
for doctor-attended high risk births, and 0.19% for low risk births attended by non-nurse
midwives. Conversely, some studies demonstrate a higher perinatal mortality rate with
assisted home births.[16] Around 80% of pregnancies are low-risk. Factors that may make
a birth high risk include prematurity, high blood pressure, gestational diabetes and a
previous cesarean section.

Intrapartum asphyxia: The term "fetal distress" is emotive and misleading. True
intrapartum asphyxia is the impairment of oxygen delivery to the brain and vital tissues
during the progress of labour. It may exist in a pregnancy already impaired by maternal
or fetal disease, or may, rarely, arise de novo in labour. True intrapartum asphyxia is not
as common as previously believed, and is usually accompanied by multiple other
symptoms during the immediate period after delivery. Monitoring may reveal problems
during the birth, but the interpretation and use of monitoring devices is complex and
prone to misinterpretation.

Instrumental delivery (Forceps and Ventouse)
• The woman will have her legs supported in stirrups.
• If an anaesthetic is not already in place, one will be given.
• An episiotomy might be needed.
• A trial of forceps might be performed, which is abandoned in favour of a
caesarean section if delivery is not proceeding optimally.

Twins and multiple births
Twins can be delivered vaginally. In some cases twin delivery is carried out in a larger
delivery room or in an operating theatre, in case complications occur, for example:

• Both twins born vaginally: one comes normally, while the other is breech and/or
helped by a forceps/ventouse delivery.
• One twin born vaginally and the other by caesarean section.
• If the twins are joined at any part of the body (conjoined twins), delivery is
mostly by caesarean section.

Variations
Being born in the caul

When the amniotic sac has not ruptured during labour or pushing, the infant can be born
with the membranes intact. This is referred to as "being born in the caul." The caul is
harmless and its membranes are easily broken and wiped away. In medieval times, and in
some cultures still today, a caul was seen as a sign of good fortune for the baby, even
giving the child psychic gifts such as clairvoyance, and in some cultures was seen as
protection against drowning. The caul was often impressed onto paper and stored away as
an heirloom for the child. With the advent of modern interventionist obstetrics,
premature artificial rupture of the membranes has become common, so babies are rarely
born in the caul.

Orgasm during childbirth

While uncommon, experiencing a form of orgasm during childbirth is possible and
should not be a cause for concern. There are similarities between the process of orgasm
and childbirth; both involve involuntary contractions of some of the same muscles.
Professions associated with childbirth

[Image: Model of a pelvis used at the beginning of the 20th century to teach technical
procedures for a successful childbirth. Museum of the History of Medicine, Porto Alegre,
Brazil.]

Doulas are assistants who support mothers during pregnancy, labour, birth, and
postpartum. They are not medical attendants; rather, they provide emotional support and
non-medical pain relief for women during labour.

Maternal-fetal medicine specialists are experts in managing and treating high-risk
pregnancy and delivery. They are usually also obstetricians.

Midwives provide care to low-risk pregnant mothers. Midwives may be licensed and
registered, or may be lay practitioners. Jurisdictions with legislated midwives will
typically have a registering and disciplinary body, such as a College of Midwifery.
Registered midwives are trained to assist a mother with labour and birth, either through
direct-entry or nurse-midwifery programs. Lay midwives, who are usually not licensed or
registered, typically gain experience through apprenticeship with other lay midwives.

Obstetricians provide care for normal and abnormal births and pathological labour
conditions. Obstetricians are trained surgeons, so they can undertake surgical procedures
relating to childbirth. Such procedures include cesarean sections, episiotomies, or assisted
delivery. Most obstetricians also provide gynecological care, and may have a primary,
well-woman, care element to their practices.

Obstetric nurses assist midwives, doctors, women, and babies prior to, during, and after
the birth process, in the hospital system. Some midwives are also obstetric nurses.
Obstetric nurses hold various certifications and typically undergo additional obstetric
training in addition to standard nursing training.

Social and legal aspects
In most cultures, a person's age is now defined relative to their date of birth. Exceptions
include China, where in some regions age is counted from conception.[citation needed] In
Europe, age was historically counted from baptism.

Some families view the placenta as a special part of birth, since it has been the child's life
support for so many months. Some parents like to see and touch this organ. In some
cultures, parents plant a tree along with the placenta on the child's first birthday. The
placenta may be eaten by the newborn's family, ceremonially or otherwise (for nutrition;
the great majority of animals in fact do this naturally).[17]

The exact location in which childbirth takes place is an important factor in determining
nationality, in particular for birth aboard aircraft and ships.

Psychological aspects
Childbirth can be an intense event, and strong emotions, both positive and negative, can
be brought to the surface.

While many women experience joy, relief, and elation upon the birth of their child, some
women report symptoms compatible with post-traumatic stress disorder (PTSD) after
birth. Between 70 and 80% of mothers in the United States report some feelings of
sadness or "baby blues" after childbirth. Postpartum depression may develop in some
women; about 10% of mothers in the United States are diagnosed with this condition.
Abnormal and persistent fear of childbirth is known as tokophobia.

Group therapy has proven effective as a prophylactic treatment for postpartum
depression.[18]

Childbirth can be stressful for the infant.[citation needed] Stresses associated with breech birth,
such as asphyxiation, may affect the infant's brain.

Partner and other support
Main article: Men's role in childbirth

There is increasing evidence that the participation of the woman's partner in the birth
leads to better birth and post-birth outcomes, provided the partner does not exhibit
excessive anxiety.[19] Research also shows that when a labouring woman was supported
by a female helper, such as a family member or doula, she had less need for chemical
pain relief, the likelihood of caesarean section was reduced, the use of forceps and other
instrumental deliveries was reduced, labour was shorter, and the baby had a higher Apgar
score (Dellman 2004, Vernon 2006).

The Netherlands' traditional history of home labour makes it an attractive site for studies
related to birth: one third of all deliveries there still take place at home, in contrast with
other Western industrialized countries. Dutch fathers have evidently been present at
labour for a long time, as can be observed in paintings from the 17th and 18th centuries.

During this study[citation needed], it was found that fathers can take on different roles during
birth, and that little is said about conflicts between partners, or between partners and
professionals. Among the other findings: the presence of fathers during birth can be
interpreted as a modern version of the anthropological couvade ritual to ease the
woman's pain; the majority of fathers did not perceive any limitation on participating in
the childbirth; and older generations did not play an important role in transmitting
knowledge about birth to these fathers; rather, it came from wives, female acquaintances
and midwives.

The research was based, mainly, on in-depth interviews, where fathers described what
was happening from their partner’s first signals of birth labour until the placenta delivery.

See also

• Pre- and perinatal psychology
• Postnatal
• Lamaze
• Natalism
• Homebirth
• Unassisted childbirth
• Waterbirth
• Pre-labor
• Asynclitic birth, an abnormal birth position
• Vernix caseosa

References

Parent
For other uses, see Parent (disambiguation).

[Image: Faces of mother and child; detail of a sculpture at Soldier Field, Chicago,
Illinois, USA.]

A parent is a mother or father; one who sires or gives birth to and/or nurtures and raises
an offspring. The different roles of parents vary throughout the tree of life, and are
especially complex in human culture.

Contents

• 1 Mother
• 2 Father
• 3 Biological parents and parental testing
o 3.1 Parental testing
• 4 Parent-offspring conflict
• 5 See also
• 6 References

• 7 External links

Mother
Main article: Mother

[Image: Nestlings and mother Mourning Dove.]

A mother is the biological or social female parent of a child or offspring. The maternal
bond describes the feelings the mother has for her (or another's) child. In the case of a
mammal such as a human, the mother gestates her child (called first an embryo, then a
fetus) in the uterus from conception or implantation until the fetus is sufficiently well
developed to be born. The mother then goes into labour and gives birth. Once the child is
born, the mother produces milk to feed the child. Because the mother is the one who
carries the offspring, she is often the parent closer to the child, and is also often the one
who stays home to care for the child. However, if the mother works and the father stays
home to care for the child, the child may have more time to bond with the father,
regardless of the fact that the mother was the one pregnant with the child.

Father
Main article: Father

Like mothers, fathers may be categorised according to their biological, social or legal
relationship with the child. Historically, the biological relationship of paternity has been
determinative of fatherhood. However, proof of paternity has been intrinsically
problematic, and so social rules often determined who would be regarded as a father, e.g.
the husband of the mother.

Biological parents and parental testing
The term biological parent refers to the biological mother or father of an individual.
While an individual's parents are often also their biological parents, the term is seldom
used unless there is an explicit difference between who acted as a parent for that
individual and the person from whom they inherit half of their genes. For example, a
person whose father has remarried may call the new wife their stepmother and continue
to refer to their mother as before, while someone who has had little or no contact with
their biological mother may address their foster parent as their mother, and refer to their
biological mother as such, or perhaps by her first name.

Parental testing

Main article: Parental testing

A paternity test is conducted to prove paternity, that is, whether a man is the biological
father of another individual. This may be relevant to the rights and duties of the father.
Similarly, a maternity test can be carried out. This is less common because, except in the
case of a pregnancy involving embryo transfer or egg donation, it is obvious from
pregnancy and childbirth who the mother is. However, it is used in a number of
situations, such as legal battles where a person's maternity is challenged, where the
mother is uncertain because she has not seen her child for an extended period of time, or
where deceased persons need to be identified.

Although not constituting completely reliable evidence, several congenital traits such as
attached earlobes, the widow's peak, or the cleft chin, may serve as tentative indicators of
(non-)parenthood as they are readily observable and inherited via autosomal-dominant
genes.

A more reliable way to ascertain parenthood is DNA analysis (known as genetic
fingerprinting), although older methods have included ABO blood group typing, analysis
of various other proteins and enzymes, and the use of HLA antigens. Current techniques
for paternity testing use polymerase chain reaction (PCR) and restriction fragment length
polymorphism (RFLP). For the most part, however, genetic fingerprinting has all but
superseded the other forms of testing.

Parent-offspring conflict
Main article: Parent-offspring conflict

Parent-offspring conflict describes the evolutionary conflict arising from differences in
optimal fitness of parents and their offspring. While parents tend to maximize the number
of offspring, the offspring can increase their fitness by getting a greater share of parental
investment often by competing with their siblings. The theory was proposed by Robert
Trivers in 1974 and extends the more general selfish gene theory and has been used to
explain many observed biological phenomena.[1] For example, in some bird species,
although parents often lay two eggs and attempt to raise two or more young, the strongest
fledgling takes a greater share of the food brought by parents and will often kill the
weaker sibling, an act known as siblicide.

David Haig has argued that human fetal genes would be selected to draw more resources
from the mother than it would be optimal for the mother to give, a hypothesis that has
received empirical support. The placenta, for example, secretes allocrine hormones that
decrease the sensitivity of the mother to insulin and thus make a larger supply of blood
sugar available to the fetus. The mother responds by increasing the level of insulin in her
bloodstream; to counteract this, the placenta has insulin receptors that stimulate the
production of insulin-degrading enzymes.[2]

See also

• Bateman's principle - the theory that females almost always invest more energy
into producing offspring than males, and that therefore in most species females
are a limiting resource over which the other sex will compete.
• Child abuse
• Cinderella effect
• Egg and sperm donation.

• Paternal bond
• Paternity (law)
• Parental investment
• Reciprocal socialization
• Surrogate mother

References

Mother

For other uses, see Mother (disambiguation).

Migrant Mother by Dorothea Lange

A mother is a biological and/or social female parent of an offspring. Because social,
cultural, and religious definitions and roles vary so widely, it is difficult to define a
mother in a way that is universally accepted.

Contents
[hide]

• 1 Biological
• 2 Title
• 3 Social role and experience
o 3.1 Social role
o 3.2 Experience
• 4 Religious
• 5 Synonyms and translations
• 6 Famous and mythical mothers
• 7 See also

• 8 Notes

Biological
In the case of a mammal such as a human, the biological mother gestates a fertilized
ovum, first called an embryo and then a fetus. This gestation occurs in the
mother's uterus from conception until the fetus is sufficiently developed to be born. The
mother then goes into labor and gives birth. Once the child is born, the mother produces
milk in a process called lactation to feed the child; often the mother's breast milk is the
child's sole nourishment for the first year or more of the child's life.

Title

Monumento a la Madre in Mexico City. The inscription translates "To her who loves us
before she meets us."

The title mother is often given to a woman other than the biological parent, if it is she
who fulfills the social role. This is most commonly either an adoptive mother or a
stepmother (the biologically unrelated wife of a child's father). Also, in both African-
American and lesbian cultures non-biological othermothers exist. Currently, with
advances in reproductive technologies, the function of biological motherhood can be split
between the genetic mother (who provides the ovum) and the gestational mother (who
carries the pregnancy), and in theory neither might be the social mother (the one who
brings up the child). A healthy connection between a mother and a child forms a secure
base from which the child may later venture forth into the world.[1]

Social role and experience
Social role

See also: Sociology of motherhood

Mothers have historically fulfilled the primary role in the raising of children, but since
the late 20th century, the role of the father in child care has been given greater
prominence in most Western countries.[2][3]

Experience

The experience of motherhood varies greatly depending upon location. The organization
Save the Children has ranked the countries of the world, and found that Scandinavian
countries are the best places to be a mother, whereas countries in sub-Saharan Africa are
the worst.[4] A mother in the bottom 10 countries is over 750 times more likely to die in
pregnancy or childbirth, compared to a mother in the top 10 countries, and a mother in
the bottom 10 countries is 28 times more likely to see her child die before reaching his or
her first birthday.

Religious
Most of the major world religions define tasks or roles for mothers through either
religious law or through the deification or glorification of mothers who served in
substantial religious events. There are many examples of religious law relating to mothers
and women. Some major world religions which have specific religious law or scriptural
canon regarding mothers include Christianity[5], Judaism[6], and Islam[7]. Some examples of
glorification or deification include Mary for Christians, the Hindu Mother Goddess, or
Demeter of ancient Greek belief.

Synonyms and translations
Main article: Mama and papa

The proverbial "first word" of an infant often sounds like "ma" or "mama". This strong
association of that sound with "mother" has persisted in nearly every language on earth,
despite the otherwise wide divergence of languages.

Familiar or colloquial terms for mother in English are:

• mom or mommy, used in most of North America (especially the U.S.) and widely
in the West Midlands of the UK
• mum or mummy, used in the UK, the Netherlands, Australia, and New Zealand
• ma, mam, or mammy, used in Ireland and sometimes in the UK and the US
• maa, amaa, or mata, used in India and sometimes in neighboring countries,
originating from the Sanskrit matrika and mata
• mama, used in many countries, and also the Spanish form of "mother"

In many other languages, similar pronunciations apply:

• mama in Polish and Slovak
• māma (妈妈, 媽媽) in Mandarin Chinese
• máma in Czech
• maman in French and Persian
• mamma in Italian
• mãe in Portuguese
• Ami in Punjabi
• mama in Swahili
• eema (אמא) in Hebrew
• eomma (엄마, IPA: ʌmma) in Korean
• Mama, borrowed from the English, is in common use in Japan.
• In many south Asian cultures and the Middle East the mother is known as amma
or oma or ammi or "ummi", or variations thereof. Many times these terms denote
affection or a maternal role in a child's life.

Famous and mythical mothers

The Hindu mother goddess Parvati feeding her son, the elephant-headed wisdom god
Ganesha

• Mary
• Bithiah
• Gaia
• Hagar
• Isis
• Jocasta, mother of Oedipus, important in Freudian psychology
• Juno
• Queen Maya
• Euripides' Medea
• Venus
• Parvati
• Demeter

See also

• Mary
• Attachment parenting
• Breastfeeding
• Human bonding
• Jungian archetypes
• Lactation
• Matriarch
• Matricide
• Mother Goose
• Matrilocality
• Mother insult
• Mother's Day
• Mothers' rights
• Nuclear family
• Oedipus complex
• Parenting
• Mother ship
• Mother goddess
• Mother Superior or Abbess
• Othermother

Notes

Father
From Wikipedia, the free encyclopedia

For other uses, see Father (disambiguation).
"Dad" redirects here. For other uses, see Dad (disambiguation).
"Fatherhood" redirects here. For other uses, see Fatherhood (disambiguation).
"Fathering" redirects here. For the journal, see Fathering (journal).
Father with child

A father is defined as the male parent of an offspring.[1] The adjective "paternal" refers
to a father, parallel to "maternal" for a mother. According to the anthropologist Maurice
Godelier, the parental role assumed by human males is a critical difference between
human society and that of humans' closest biological relatives - chimpanzees and
bonobos - who appear to be unaware of their "father" connection.[2][3] In Western
countries, fathers often encourage their children's belief in the mythical figure of Santa
Claus, or Father Christmas.

The father-child relationship is the defining factor of the fatherhood role.[4][5] "Fathers
who are able to develop into responsible parents are able to engender a number of
significant benefits for themselves, their communities, and most importantly, their
children."[6] Involved fathers offer developmentally specific provisions to their sons and
daughters throughout the life cycle and are themselves affected by doing so.[7]
Active father figures have a key role to play in reducing behaviour problems in boys and
psychological problems in young women.[8] For example, children who experience
significant father involvement tend to exhibit higher scores on assessments of cognitive
development, enhanced social skills, and fewer behavior problems.[9][10][11] Greater
father-child involvement has also been shown to increase a child's social stability,
educational achievement, and even the potential for a solid marriage as an adult.
The children are also more curious about the world around them and develop greater
problem solving skills.[12] Children who were raised without fathers perceive themselves
to be less cognitively and physically competent than their peers from father-present
families.[13] Mothers raising children without fathers reported more severe disputes with
their child. Sons raised without fathers showed more feminine but no less masculine
characteristics of gender role behavior.[14]

The father is often seen as an authority figure.[15][16][17][18] According to Deleuze, paternal
authority exercises repression over sexual desire.[19] A common observation among
scholars is that the authority of the father and that of the [political] leader are closely
intertwined, and that there is a symbolic identification between domestic authority and
national political leadership.[20] In this sense, links have been shown between the concepts
of "patriarchal", "paternalistic", "cult of personality", "fascist", "totalitarian", and
"imperial".[20] The fundamental common ground between domestic and national authority
is the mechanism of naming (exercising authority in someone's name) and
identification.[20] In a patriarchal society, authority typically uses the rhetoric of
fatherhood and family to implement its rule and assert its legitimacy.[21]

In the Roman and aristocratic patriarchal family, "the husband and the father had a
measure of political authority and served as intermediary between the household and the
polity."[22][23] In Western culture patriarchy and authority have been synonymous.[24] In
19th-century Europe, the idea was common, among both traditionalists and
revolutionaries, that the authority of the domestic father should "be made omnipotent in
the family so that it becomes less necessary in the state".[25][26][20] In the second half of
that century, the authority of the husband over his wife and of the father over his
children was extended, including "increased demands for absolute obedience of children
to the father".[20] Europe saw the rise of a "new ideological hegemony of the nuclear
family form and a legal codification of patriarchy", contemporary with the spread of the
"nation-state model as political norm of order".[20]

Like mothers, human fathers may be categorised according to their biological, social or
legal relationship with the child.[27] Historically, the biological relationship of paternity has
been determinative of fatherhood. However, proof of paternity has been intrinsically
problematic and so social rules often determined who would be regarded as a father, e.g.
the husband of the mother.

This method of determining fatherhood has persisted since Roman times, captured in the
famous maxim: Mater semper certa; pater est quem nuptiae demonstrant (the mother is
always certain; the father is he whom the marriage shows). The historical approach has been
destabilised with the recent emergence of accurate scientific testing, particularly DNA
testing. As a result, the law on fatherhood is undergoing rapid changes. In the United
States, the Uniform Parentage Act essentially defines a father as a man who conceives a
child through sexual intercourse.[citation needed]

The most familiar English terms for father include dad, daddy, papa, pop and pa. Other
colloquial expressions include my old man.

Contents
[hide]

• 1 Categories
o 1.1 Non-biological (social / legal relationship between father and child)
o 1.2 Fatherhood defined by contact level with child
• 2 Non-human fatherhood
• 3 See also
• 4 References

• 5 Bibliography

[edit] Categories

Father reading with children

• Natural/biological father - the most common category: the child is the product of
a man and a woman
• Birth father - the biological father of a child who, due to adoption or parental
separation, does not raise the child or cannot care for him or her
• Surprise father - where the man did not know that there was a child until,
possibly, years afterwards
• Posthumous father - father died before children were born (or even conceived in
the case of artificial insemination)
• Teenage father/youthful father - may be associated with premarital sexual
intercourse
• Non-parental father - unmarried father whose name does not appear on child's
birth certificate: does not have legal responsibility but continues to have financial
responsibility (UK)
• Sperm donor father - a genetic connection but man does not have legal or
financial responsibility if conducted through licensed clinics

[edit] Non-biological (social / legal relationship between father and child)

• Stepfather - the wife has a child from a previous relationship
• Father-in-law - the father of one's spouse
• Adoptive father - the child is adopted (not biologically related)
• Foster father - the child is raised by a man who is not the biological or adoptive
father, usually as part of a couple
• Cuckolded father - where the child is the product of the mother's adulterous
relationship
• Social father - where a man takes de facto responsibility for a child (in such a
situation the child is known as a "child of the family" in English law)
• Mother's partner - assumption that the current partner fills the father role
• Mother's husband - under some jurisdictions (e.g. in Quebec civil law), if the
mother is married to another man, the latter will be defined as the father
• DI Dad - the social / legal father of children produced via donor insemination,
where a donor's sperm were used to impregnate the DI Dad's spouse

[edit] Fatherhood defined by contact level with child

• Weekend/holiday father - where child(ren) only stay(s) with father at weekends,
holidays, etc.
• Absent father - father who cannot or will not spend time with his child(ren)
• Second father - a non-parent whose contact and support is robust enough that near
parental bond occurs (often used for older male siblings who significantly aid in
raising a child).
• Stay at home dad - the male equivalent of a housewife with child

• Where the man in a couple originally seeking IVF treatment withdraws consent
before fertilisation (UK)

• Where the apparently male partner in an IVF arrangement turns out to be legally
female (as evidenced by the birth certificate) at the time of the treatment (UK)
(TLR 1st June 2006)

A biological child of a man who, for the special reason above, is not their legal
father, has no automatic right to financial support or inheritance. Legal
fatherlessness refers to a legal status and not to the issue of whether the father is
now dead or alive.

[edit] Non-human fatherhood
For some animals, it is the fathers who take care of the young.

• Darwin's frog (Rhinoderma darwinii) fathers carry the eggs in the vocal pouch.

• The female seahorse (Hippocampus) deposits eggs into the pouch on the male's
abdomen. The male releases sperm into the pouch, fertilizing the eggs. The
embryos develop within the male's pouch, nourished by their individual yolk sacs.

• Unlike most birds, it is the emperor penguin fathers who incubate the egg,
balancing it on their feet, as the species builds no nest.

Fathers in other species assist mothers with caring for the young.

• Wolf fathers help feed, protect, and play with their pups. In some cases, several
generations of wolves live in the pack, giving pups the care of grandparents,
aunts/uncles, and siblings, in addition to parents.
• Dolphin fathers help in the care of the young.

• A number of bird species have active, caring fathers who assist the mothers.

Most species[citation needed], though, display little or no paternal role in caring for offspring.
The male leaves the female soon after mating and long before any offspring are born. It is
the females who must do all the work of caring for the young.

• A male bear leaves the female shortly after mating and will kill and sometimes eat
any bear cub he comes across, even if the cub is his. Bear mothers spend much of
their cubs' early life protecting them from males. (Many artistic works, such as
advertisements and cartoons, depict kindly "papa bears" when this is the opposite
of reality.)

• Domesticated dog fathers show little interest in their offspring, and unlike wolves,
are not monogamous with their mates and are thus likely to leave them after
mating.

• Male lions will tolerate cubs, but only allow them to eat meat from dead prey after
they have had their fill. Some are quite cruel towards their young and may hurt or
kill them with little provocation. A male who kills another male to take control of
his pride will also usually kill any cubs belonging to that competing male.
However, it is also the males who are responsible for guarding the pride while the
females hunt.

Finally, in some species neither the father nor the mother provides any care:

• This is true for most insects and fish.

[edit] See also
Father can also refer metaphorically to a person who is considered the founder of a body
of knowledge or of an institution. In such context the meaning of "father" is similar to
that of "founder". See List of people known as the father or mother of something.


• Paternal bond
• Sociology of fatherhood
• Fathers' rights
• Responsible Fatherhood
• Fathers' Day
• Mother
• God the Father

[edit] References

Interpersonal relationship
From Wikipedia, the free encyclopedia


Intimate relationships are one type of interpersonal relationship.

An interpersonal relationship is a relatively long-term association between two or more
people. This association may be based on emotions like love and liking, regular business
interactions, or some other type of social commitment. Interpersonal relationships take
place in a great variety of contexts, such as family, friends, marriage, acquaintances,
work, clubs, neighborhoods, and churches. They may be regulated by law, custom, or
mutual agreement, and are the basis of social groups and society as a whole. Although
humans are fundamentally social creatures, interpersonal relationships are not always
healthy. Examples of unhealthy relationships include abusive relationships and
codependence.

A relationship is normally viewed as a connection between two individuals, such as a
romantic or intimate relationship, or a parent-child relationship. Individuals can also have
relationships with groups of people, such as the relation between a pastor and his
congregation, an uncle and a family, or a mayor and a town. Finally, groups or even
nations may have relations with each other, though this is a much broader domain than
that covered under the topic of interpersonal relationships. See such articles as
international relations for more information on associations between groups. Most
scholarly work on relationships focuses on romantic partners in pairs or dyads. These
intimate relationships are, however, only a small subset of interpersonal relationships.

All relationships involve some level of interdependence. People in a relationship tend to
influence each other, share their thoughts and feelings, and engage in activities together.
Because of this interdependence, anything that changes or impacts one member of the
relationship will have some level of impact on the other member.[1] The study of
interpersonal relationships involves several branches of social science, including such
disciplines as sociology, psychology, anthropology, and social work.

Contents
[hide]

• 1 Varieties
• 2 Theories
• 3 Development
• 4 See also
• 5 References

• 6 External links

[edit] Varieties

Close relationships are important for emotional wellbeing throughout the lifespan.

Interpersonal relationships include kinship and family relations in which people become
associated by genetics or consanguinity. These include such roles as father, mother, son,
or daughter. Relationships can also be established by marriage, such as husband, wife,
father-in-law, mother-in-law, uncle by marriage, or aunt by marriage. They may be
formal long-term relationships recognized by law and formalized through public
ceremony, such as marriage or civil union. They may also be informal long-term
relationships such as loving relationships or romantic relationships with or without living
together. In these cases the "other person" is often called lover, boyfriend, or girlfriend,
as distinct from just a male or female friend, or "significant other". If the partners live
together, the relationship may resemble marriage, with the parties possibly even called
husband and wife. Scottish common law can regard such couples as actual marriages
after a period of time. Long-term relationships in other countries can become known as
common-law marriages, although they may have no special status in law. The term
mistress may refer in a somewhat old-fashioned way to a female lover of an already
married or unmarried man. A mistress may have the status of an "official mistress" (in
French maîtresse en titre); as exemplified by the career of Madame de Pompadour.

Friendships consist of mutual liking, trust, respect, and often even love and unconditional
acceptance. They usually imply the discovery or establishment of similarities or common
ground between the individuals.[2] Internet friendships and pen-pals may take place at a
considerable physical distance. Brotherhood and sisterhood can refer to individuals
united in a common cause or having a common interest, which may involve formal
membership in a club, organization, association, society, lodge, fraternity, or sorority.
This type of interpersonal relationship relates to the comradeship of fellow soldiers in
peace or war. Partners or co-workers in a profession, business, or common workplace
also have a long term interpersonal relationship.

Soulmates are individuals intimately drawn to one another through a favorable meeting of
minds and who find mutual acceptance and understanding with one another. Soulmates
may feel themselves bonded together for a lifetime and hence may become sexual
partners, but not necessarily. Casual relationships are sexual relationships extending
beyond one-night stands that exclusively consist of sexual behavior. One can label the
participants as "friends with benefits" or as friends "hooking up" when limited to sexual
intercourse, or regard them as sexual partners in a wider sense. Platonic love is an
affectionate relationship into which the sexual element does not enter, especially in cases
where one might easily assume otherwise.

[edit] Theories
Psychologists have suggested that all humans have a basic, motivational drive to form
and maintain caring interpersonal relationships. According to this view, people need both
stable relationships and satisfying interactions with the people in those relationships. If
either of these two ingredients is missing, people will begin to feel anxious, lonely,
depressed, and unhappy.[3]

According to attachment theory, relationships can be viewed in terms of attachment
styles that develop during early childhood. These patterns are believed to influence
interactions throughout adulthood by shaping the roles people adopt in relationships. For
example, one partner may be securely attached while the other is anxious and avoidant.
Thus, early childhood experience (primarily with parents) is believed to have long lasting
effects on all future relationships.

Social exchange theory interprets relationships in terms of exchanged benefits. It predicts
that people regard relationships in terms of rewards obtained from the relationship, as
well as potential rewards from alternate relationships.[4] Equity theory stems from a
criticism of social exchange theory and suggests that people care about more than just
maximizing rewards. They also want fairness and equity in their relationships.

Relational dialectics regards relationships not as static entities, but as continuing
processes, forever changing. This approach sees constant tension in the negotiation of
three main issues: autonomy vs. connection, novelty vs. predictability, and openness vs.
closedness.

[edit] Development
Interpersonal relationships are dynamic systems that change continuously during their
existence. Like living organisms, relationships have a beginning, a lifespan, and an end.
They tend to grow and improve gradually, as people get to know each other and become
closer emotionally, or they gradually deteriorate as people drift apart and form new
relationships with others. One of the most influential models of relationship development
was proposed by psychologist George Levinger.[5] This model was formulated to
describe heterosexual, adult romantic relationships, but it has been applied to other kinds
of interpersonal relations as well. According to the model, the natural development of a
relationship follows five stages:

1. Acquaintance - Becoming acquainted depends on previous relationships, physical
proximity, first impressions, and a variety of other factors. If two people begin to
like each other, continued interactions may lead to the next stage, but
acquaintance can continue indefinitely.
2. Buildup - During this stage, people begin to trust and care about each other. The
need for compatibility and such filtering agents as common background and goals
will influence whether or not interaction continues.
3. Continuation - This stage follows a mutual commitment to a long-term friendship,
romantic relationship, or marriage. It is generally a long, relatively stable period.
Nevertheless, continued growth and development will occur during this time.
Mutual trust is important for sustaining the relationship.
4. Deterioration - Not all relationships deteriorate, but those that do tend to show
signs of trouble. Boredom, resentment, and dissatisfaction may occur, and
individuals may communicate less and avoid self-disclosure. Loss of trust and
betrayals may take place as the downward spiral continues.
5. Termination - The final stage marks the end of the relationship, either by death in
the case of a healthy relationship, or by separation.

Friendships may involve some degree of transitivity. In other words, a person may
become a friend of an existing friend's friend. However, if two people have a sexual
relationship with the same person, they may become competitors rather than friends.
Accordingly, sexual behavior with the sexual partner of a friend may damage the
friendship (see love triangle). Sexual relations between two friends tend to alter that
relationship, either by "taking it to the next level" or by severing it. Sexual partners may
also be classified as friends, and the sexual relationship may either enhance or diminish
the friendship.

Legal sanction reinforces and regularizes marriages and civil unions as perceived
"respectable" building-blocks of society. In the United States of America, for example,
the de-criminalization of homosexual sexual relations in the Supreme Court decision,
Lawrence v. Texas (2003) facilitated the mainstreaming of gay long-term relationships,
and broached the possibility of the legalization of same-sex marriages in that country.

[edit] See also
Main list: List of basic relationship topics

• Affection
• Attachment theory
• Courtship
• Empathy
• Friendship
• Human bonding
• Interpersonal attraction
• Interpersonal communication
• Interpersonal compatibility
• Intimate relationship
• Love
• Social interaction
• Social rejection
• Sympathy

[edit] References

Cohabitation
From Wikipedia, the free encyclopedia

This article is about a living arrangement. For the situation in governmental politics, see
Cohabitation (government).

Cohabitation is when people live together in an emotionally- and/or physically-intimate
relationship. The term is most frequently applied to couples who are not married.

People may live together for any of a number of reasons, including wanting to test
compatibility or to establish financial security before marrying. Couples may also
cohabit because they are unable to marry legally, for example where same-sex,
interracial or interreligious marriages are not permitted. Other reasons include living
together before marriage as a way to avoid divorce; a way for polygamists or
polyamorists to avoid breaking the law; a way to avoid the higher income taxes paid by
some two-income married couples (in the United States); avoiding negative effects on
pension payments (among older people); philosophical opposition to the institution of
marriage; and seeing little difference between the commitment to live together and the
commitment to marriage. Some individuals may also choose cohabitation because they
see their relationships as private and personal matters, not to be controlled by political,
religious or patriarchal institutions.

Some couples prefer cohabitation because it does not legally commit them for an
extended period, and because it is easier to establish and dissolve without the legal costs
often associated with a divorce. In some jurisdictions cohabitation can be viewed legally
as a common-law marriage, either after a specified period, after the birth of the couple's
child, or if the couple consider themselves, and behave, as husband and wife.
(This helps provide the surviving partner a legal basis for inheriting the deceased's
belongings in the event of the death of their cohabiting partner.)

Today, cohabitation is a common pattern among people in the Western world, especially
those who desire marriage but whose financial situation temporarily precludes it, or who
wish to prepare for what married life will be like before actually getting married, or
because they see no benefit or value offered by marriage. More and more couples choose
to have long-term relationships without marriage, and cohabit as a permanent
arrangement.

Contents
[hide]

• 1 Opposition
• 2 Cohabitation worldwide
o 2.1 United States
 2.1.1 Statistics
 2.1.2 Stability
 2.1.3 Legal status
o 2.2 Europe
o 2.3 Middle East
o 2.4 Asia
o 2.5 Pacific
o 2.6 Canada
o 2.7 Mexico
• 3 See also
• 4 References

• 5 External links
[edit] Opposition
In the Western world, a man and a woman who lived together without being married
were once socially shunned and persecuted and potentially prosecuted by law. In some
jurisdictions, cohabitation was illegal until relatively recently. Other jurisdictions have
created a Common-law marriage status when two people of the opposite sex live together
for a prescribed period of time. Most jurisdictions no longer prosecute this private choice.

Opposition to cohabitation comes mainly from religious groups. Opponents of
cohabitation usually argue that living together in this fashion is less stable and hence
harmful. According to one argument, the total and unconditional commitment of
marriage strengthens a couple's bond and makes the partners feel more secure, more
relaxed, and happier than those who have chosen cohabitation.[1] Opponents of
cohabitation commonly cite statistics that indicate that couples who have lived together
before marriage are more likely to divorce, and that unhappiness, ill health, poverty, and
domestic violence are more common in unmarried couples than in married ones.[2]
Cohabitation advocates, in turn, cite research that either disproves these claims or
indicates that the statistical differences are due to factors other than cohabitation
itself.[3]

[edit] Cohabitation worldwide

[edit] United States

[edit] Statistics

In most parts of the United States, there is no legal registration or definition of
cohabitation, so demographers have developed various methods of identifying
cohabitation and measuring its prevalence. The most important of these sources is the
Census Bureau, which currently describes an "unmarried partner" as "A person age 15
years and over, who is not related to the householder, who shares living quarters, and
who has a close personal relationship with the householder."[4] Before 1995, the Bureau
euphemistically identified any "unrelated" opposite-sex couple living with no other
adults as POSSLQs, or Persons of Opposite Sex Sharing Living Quarters,[5] and it still
reports these numbers to show historical trends. However, such measures should be
taken loosely, as
researchers report that cohabitation often does not have clear start and end dates, as
people move in and out of each other's homes and sometimes do not agree on the
definition of their living arrangement at a particular moment.[6]

In 2001, 8.2% of couples in the United States were calculated to be cohabiting.[7]
In 2005, the U.S. Census Bureau reported 4.85 million cohabiting couples, up more than
1,000 percent from 1960, when there were 439,000 such couples. A 2000 study found
that more than half of newlyweds had lived together, at least briefly, before walking
down the aisle.
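The "more than 1,000 percent" growth figure can be checked directly from the two
census numbers quoted above; a minimal arithmetic sketch (the variable names are
illustrative, not from the source):

```python
# Percent increase from 439,000 cohabiting couples (1960)
# to 4.85 million cohabiting couples (2005), per the Census Bureau figures.
couples_1960 = 439_000
couples_2005 = 4_850_000

pct_increase = (couples_2005 - couples_1960) / couples_1960 * 100
print(f"increase: {pct_increase:.0f}%")  # roughly 1005%, i.e. "more than 1,000 percent"
```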

The cohabiting population includes all ages, but the average cohabiting age group is
25–34.[8] On its own, this statistic is of limited significance, however, as the average
married population also falls in this range.

Stability

In one study, Jay Teachman, a researcher at Western Washington University, studied
premarital cohabitation of women who are in a monogamous relationship.[9] Teachman’s
study showed "women who are committed to one relationship, who have both premarital
sex and cohabit only with the man they eventually marry, have no higher incidence of
divorce than women who abstain from premarital sex and cohabitation. For women in
this category, premarital sex and cohabitation with their eventual husband are just two
more steps in developing a committed, long-term relationship." Teachman's findings
report instead that "It is only women who have more than one intimate premarital
relationship who have an elevated risk of marital disruption. This effect is strongest for
women who have multiple premarital coresidential unions."[10]

Some have claimed that those who live together before marriage report less satisfying
marriages and have a higher chance of separating. A possible explanation for this trend
is that people who cohabited prior to marriage did so out of apprehension towards
commitment, and when marital problems arose after marriage (or, for that matter,
relationship problems arose during the cohabitation itself), this apprehension was more
likely to translate into an eventual separation. Note that this model cites antecedent
apprehension about commitment as the cause of increased break-ups, with cohabitation
only an indicator of such apprehension. Another explanation is that those who choose not
to cohabit prior to marriage are often more conservative in their religious views, a
mindset that might prevent them from divorcing for religious reasons despite
experiencing marital problems no less severe than those encountered by former
cohabitants. In addition, the very act of
living together may lead to attitudes that make happy marriages more difficult. The
findings of one recent study, for example, suggest "there may be less motivation for
cohabiting partners to develop their conflict resolution and support skills." (One
important exception: cohabiting couples who are already planning to marry each other in
the near future have just as good a chance at staying together as couples who don’t live
together before marriage).[11]

Legal status

Some places, including the state of California, have laws that recognize cohabiting
couples as "domestic partners". In California, such couples are defined as people who
"have chosen to share one another's lives in an intimate and committed relationship of
mutual caring," including having a "common residence, and are the same sex or persons
of opposite sex if one or both of the persons are over the age of 62."[12] This recognition
led to the creation of a "Domestic Partners Registry", granting them limited legal
recognition and some rights similar to those of married couples.

Half a century ago, it was illegal in every state for adult lovers to live together without
being married. Today, on the other hand, just five states (Mississippi, Virginia, Florida,
North Dakota and Michigan) still criminalize cohabitation by opposite-sex couples,
although anti-cohabitation laws are generally not enforced.[13] Many legal scholars
believe that, in light of Lawrence v. Texas, such laws making cohabitation illegal are
unconstitutional (North Carolina Superior Court judge Benjamin Alford has struck down
the North Carolina law on that basis).[14]

Europe

• In Denmark, Norway and Sweden, cohabitation is very common; roughly 50% of
all children are born into families of unmarried couples, whereas the same figure
for several other Western European countries is roughly 10%. Some couples
decide to marry later.
• In late 2005, 21% of families in Finland consisted of cohabiting couples (all age
groups). Of couples with children, 18% were cohabiting.[15] Of those aged 18 and
above in 2003, 13.4% were cohabiting.[16] Generally, cohabitation among Finns is
most common for people under 30. Legal obstacles to cohabitation were removed
in 1926 in a reform of the Finnish penal code, while the phenomenon was socially
accepted much later on among non-Christian Finns.
• In the UK, 25% of children are now born to cohabiting parents.
• In France, 17.5% of couples were cohabiting as of 1999.[7]

Middle East

• The cohabitation rate in Israel is less than 3% of all couples, compared to 8%, on
average, in West European countries. [1]
• Cohabitation is illegal according to sharia law (in the countries that enforce it).[17][18]

Asia

• In India, cohabitation had been taboo since British rule. This is no longer true in
big cities, but the taboo is still often found in rural areas with more conservative
values. Female live-in partners have economic rights under the Protection of
Women from Domestic Violence Act 2005.

• In Japan, according to M. Iwasawa at the National Institute of Population and
Social Security Research, less than 3% of females between 25 and 29 are currently
cohabiting, but more than 1 in 5 have had some experience of an unmarried
partnership, including cohabitation. A more recent Iwasawa study has shown a
recent emergence of non-marital cohabitation: couples born in the 1950s showed
an incidence of cohabitation of 11.8%, whereas the 1960s and 1970s birth cohorts
showed cohabitation rates of 30% and 53.9%, respectively. Among people who
had cohabited, 68.8% were urban residents and 31.2% were rural.[19]

• In the Philippines, around 2.4 million Filipinos were cohabiting as of 2004. The
2000 census placed the percentage of cohabiting couples at 19%. The majority of
cohabiting individuals are between the ages of 20 and 24. Poverty was often the
main factor in the decision to cohabit.[20]

Pacific

• In Australia, 22% of couples were cohabiting as of 2005. See Australian Bureau
of Statistics.
• In New Zealand, 18.3% of couples were cohabiting as of 2001.[7]

Canada

• In Canada, 16.0% of couples were cohabiting as of 2001 (29.8% in Quebec, and
11.7% in the other provinces).[7]

Mexico

• In Mexico, 18.7% of couples were cohabiting as of 2005.[7]

See also
• Family
o Family law
• Child
• Interpersonal relationship and Intimate relationship
o Divorce
o Domestic partnership
o Free union
o Marriage
o Pilegesh
• Marriage gap
• Living Apart Together


• Ley de sociedad de convivencia: the Spanish name for the "Cohabitation Societies
Law" (a Wikipedia article not yet translated into English), legislation created on
November 9, 2006, by the Legislative Assembly of Mexico City to establish legal
rights and duties for all cases where two people (for sexual, familial or friendly
reasons) are living together.

References

History of Vietnam
From Wikipedia, the free encyclopedia

History of Vietnam
• Hồng Bàng Dynasty: prior to 257 BC
• Thục Dynasty: 257–207 BC
• First Chinese domination: 207 BC – 39 AD
o Triệu Dynasty: 207–111 BC
• Trưng Sisters: 40–43
• Second Chinese domination: 43–544
o Lady Triệu's Rebellion: 248
• Early Lý Dynasty (Triệu Việt Vương): 544–602
• Third Chinese domination: 602–905
o Mai Hắc Đế: 722
o Phùng Hưng: 791–798
• Autonomy: 905–938
o Khúc Family: 906–930
o Dương Đình Nghệ: 931–937
o Kiều Công Tiễn: 937–938
• Ngô Dynasty: 939–967
o The 12 Lords Rebellion: 966–968
• Đinh Dynasty: 968–980
• Early Lê Dynasty: 980–1009
• Lý Dynasty: 1009–1225
• Trần Dynasty: 1225–1400
• Hồ Dynasty: 1400–1407
• Fourth Chinese domination: 1407–1427
o Later Trần Dynasty: 1407–1413
o Lam Sơn Rebellion: 1418–1427
• Later Lê Dynasty: 1428–1788
o Early Lê: 1428–1788
o Restored Lê: 1533–1788
o Mạc Dynasty: 1527–1592
o Trịnh-Nguyễn War: 1627–1673
• Tây Sơn Dynasty: 1778–1802
• Nguyễn Dynasty: 1802–1945
o Western imperialism: 1887–1945
o Empire of Vietnam: 1945
• Indochina Wars: 1945–1975
• Partition: 1954
o Democratic Republic of Vietnam: 1945–1949 and 1955–1976
o State of Vietnam: 1949–1955
o Republic of Vietnam: 1955–1975
o Rep. of South Vietnam: 1975–1976
• Socialist Republic of Vietnam: from 1976

Related topics
• Champa Dynasties: c. 100–1471
• List of Vietnamese monarchs
• Economic history of Vietnam
• Prehistoric cultures of Vietnam

The history of Vietnam begins around 2,700 years ago. Successive dynasties based in
China ruled Vietnam directly for most of the period from 111 BC until 938, when
Vietnam regained its independence.[1] Vietnam remained a tributary state of its larger
neighbor China for much of its history, but repelled invasions by the Chinese as well as
three invasions by the Mongols between 1257 and 1288.[2] King Trần Nhân Tông later
diplomatically submitted Vietnam as a tributary of the Yuan to avoid further conflicts.
The independent period ended temporarily in the middle to late 19th century, when the
country was colonized by France (see French Indochina). During World War II, Imperial
Japan expelled the French and occupied Vietnam, though it retained French
administrators during the occupation. After the war, France attempted to re-establish its
colonial rule but ultimately failed. The Geneva Accords partitioned the country in two,
with a promise of democratic elections to reunite it.

However, rather than peaceful reunification, partition led to the Vietnam War, a civil war
and a major part of the Cold War. During this time, the People's Republic of China and
the Soviet Union supported the North while the United States supported the South. After
millions of Vietnamese deaths and the American withdrawal from Vietnam in March
1973, the war ended with the fall of Saigon to the North in April 1975. The reunified
Vietnam suffered further internal repression and was isolated internationally due to the
continuing Cold War and the Vietnamese invasion of Cambodia. In 1986, the Communist
Party of Vietnam changed its economic policy and began reforms of the private sector
similar to those in China. Since the mid-1980s, Vietnam has enjoyed substantial
economic growth and some reduction in political repression, though reports of corruption
have also risen.

Contents

• 1 Early kingdoms
• 2 Period of Chinese domination (111 BC – 938 AD)
• 3 Early independence (938 AD – 1009 AD)
• 4 Independent period of Đại Việt (1010 AD – 1527 AD)
o 4.1 Mongol invasions
o 4.2 Champa
o 4.3 Ming occupation and the rise of the Le dynasty
• 5 Divided period (1528–1802)
• 6 19th century and French colonization
o 6.1 French invasion
o 6.2 20th century
• 7 First Indochina War (1945 – 1954)
• 8 Vietnam War (1954 – 1975)
• 9 Socialism after 1975
• 10 Changing names
• 11 Further reading
• 12 References
• 13 See also

• 14 External links
Early kingdoms
Evidence of the earliest established society in Northern Vietnam, other than the
prehistoric Iron Age Đông Sơn culture, was found in Cổ Loa, an ancient city situated
near present-day Hà Nội.

According to myth, the first Vietnamese people were descended from the Dragon Lord
Lạc Long Quân and the Immortal Fairy Âu Cơ. Lạc Long Quân and Âu Cơ had 100 sons
before deciding to part ways. 50 of the children went with their mother to the mountains,
and the other 50 went with their father to the sea. The eldest son became the first in a line
of early Vietnamese kings, collectively known as the Hùng kings (Hùng Vương or the
Hồng Bàng Dynasty). The Hùng kings called their country, located on the Red River
delta in present-day northern Vietnam, Văn Lang. The people of Văn Lang were known
as the Lạc Việt.

Map of Văn Lang, 500 BC.

Song Da bronze drum's surface, Vietnam

Văn Lang is thought to have been a matriarchal society, similar to many other matriarchal
societies common in Southeast Asia and in the Pacific islands at the time. Various
archaeological sites in northern Vietnam, such as Đông Sơn have yielded metal weapons
and tools from this age. Most famous of these artifacts are large bronze drums, probably
made for ceremonial purposes, with sophisticated engravings on the surface, depicting
life scenes with warriors, boats, houses, birds and animals in concentric circles around a
radiating sun at the center.

Many legends from this period offer a glimpse into the life of the people. The Legend of
the Rice Cakes is about a prince who won a culinary contest and then won the throne
because his creations, the rice cakes, reflected his deep understanding of the land's vital
economy: rice farming. The Legend of Giong, about a youth going to war to save the
country wearing iron armor, riding an armored horse, and wielding an iron staff, showed
that metalworking was sophisticated. The Legend of the Magic Crossbow, about a
crossbow that could deliver thousands of arrows, showed the extensive use of archery in
warfare.
By the 3rd century BC, another Viet group, the Âu Việt, emigrated from present-day
southern China to the Red River delta and mixed with the indigenous Van Lang
population. In 258 BC, a new kingdom, Âu Lạc, emerged as the union of the Âu Việt and
the Lạc Việt, with Thục Phán proclaiming himself "King An Dương Vương". At his
capital Cổ Loa, he built many concentric walls around the city for defensive purposes.
These walls, together with skilled Âu Lạc archers, kept the capital safe from invaders for
a while. However, it also gave rise to the first recorded case of espionage in Vietnamese
history, resulting in the downfall of king An Dương Vương.

In 207 BC, an ambitious Chinese warlord named Triệu Đà (Chinese: Zhao Tuo) defeated
king An Dương Vương by having his son Trọng Thủy (Chinese: Zhong Shi) act as a spy
after marrying An Dương Vương's daughter. Triệu Đà annexed Âu Lạc into his domain
located in present-day Guangdong, southern China, then proclaimed himself king of a
new independent kingdom, Nam Việt (Chinese: 南越, Nan Yue). Trọng Thủy, the
supposed crown prince, drowned himself in Cổ Loa out of remorse for the death of his
wife in the war.

Some Vietnamese consider Triệu's rule a period of Chinese domination, since Triệu Đà
was a former Qin general. Others consider it an era of Việt independence, as the Triệu
family in Nam Việt were assimilated into local culture. They ruled independently of
what then constituted China (the Han Dynasty). At one point, Triệu Đà even declared
himself Emperor, equal to the Chinese Han Emperor in the north.

Period of Chinese domination (111 BC – 938 AD)
In 111 BC, Chinese troops invaded Nam Việt and established new territories, dividing
Vietnam into Giao Chỉ (Chinese: 交趾 pinyin: Jiaozhi, now the Red river delta); Cửu
Chân from modern-day Thanh Hoá to Hà Tĩnh; and Nhật Nam, from modern-day Quảng
Bình to Huế. While the Chinese were governors and top officials, the original
Vietnamese nobles (Lạc Hầu, Lạc Tướng) still managed some highlands.

In 40 AD, a successful revolt against harsh rule by Han Governor Tô Định (蘇定 pinyin:
Sū Dìng), led by the two noblewomen Trưng Trắc and her sister Trưng Nhị, recaptured
65 states (including parts of modern Guangxi), and Trưng Trắc became the Queen (Trưng
Nữ Vương). In 42 AD, Emperor Guangwu of Han sent his famous general Mã Viện
(Chinese: Ma Yuan) to quell the revolt. After a torturous campaign, Ma Yuan defeated
the Trưng Queen, who committed suicide. To this day, the Trưng Sisters are revered in
Vietnam as the national symbol of Vietnamese women. Learning a lesson from the Trưng
revolt, the Han and later Chinese dynasties took measures to eliminate the power of the
Vietnamese nobles, and the Vietnamese elites were coerced to assimilate into Chinese
culture and politics. However, in 225 AD, another woman, Triệu Thị Trinh, popularly
known as Lady Triệu (Bà Triệu), led another revolt, which lasted until 248 AD.

During the Tang dynasty, Vietnam was called Annam (Giao Châu), until the early 10th
century AD. Giao Chỉ (with its capital around modern Bắc Ninh province) became a
flourishing trading outpost receiving goods from the southern seas. The "History of Later
Han" (Hậu Hán Thư, Hou Hanshu) recorded that in 166 AD the first envoy from the
Roman Empire to China arrived by this route, and merchants were soon to follow. The
3rd-century "Tales of Wei" (Ngụy Lục, Weilue) mentioned a "water route" (the Red
River) from Jiaozhi into what is now southern Yunnan. From there, goods were taken
overland to the rest of China via the regions of modern Kunming and Chengdu.

At the same time, in present-day central Vietnam, there was a successful revolt of Cham
nations. Chinese dynasties called it Lin-Yi (Lin village). It later became a powerful
kingdom, Champa, stretching from Quảng Bình to Phan Thiết (Bình Thuận).

In the period between the beginning of the Chinese Age of Fragmentation to the end of
the Tang Dynasty, several revolts against Chinese rule took place, such as those of Lý
Bôn and his general and heir Triệu Quang Phục; and those of Mai Thúc Loan and Phùng
Hưng. All of them ultimately failed, yet most notable were Lý Bôn and Triệu Quang
Phục, whose Anterior Lý Dynasty ruled for almost half a century (544 AD to 602 AD)
before the Chinese Sui Dynasty reconquered their kingdom Vạn Xuân.

Early independence (938 AD – 1009 AD)
Early in the 10th century, as China became politically fragmented, successive lords from
the Khúc family, followed by Dương Đình Nghệ, ruled Giao Châu autonomously under
the Tang title of Tiết Độ Sứ (military governor), stopping short of proclaiming
themselves kings.

In 938, the kingdom of Southern Han sent troops to conquer autonomous Giao Châu. Ngô
Quyền, Dương Đình Nghệ's son-in-law, defeated the Southern Han fleet at the Battle of
Bach Dang River (938). He then proclaimed himself King Ngô and effectively began the
age of independence for Vietnam.

Ngô Quyền's untimely death after a short reign resulted in a power struggle for the
throne and the country's first major civil war, the upheaval of the Twelve Warlords (Loạn
Thập Nhị Sứ Quân). The war lasted from 945 AD to 967 AD, when the clan led by Đinh
Bộ Lĩnh defeated the other warlords, unifying the country. Đinh Bộ Lĩnh founded the
Đinh Dynasty and proclaimed himself First Emperor (Tiên Hoàng) of Đại Cồ Việt (Hán
tự: 大瞿越; literally "Great Viet Land"), with its capital in Hoa Lư (modern-day Ninh Bình).
However, the Chinese Song Dynasty only officially recognized him as Prince of Jiaozhi
(Giao Chỉ Quận Vương). Emperor Đinh introduced strict penal codes to prevent chaos
from happening again. He tried to form alliances by granting the title of Queen to five
women from the five most influential families.

In 979 AD, Emperor Đinh Bộ Lĩnh and his crown prince Đinh Liễn were assassinated,
leaving his lone surviving son, the 6-year-old Đinh Toàn, to assume the throne. Taking
advantage of the situation, the Chinese Song Dynasty invaded Đại Cồ Việt. Facing such a
grave threat to national independence, the court's Commander of the Ten Armies (Thập
Đạo Tướng Quân) Lê Hoàn took the throne, founding the Former Lê Dynasty. A capable
military tactician, Lê Hoan realized the risks of engaging the mighty Chinese troops head
on; thus he tricked the invading army into Chi Lăng Pass, then ambushed and killed their
commander, quickly ending the threat to his young nation in 981 AD. The Song Dynasty
withdrew their troops yet would not recognize Lê Hoàn as Prince of Jiaozhi until 12 years
later; nevertheless, he is referred to in his realm as Đại Hành Emperor (Đại Hành Hoàng
Đế). Emperor Lê Hoàn was also the first Vietnamese monarch who began the southward
expansion process against the kingdom of Champa.

Emperor Lê Hoàn's death in 1005 AD resulted in infighting for the throne amongst his
sons. The eventual winner, Lê Long Đĩnh, became the most notorious tyrant in
Vietnamese history. He devised sadistic punishments of prisoners for his own
entertainment and indulged in deviant sexual activities. Toward the end of his short life –
he died at 24 – Lê Long Đĩnh became so ill that he had to lie down when meeting with
his officials in court.

Independent period of Đại Việt (1010 AD – 1527 AD)
Further information: History of the Song Dynasty#Relations with Lý of Vietnam
and border conflict

Southeast Asia circa 1010 AD. Dai Viet lands in blue.

When the king Lê Long Đĩnh died in 1009 AD, a Palace Guard Commander named Lý
Công Uẩn was nominated by the court to take over the throne, and founded the Lý
dynasty. This event is regarded as the beginning of a golden era in Vietnamese history,
with great dynasties to follow. The way Lý Công Uẩn ascended to the throne was rather
uncommon in Vietnamese history. As a high-ranking military commander residing in the
capital, he had every opportunity to seize power during the tumultuous years after
Emperor Lê Hoàn's death, yet preferred not to do so out of his sense of duty. He was, in
a way, "elected" by the court after some debate before a consensus was reached.

Lý Công Uẩn, posthumously referred as Lý Thái Tổ, changed the country's name to Đại
Việt (Hán tự: 大越; literally "Great Viet"). The Lý Dynasty is credited for laying down a
concrete foundation, with strategic vision, for the nation of Vietnam. Leaving Hoa Lư, a
natural fortification surrounded by mountains and rivers, Lý Công Uẩn moved his court
to the new capital in present-day Hanoi and called it Thăng Long (Ascending Dragon).
Lý Công Uẩn thus departed from the militarily defensive mentality of his predecessors
and envisioned a strong economy as the key to national survival. Successive Lý kings
continued to accomplish far-reaching feats: building a dike system to protect the rice
producing area; founding Quốc Tử Giám, the nation's first university; holding regular
examinations to select capable commoners for government positions once every three
years; organizing a new system of taxation; establishing humane treatment of prisoners.
Women were holding important roles in Lý society as the court ladies were in charge of
tax collection. The Lý Dynasty also promoted Buddhism, yet maintained a pluralistic
attitude toward the three main philosophical systems of the time: Buddhism,
Confucianism, and Taoism. During the Lý Dynasty, the Chinese Song Dynasty officially
recognized the Đại Việt monarch as King of Giao Chỉ (Giao Chỉ Quận Vương).

The Lý Dynasty fought two major wars with Song China and made a few conquests
against neighboring Champa in the south. The most notable battle took place on Chinese
territory in 1075 AD. Upon learning that a Song invasion was imminent, the Lý army
and navy, totalling about 100,000 men under the command of Lý Thường Kiệt and Tông
Đản, used amphibious operations to preemptively destroy three Song military
installations at Yong Zhou, Qin Zhou, and Lian Zhou in present-day Guangdong and
Guangxi, killing 100,000 Chinese. The Song Dynasty took revenge and invaded Đại Việt
in 1076, but the Song troops were held back at the Battle of Như Nguyệt River,
commonly known as the Cầu river, now in Bắc Ninh province about 40 km from the
current capital, Hanoi. Neither side was able to force a victory, so the Lý Dynasty
proposed a truce, which the Song Dynasty accepted.

Trần royal battle standard.

Toward the end of the Lý Dynasty, a powerful court minister named Trần Thủ Độ forced
king Lý Huệ Tông to become a Buddhist monk and Lý Chiêu Hoàng, Huệ Tông's young
daughter, to become queen. Trần Thủ Độ then arranged the marriage of Chiêu Hoàng to
his nephew Trần Cảnh and eventually had the throne transferred to Trần Cảnh, thus
beginning the Trần Dynasty. Trần Thủ Độ viciously purged members of the Lý nobility;
some Lý princes, including Lý Long Tường, escaped to Korea.

After the purge, most Trần kings ruled the country in a similar manner to the Lý kings.
Noted Trần Dynasty accomplishments include the creation of a system of population
records based at the village level, the compilation of a formal 30-volume history of Đại
Việt (Đại Việt Sử Ký) by Lê Văn Hưu, and the rise in status of the Nôm script, a system
of writing for the Vietnamese language. The Trần Dynasty also adopted a unique way to
train new kings: as a king aged, he would relinquish the throne to his crown prince, yet
hold the title of August Higher Emperor (Thái Thượng Hoàng), acting as a mentor to the
new Emperor.

Mongol invasions

During the Trần Dynasty, Đại Việt repelled three invasions in 1257 AD, 1284 AD, and
1288 AD by the Mongols under Kublai Khan, who had occupied China and founded the
Yuan dynasty. The key to Đại Việt's successes was to avoid the Mongols' strength in
open field battles and city sieges - the Trần court abandoned the capital and the cities.
The Mongols were then countered decisively at their weak points, which were battles in
swampy areas such as Chương Dương, Hàm Tử, Vạn Kiếp and on rivers such as Vân
Đồn and Bạch Đằng. The Mongols also suffered from tropical diseases and loss of
supplies to Trần army's raids. The Yuan-Trần war reached its climax when the retreating
Yuan fleet was decimated at the Battle of Bach Dang (1288). The military architect
behind Dai Viet's victories was Commander Trần Quốc Tuấn, more popularly known as
Trần Hưng Đạo.

Champa

It was also during this period that the Trần kings waged many wars against the southern
kingdom of Champa, continuing the Viets' long history of southern expansion (known as
Nam Tiến) that had begun shortly after gaining independence from China. Often, they
encountered strong resistance from the Chams. Champa troops led by king Chế Bồng
Nga (Cham: Po Binasuor or Che Bonguar) killed king Trần Duệ Tông in battle and even
laid siege to Đại Việt's capital Thăng Long in 1377 AD and again in 1383 AD. However,
the Trần Dynasty was successful in gaining two Champa provinces, located around
present-day Hue, through the peaceful means of the political marriage of Princess Huyền
Trân to a Cham king.

Ming occupation and the rise of the Le dynasty

The Trần dynasty was in turn overthrown by one of its own court officials, Hồ Quý Ly.
Hồ Quý Ly forced the last Trần king to resign and assumed the throne in 1400. He
changed the country's name to Đại Ngu (Hán tự: 大虞) and moved the capital to Tây Đô,
the Western Capital, now Thanh Hóa. Thăng Long was renamed Đông Đô, the Eastern
Capital. Although widely blamed for causing national disunity and later losing the
country to the Chinese Ming Dynasty, Hồ Quý Ly's reign actually introduced many
progressive, ambitious reforms, including the addition of mathematics to the national
examinations, the open critique of Confucian philosophy, the use of paper currency in
place of coins, investment in building large warships and cannon, and land reform. He
ceded the throne to his son, Hồ Hán Thương, in 1401 and assumed the title Thái Thượng
Hoàng, in a similar manner to the Trần kings.

In 1407, under the pretext of helping to restore the Trần Dynasty, Chinese Ming troops
invaded Đại Ngu and captured Hồ Quý Ly and Hồ Hán Thương. The Hồ Dynasty came
to an end after only 7 years in power. The Ming occupying force annexed Đại Ngu into
the Ming Empire after claiming that there was no heir to the Trần throne. Almost
immediately, Trần loyalists started a resistance war. The resistance, under the leadership
of Trần Quĩ, at first made some advances, but after Trần Quĩ executed two top
commanders out of suspicion, a rift widened within his ranks and resulted in his defeat
in 1413.

In 1418, a wealthy farmer, Lê Lợi, led the Lam Sơn uprising against the Ming from his
base of Lam Sơn (Thanh Hóa province). Overcoming many early setbacks and aided by
the strategic advice of Nguyễn Trãi, Lê Lợi's movement finally gathered momentum,
marched northward, and laid siege to Đông Quan (now Hanoi), the capital of the Ming
occupation. The Ming Emperor sent a reinforcement force, but Lê Lợi staged an ambush
and killed the Ming commander, Liễu Thăng (Chinese: Liu Sheng), at Chi Lăng. Ming
troops at Đông Quan surrendered. The Lam Sơn uprising killed 300,000 Ming soldiers.
In 1428, Lê Lợi ascended the throne and began the Hậu Lê Dynasty (Posterior Lê). He
renamed the country Đại Việt and moved the capital back to Thăng Long.

Map of Vietnam showing the conquest of the south (the Nam Tiến, 1069–1757).

The Lê Dynasty carried out land reforms to revitalize the economy after the war. Unlike
the Lý and Trần kings, who were more influenced by Buddhism, the Lê kings leaned
toward Confucianism. A comprehensive set of laws, the Hồng Đức code was introduced
with some strong Confucian elements, yet also included some progressive rules, such as
the rights of women. Art and architecture during the Lê Dynasty also became more
influenced by Chinese styles than during the Lý and Trần Dynasty. The Lê Dynasty
commissioned the drawing of national maps and had Ngô Sĩ Liên continue the task of
writing Đại Việt's history up to the time of Lê Lợi. King Lê Thánh Tông opened hospitals
and had officials distribute medicines to areas affected by epidemics.

In 1471, Le troops led by king Lê Thánh Tông invaded Champa and captured its capital
Vijaya. This event effectively ended Champa as a powerful kingdom, although some
smaller surviving Cham kingdoms still lasted for a few centuries more. It initiated the
dispersal of the Cham people across Southeast Asia. With the kingdom of Champa
mostly destroyed and the Cham people exiled or suppressed, Vietnamese colonization of
what is now central Vietnam proceeded without substantial resistance. However, despite
becoming greatly outnumbered by Kinh (Việt) settlers and the integration of formerly
Cham territory into the Vietnamese nation, the majority of Cham people nevertheless
remained in Vietnam and they are now considered one of the key minorities in modern
Vietnam. The city of Huế, founded in 1600, lies close to where the Champa capital of
Indrapura once stood. In 1479, King Lê Thánh Tông also campaigned against Laos and
captured its capital Luang Phrabang. He made further incursions westwards into the
Irrawaddy River region in modern-day Burma before withdrawing.

Divided period (1528–1802)
The Lê Dynasty was overthrown by one of its generals, Mạc Đăng Dung, in 1527. He
killed the Lê emperor and proclaimed himself emperor, starting the Mạc Dynasty. After
suppressing revolts for two years, Mạc Đăng Dung adopted the Trần Dynasty's practice
and ceded the throne to his son, Mạc Đăng Doanh, while he himself took the title Thái
Thượng Hoàng.

Meanwhile, Nguyễn Kim, a former official in the Lê court, revolted against the Mạc and
helped king Lê Trang Tông restore the Lê court in the Thanh Hóa area. Thus a civil war
began between the Northern Court (Mạc) and the Southern Court (Restored Lê). Nguyễn
Kim's side controlled the southern part of Đại Việt (from Thanhhoa to the south), leaving
the north (including Đông Kinh-Hanoi) under Mạc control. When Nguyễn Kim was
assassinated in 1545, military power fell into the hands of his son-in-law, Trịnh Kiểm. In
1558, Nguyễn Kim's son Nguyễn Hoàng, suspecting that Trịnh Kiểm might kill him as
Kiểm had killed Hoàng's elder brother to secure power, asked to be made governor of the
far southern provinces, from present-day Quảng Bình to Bình Định. Hoàng feigned
insanity, so Kiểm was fooled into thinking that sending him south was a good move, as
Hoàng would quickly be killed in the lawless border regions. However, Hoàng governed the
south effectively while Trịnh Kiểm, and then his son Trịnh Tùng, carried on the war
against the Mạc. Nguyễn Hoàng sent money and soldiers north to help the war but
gradually he became more and more independent, transforming their realm's economic
fortunes by turning it into an international trading post.

The civil war between the Lê/Trịnh and Mạc dynasties ended in 1592, when the army of
Trịnh Tùng conquered Hanoi and executed king Mạc Mậu Hợp. Survivors of the Mạc
royal family fled to the northern mountains in the province of Cao Bằng and continued to
rule there until 1667, when Trịnh Tạc conquered this last Mạc territory. Ever since
Nguyễn Kim's restoration, the Lê kings had acted only as figureheads. After the fall of
the Mạc Dynasty, all real power in the north belonged to the Trịnh Lords.

In the year 1600, Nguyễn Hoàng also declared himself Lord (officially "Vương",
popularly "Chúa") and refused to send more money or soldiers to help the Trịnh. He also
moved his capital to Phú Xuân, modern-day Huế. Nguyễn Hoàng died in 1613 after
having ruled the south for 55 years. He was succeeded by his sixth son, Nguyễn Phúc
Nguyên, who likewise refused to acknowledge the power of the Trịnh, yet still pledged
allegiance to the Lê king.

Trịnh Tráng succeeded Trịnh Tùng, his father, upon his death in 1623. Tráng ordered
Nguyễn Phúc Nguyên to submit to his authority. The order was refused twice. In 1627,
Trịnh Tráng sent 150,000 troops southward in an unsuccessful military campaign. The
Trịnh were much stronger, with a larger population, economy, and military, but they were
unable to vanquish the Nguyễn, who had built two defensive stone walls and invested in
Portuguese artillery.

See also: Artillery of the Nguyen lords

Map of Vietnam showing (roughly) the areas controlled by the Trịnh, Nguyễn, Mạc, and
Champa about the year 1640.
One of the earliest Western maps of Vietnam, published in 1651 by Alexandre de Rhodes
(north is oriented to the right).

The Trịnh-Nguyễn War lasted from 1627 until 1672. The Trịnh army staged at least
seven offensives, all of which failed to capture Phú Xuân. For a time, starting in 1651, the
Nguyễn themselves went on the offensive and attacked parts of Trịnh territory. However,
the Trịnh, under a new leader, Trịnh Tạc, forced the Nguyễn back by 1655. After one last
offensive in 1672, Trịnh Tạc agreed to a truce with the Nguyễn Lord Nguyễn Phúc Tần.
The country was effectively divided in two.

The Trịnh and the Nguyễn maintained a relative peace for the next hundred years, during
which both sides made significant accomplishments. The Trịnh created centralized
government offices in charge of the state budget and currency production, unified the
weight units into a decimal system, established printing shops to reduce the need to
import printed materials from China, opened a military academy, and compiled history books.

Meanwhile, the Nguyễn Lords continued the southward expansion by the conquest of the
remaining Cham land. Việt settlers also arrived in the sparsely populated area known as
"Water Chenla", which was the lower Mekong Delta portion of Chenla (present-day
Cambodia). Between the mid-17th and mid-18th centuries, as Chenla was weakened
by internal strife and Siamese invasions, the Nguyễn Lords used various means, including
political marriage, diplomatic pressure, and political and military favors, to gain the
area around present-day Saigon and the Mekong Delta. The Nguyễn army at times also
clashed with the Siamese army to establish influence over Chenla.

In 1771, the Tây Sơn rebellion broke out in Quy Nhơn, which was then under Nguyễn
control. The leaders of this rebellion were three brothers, Nguyễn Nhạc, Nguyễn Lữ, and
Nguyễn Huệ, who were not related to the Nguyễn lords. By 1776, the Tây Sơn had
occupied all of the Nguyễn Lords' land and killed almost the entire royal family. The
surviving prince Nguyễn Phúc Ánh (often called Nguyễn Ánh) fled to Siam, and obtained
military support from the Siamese king. Nguyễn Ánh came back with 50,000 Siamese
troops to regain power, but was defeated at the Battle of Rạch Gầm–Xoài Mút and nearly
killed. He fled Vietnam, but he did not give up.

The Tây Sơn army commanded by Nguyễn Huệ marched north in 1786 to fight the Trịnh
Lord, Trịnh Khải. The Trịnh army was defeated, and Trịnh Khải committed suicide. The Tây Sơn
army captured the capital in less than two months. The last Lê emperor, Lê Chiêu Thống,
fled to China and petitioned the Chinese Qing Emperor for help. The Qing emperor
Qianlong supplied Lê Chiêu Thống with a massive army of around 200,000 troops to
regain his throne from the usurper. Nguyễn Huệ proclaimed himself Emperor Quang
Trung and defeated the Qing troops with 100,000 men in a surprise seven-day campaign
during the lunar new year (Tết). During his reign, Quang Trung envisioned many reforms
but died of an unknown cause while marching south in 1792, at the age of 40.

During the reign of Emperor Quang Trung, Đại Việt was in fact divided into three political
entities. The Tây Sơn leader Nguyễn Nhạc ruled the centre of the country from his
capital, Qui Nhơn. Emperor Quang Trung ruled the north from the capital Phú Xuân (Huế).
In the South, Nguyễn Ánh, assisted by many talented recruits from the South, captured
Gia Định (present day Saigon) in 1788 and established a strong base for his force.

After Quang Trung's death, the Tây Sơn Dynasty became unstable as the remaining
brothers fought against each other and against the people who were loyal to Nguyễn
Huệ's infant son. Nguyễn Ánh sailed north in 1799, capturing Tây Sơn's stronghold Qui
Nhơn. In 1801, his force took Phú Xuân, the Tây Sơn capital. Nguyễn Ánh finally won
the war in 1802, when he besieged Thăng Long (Hanoi) and executed Nguyễn Huệ's son,
Nguyễn Quang Toản, along with many Tây Sơn generals and officials. Nguyễn Ánh
ascended the throne and proclaimed himself Emperor Gia Long. Gia is for Gia Định, the old
name of Saigon; Long is for Thăng Long, the old name of Hanoi. Hence Gia Long
implied the unification of the country. The Nguyễn dynasty lasted until Bảo Đại's
abdication in 1945. As China for centuries had referred to Đại Việt as Annam, Gia Long
asked the Chinese Qing emperor to rename the country, from Annam to Nam Việt. To
prevent any confusion of Gia Long's kingdom with Triệu Đà's ancient kingdom, the
Chinese emperor reversed the order of the two words to Việt Nam. The name Việt Nam has
thus been in use since Emperor Gia Long's reign, although historians have since found
the name in older books in which the Vietnamese referred to their country as
Việt Nam.

The Period of Division with its many tragedies and dramatic historical developments
inspired many poets and gave rise to some Vietnamese masterpieces in verse such as the
epic poem The Tale of Kieu (Truyện Kiều) by Nguyễn Du, Song of a Soldier's Wife
(Chinh Phụ Ngâm) by Đặng Trần Côn and Đoàn Thị Điểm, and a collection of satirical,
erotically charged poems by the female poet Hồ Xuân Hương.

19th century and French colonization
Main article: Nguyễn Dynasty

Flag of Colonial Annam.
French army attacking Qing (Thanh) forces at Lạng Sơn, 1885

Western exposure in Vietnam dates back to 166 BC, with the arrival of merchants from
the Roman Empire; to 1292, with the visit of Marco Polo; and to the early 1500s, with the
arrival of Portuguese and other European traders and missionaries.[citation needed] Alexandre
de Rhodes, a French Jesuit priest, improved on earlier work by Portuguese missionaries
and developed the Vietnamese romanized alphabet Quốc Ngữ in his Dictionarium
Annamiticum Lusitanum et Latinum of 1651.[3]

Between 1627 and 1775, two powerful families partitioned the country: the Nguyễn
Lords ruled the South and the Trịnh Lords ruled the North. The Trịnh-Nguyễn War gave
European traders the opportunity to support each side with weapons and technology: the
Portuguese assisted the Nguyễn while the Dutch helped the Trịnh.

Main articles: Gia Long and Minh Mang
See also: Citadel of Saigon

In 1784, during the conflict between Nguyễn Ánh, the surviving heir of the Nguyễn
Lords, and the Tây Sơn Dynasty, a French Catholic Bishop, Pigneaux de Behaine, sailed
to France to seek military backing for Nguyễn Ánh. At Louis XVI's court, Pigneaux
brokered the Little Treaty of Versailles, which promised French military aid in return for
Vietnamese concessions. When the French Revolution broke out, Pigneaux's plan failed to
materialize. Undaunted, Pigneaux went to the French territory of Pondicherry, India. He
secured two ships, a regiment of Indian troops, and a handful of volunteers and returned
to Vietnam in 1788. One of Pigneaux's volunteers, Jean-Marie Dayot, reorganized
Nguyễn Ánh's navy along European lines and defeated the Tây Sơn at Qui Nhơn in 1792.
A few years later, Nguyễn Ánh's forces captured Saigon, where Pigneaux died in 1799.
Another volunteer, Victor Olivier de Puymanel, would later build the Gia Định fort in
central Saigon.

After Nguyễn Ánh established the Nguyễn Dynasty in 1802, he tolerated Catholicism and
employed some Europeans in his court as advisors. However, he and his successors were
conservative Confucians who resisted Westernization. The next Nguyễn emperors, Minh
Mạng, Thiệu Trị, and Tự Đức, brutally suppressed Catholicism and pursued a 'closed
door' policy, perceiving the Westerners as a threat. Tens of thousands of Vietnamese and
foreign-born Christians were persecuted, and trade with the West slowed during this
period. There were frequent uprisings against the Nguyễn, with hundreds of such events
being recorded. These acts were soon used as pretexts for France to invade Vietnam.
The early Nguyễn Dynasty had engaged in many of the constructive
activities of its predecessors, building roads, digging canals, issuing a legal code, holding
examinations, sponsoring care facilities for the sick, compiling maps and history books,
and exerting influence over Cambodia and Laos. However, those efforts were insufficient
in the new age of science, technology, industrialization, and international trade and
politics, especially when faced with technologically superior European forces exerting
strong influence over the region. The Nguyễn Dynasty is
usually blamed for failing to modernize the country in time to prevent French
colonization in the late 19th century.

French invasion

Main articles: Cochinchina campaign, Truong Dinh, Phan Dinh Phung, Nguyen
Trung Truc, and Phan Thanh Gian

Under the orders of Napoleon III of France, French gunships under Rigault de Genouilly
attacked the port of Đà Nẵng in 1858, causing significant damage yet failing to gain any
foothold. De Genouilly then decided to sail south and captured the poorly defended city of Gia
Định (present-day Saigon). From 1859 to 1867, French troops expanded their control
over all six provinces of the Mekong Delta and formed a French colony known as Cochin
China. A few years later, French troops landed in northern Vietnam (which they called
Tonkin) and captured Hà Nội twice, in 1873 and 1882. The French managed to keep their
grip on Tonkin even though, twice, their top commanders, Francis Garnier and Henri
Rivière, were ambushed and killed. France assumed control over the whole of Vietnam
after the Franco-Chinese War (1884-1885). French Indochina was formed in October 1887
from Annam (Trung Kỳ, central Vietnam), Tonkin (Bắc Kỳ, northern Vietnam), and Cochin
China (Nam Kỳ, southern Vietnam), together with Cambodia; Laos was added in 1893.
Within French Indochina, Cochin China had the status of a French colony, Annam was a
protectorate where the Nguyễn Dynasty still ruled in name, and Tonkin had a French
governor with local governments run by Vietnamese officials.

After Gia Định fell to French troops, many Vietnamese resistance movements broke out
in occupied areas, some led by former court officers, such as Trương Định, some by
peasants, such as Nguyễn Trung Trực, who sank the French gunship L'Espérance using
guerrilla tactics. In the north, most movements were led by former court officers and
lasted decades, with Phan Đình Phùng until 1895 and Hoàng Hoa Thám until 1911. Even
the teenage Nguyễn Emperor Hàm Nghi left the Imperial Palace of Huế in 1885 and
started the Cần Vương, or "Save the King", movement, trying to rally the people to resist
the French. He was captured in 1888 and exiled to French Algeria. Decades later, two
more Nguyễn kings, Thành Thái and Duy Tân, were also exiled to Africa for their anti-
French tendencies.

20th century

In the early 20th century, Vietnamese patriots realized that they could not defeat France
without modernization. Having been exposed to Western philosophy, they aimed to
establish a republic upon independence, departing from the royalist sentiments of the Cần
Vương movements. Japan's defeat of Russia in the Russo-Japanese War served as a
perfect example of modernization helping an Asian country defeat a powerful European
empire.

There emerged two parallel movements of modernization. The first was the Đông Du
("Go East") Movement started in 1905 by Phan Bội Châu. Phan Bội Châu's plan was to
send Vietnamese students to Japan to learn modern skills, so that in the future they could
lead a successful armed revolt against the French. With Prince Cường Để, Phan Bội Châu
started two organizations in Japan: Duy Tân Hội and Việt Nam Công Hiến Hội. Due to
French diplomatic pressure, Japan later deported Phan Bội Châu to China.

Phan Chu Trinh

Phan Boi Chau

Phan Chu Trinh, who favored a peaceful, non-violent struggle to gain independence, led
the second movement Duy Tân ("Modernization"). He stressed the need to educate the
masses, modernize the country, foster understanding and tolerance between the French
and the Vietnamese, and ensure a peaceful transition of power.

The early part of the 20th century also saw the growing status of the Romanized Quốc
Ngữ alphabet for the Vietnamese language. Vietnamese patriots realized the potential of
Quốc Ngữ as a useful tool to quickly reduce illiteracy and to educate the masses. The
traditional Chinese scripts or the Nôm script were seen as too cumbersome and too
difficult to learn. The use of prose in literature also became popular with the appearance
of many novels; most famous were those from the literary circle Tự Lực Văn Đoàn.

Main articles: Viet Nam Quoc Dan Dang and Yen Bai mutiny

As the French suppressed both movements, and after witnessing revolutionaries in action
in China and Russia, Vietnamese revolutionaries began to turn to more radical paths. Phan
Bội Châu created the Việt Nam Quang Phục Hội in Guangzhou, planning armed
resistance against the French. In 1925, French agents captured him in Shanghai and
spirited him to Vietnam. Due to his popularity, Phan Bội Châu was spared from
execution and placed under house arrest until his death in 1940. In 1927, the Việt Nam
Quốc Dân Đảng (Vietnamese Nationalist Party), modeled after the Guomindang in
China, was founded. In 1930, the party launched the armed Yên Bái mutiny in Tonkin,
which resulted in its chairman, Nguyễn Thái Học, and many other leaders being captured
and executed by guillotine.

Marxism was also introduced to Vietnam with the emergence of three separate
Communist parties: the Indochinese Communist Party, the Annamese Communist Party, and
the Indochinese Communist Union, joined later by a Trotskyist movement led by Tạ Thu
Thâu. In 1930, the Communist International (Comintern) sent Nguyễn Ái Quốc (later Hồ
Chí Minh) to Hong Kong to coordinate the unification of the parties into the Vietnamese
Communist Party, with Trần Phú as the first Secretary General. The party later changed
its name to the Indochinese Communist Party, as the Comintern, under Stalin, did not
favor nationalist sentiments. Nguyễn Ái Quốc was a leftist revolutionary who had lived in
France since 1911. He participated in founding the French Communist Party and in 1924
traveled to the Soviet Union to join the Comintern. Through the late 1920s, he acted as a
Comintern agent to help build Communist movements in Southeast Asia. During the
1930s, the Vietnamese Communist Party was nearly wiped out under French suppression
with the execution of top leaders such as Trần Phú, Lê Hồng Phong, and Nguyễn Văn
Cừ.

In 1940, during World War II, Japan invaded Indochina, keeping the Vichy French
colonial administration in place as a Japanese puppet. In 1941 Hồ Chí Minh, formerly
known as Nguyễn Ái Quốc, arrived in northern Vietnam to form the Việt Minh Front,
short for Việt Nam Độc Lập Đồng Minh Hội. The Việt Minh Front was supposed to be an
umbrella group for all parties fighting for Vietnam's independence, but was dominated by
the Communist Party. The Việt Minh had a modest armed force and during the war
worked with the American Office of Strategic Services to collect intelligence on the
Japanese. From China, other non-Communist Vietnamese parties also joined the Việt
Minh and established armed forces with backing from the Guomindang.

First Indochina War (1945 – 1954)
Main article: First Indochina War
In 1944-1945, millions of Vietnamese people starved to death in the Japanese occupation
of Vietnam.[4]

In early 1945, due to a combination of Japanese exploitation and poor weather, a famine
broke out in Tonkin killing between 1 and 2 million people. In March 1945, Japanese
occupying forces ousted the French administration in Indochina. Emperor Bảo Đại of the
Nguyễn Dynasty nominally declared Vietnam independent, but the Japanese remained in
occupation.

When the Japanese surrendered to the Allies in August 1945, a power vacuum was created
in Vietnam. The Việt Minh launched the "August Revolution" across the country to seize
government offices. Emperor Bảo Ðại abdicated on August 25, 1945, ending the Nguyễn
Dynasty. On September 2, 1945 Hồ Chí Minh declared Vietnam independent under the
new name of the Democratic Republic of Vietnam (DRV) and held the position of
Chairman (Chủ Tịch).

British forces landed in southern Vietnam in October, disarming the Japanese and
restoring order. The British commander in Southeast Asia, Lord Mountbatten, sent over
20,000 troops of the 20th Indian Division under General Douglas Gracey to occupy
Saigon. The first soldiers arrived on 6 September and built up to full strength over the
following weeks. In addition, they re-armed Japanese prisoners of war, known as the
Gremlin Force. The British began to withdraw in December 1945, but the withdrawal was
not completed until June of the following year; the last British soldiers killed in Vietnam
died in June 1946. Altogether, 40 British and Indian troops were killed and over a hundred
were wounded, while Vietnamese casualties were 600. The British were followed by French
troops trying to re-establish their rule. In the north, Chiang Kai-shek's Guomindang army
entered Vietnam from China, also to disarm the Japanese, followed by the forces of the
non-Communist Vietnamese parties, such as Việt Nam Quốc Dân Đảng and Việt Nam Cách Mạng Đồng
Minh Hội. In 1946, Vietnam had its first National Assembly election, which drafted the
first constitution, but the situation was still precarious: the French tried to regain power
by force; some Cochin-Chinese politicians formed a secessionist government of Cochin
China (Nam Kỳ Quốc); and non-Communist and Communist forces engaged
each other in sporadic battles. Stalinists purged Trotskyists. Religious sects and resistance
groups formed their own militias. The Communists eventually suppressed all non-
Communist parties but failed to secure a peace deal with France.

In 1947, full-scale war broke out between the Việt Minh and France. Realizing that
colonialism was coming to an end worldwide, France fashioned the semi-independent State
of Vietnam within the French Union, with Bảo Đại as Head of State. Meanwhile, as the
Communists under Mao Zedong took over China, the Viet Minh began to receive military
aid from China. Besides supplying materials, Chinese cadres also pressured the
Vietnamese Communist Party, then under First Secretary Trường Chinh, to emulate their
brand of revolution, unleashing a purge of "bourgeois and feudal" elements from the Viet
Minh ranks, carrying out a ruthless and bloody land reform campaign (Cải Cách Ruộng
Đất), and denouncing "bourgeois and feudal" tendencies in arts and literature. Many true
patriots and devoted Communist revolutionaries in the Viet Minh suffered mistreatment
or were even executed during these movements. Many others became disenchanted and
left the Viet Minh. The United States became strongly opposed to Hồ Chí Minh. In the
1950s the government of Bảo Ðại gained recognition by the United States and the United
Kingdom.

The Việt Minh force grew significantly with China's assistance and in 1954, under the
command of General Võ Nguyên Giáp, launched a major siege against French bases in
Điện Biên Phủ. The Việt Minh force surprised Western military experts with their use of
primitive means to move artillery pieces and supplies up the mountains surrounding Điện
Biên Phủ, giving them a decisive advantage. On May 7, 1954, French troops at Điện Biên
Phủ, under Christian de Castries, surrendered to the Việt Minh, and in July 1954 the
Geneva Accord was signed between France and the Việt Minh, paving the way for the
French to leave Vietnam.

Vietnam War (1954 – 1975)
Main article: Vietnam War

The Geneva Conference of 1954 ended France's colonial presence in Vietnam and
partitioned the country into two states at the 17th parallel pending unification on the basis
of internationally supervised free elections. Ngô Ðình Diệm, a former mandarin with a
strong Catholic and Confucian background, was selected as Premier of the State of
Vietnam by Bảo Đại. While Diệm was trying to settle the differences between the various
armed militias in the South, Bảo Ðại was persuaded to reduce his power. Diệm used a
referendum in 1955 to depose Bảo Đại and declare himself President of the Republic of
Vietnam (South Vietnam). The Republic of Vietnam (RVN) was proclaimed in Saigon on
October 22, 1955. The United States began to provide military and economic aid to the
RVN, training RVN personnel, and sending U.S. advisors to assist in building the
infrastructure for the new government.

Also in 1954, Việt Minh forces took over North Vietnam in accordance with the Geneva Accord.
Two million North Vietnamese civilians emigrated to South Vietnam to avoid the
imminent Communist regime. At the same time, Viet Minh armed forces from South
Vietnam were also moving to North Vietnam, as dictated by the Geneva Accord.
However, some high ranking Viet Minh cadres secretly remained in the South to follow
the local situation closely. The most important figure among those was Lê Duẩn.

Main article: 1955 State of Vietnam referendum

The Geneva Accord had promised elections to determine the government of a unified
Vietnam. However, as only France and the Việt Minh had signed the document, the
United States and Ngô Đình Diệm's government refused to abide by the agreement,
fearing that Hồ Chí Minh would win the election due to his wartime popularity, establishing
Communism in the whole of Vietnam. Ngô Đình Diệm took strong measures to
secure South Vietnam against perceived internal threats. He eliminated the private
militias of the Bình Xuyên and of the Cao Đài and Hòa Hảo religious sects. In October
1955, he deposed Bảo Đại and, after rigging a referendum, proclaimed himself President
of the newly established Republic of Vietnam.[5][6] He repressed political
opposition, arresting the famous writer Nguyễn Tường Tam, who committed suicide
while awaiting trial in jail.[7] Diệm also acted aggressively to remove Communist agents
still remaining in the South. He formed the Cần Lao Nhân Vị Party, mixing Personalist
philosophy with labor rhetoric and modeling its organization after the Communist Party,
although it was anti-Communist and pro-Catholic. Another controversial policy was
the Strategic Hamlet Program, which aimed to build fortified villages to lock out the
Communists. It proved ineffective, as many Communists were already part of the
population and indistinguishable from other villagers, and it became unpopular because it
limited the villagers' freedom and altered their traditional way of life.

In 1960, at the Third Party Congress of the Vietnamese Communist Party, which had been
renamed the Labor Party in 1951, Lê Duẩn arrived from the South and strongly
advocated the use of revolutionary warfare to topple Diệm's regime, unify the country,
and build Marxist-Leninist socialism. Despite some elements in the Party opposing the
use of force, Lê Duẩn won the seat of First Secretary of the Party. As Hồ Chí Minh was
aging, Lê Duẩn effectively took the helm of the war from him. The first step of his war plan
was coordinating a rural uprising in the South (Đồng Khởi) and forming the National
Front for the Liberation of South Vietnam (NLF) toward the end of 1960. The figurehead
leader of the NLF was Nguyễn Hữu Thọ, a South Vietnamese lawyer, but the true
leadership was the Communist Party hierarchy in South Vietnam. Arms, supplies, and
troops came from North Vietnam into South Vietnam via a system of trails, named the
Ho Chi Minh Trail, that branched into Laos and Cambodia before entering South
Vietnam. At first, most foreign aid for North Vietnam came from China, as Lê Duẩn
distanced Vietnam from the "revisionist" policy of the Soviet Union under Nikita
Khrushchev. However, under Leonid Brezhnev, the Soviet Union picked up the pace of
aid and provided North Vietnam with heavy weapons such as T-54 tanks, artillery, MiG
fighter planes, and surface-to-air missiles.

Main articles: Ngo Dinh Diem, Buddhist crisis, Hue Vesak shootings, Xa Loi
Pagoda raids, Cable 243, and Arrest and assassination of Ngo Dinh Diem
See also: Ngo Dinh Can, Ngo Dinh Nhu, and Le Quang Tung

Meanwhile, in South Vietnam, although Ngô Đình Diệm personally was respected for his
nationalism, he ran a nepotistic and authoritarian regime. Elections were routinely rigged,
and Diệm discriminated in favor of the Roman Catholic minority on many issues. His
religious policies sparked protests from the Buddhist community after demonstrators
were killed on Vesak, Buddha's birthday, in 1963 when they were protesting a ban on the
Buddhist flag. This incident sparked mass protests calling for religious equality. The most
famous case was that of the Venerable Thích Quảng Đức, who burned himself to death in protest.
The images of this event made worldwide headlines and brought extreme embarrassment
for Diệm. The tension was not resolved, and on August 21 the ARVN Special Forces
loyal to his brother and chief adviser, Ngô Đình Nhu, and commanded by Lê Quang Tung
raided Buddhist pagodas across the country. In the United States, the Kennedy
administration became worried that the problems of Diệm's regime were undermining the
US anti-Communist effort in Southeast Asia. On November 1, 1963, confident that the US
would not intervene or cut off aid in response, South Vietnamese generals led by Dương
Văn Minh staged a coup d'état and overthrew Ngô Đình Diệm, killing both him and
his brother Nhu.

Between 1963 and 1967, South Vietnam was extremely unstable as no government could
keep power for long. There were repeated coups, often more than one a year. The
Communist-run NLF expanded their operation and scored some significant military
victories. In 1965, the US, then under President Lyndon Johnson, decided to send troops
to South Vietnam to secure the country and started to bomb North Vietnam, assuming
that if South Vietnam fell to the Communists, other countries in Southeast Asia would
follow, in accordance with the Domino Theory. Other US allies, such as Australia, New
Zealand, South Korea, Thailand, the Philippines, and Taiwan also sent troops to South
Vietnam. Although the American-led troops succeeded in containing the advance of
Communist forces, the presence of foreign troops, the widespread bombing over all of
Vietnam, and the social vices that mushroomed around US bases upset the sense of
national pride among many Vietnamese, North and South, causing many to become
sympathetic to North Vietnam and the NLF.

In 1967, South Vietnam held National Assembly and presidential elections, with Lt.
General Nguyễn Văn Thiệu elected to the presidency, bringing the government some
measure of stability. However, in 1968, the NLF launched a massive
and surprise Tết Offensive (known in South Vietnam as "Biến Cố Tết Mậu Thân" or in
the North as "Cuộc Tổng Tấn Công và Nổi Dậy Tết Mậu Thân"), attacking almost all
major cities in South Vietnam over the Vietnamese New Year (Tết). NLF and North
Vietnamese forces captured the city of Huế, where many mass graves were later found;
many of the executed victims had ties to the South Vietnamese government or the US
(Thảm Sát Tết Mậu Thân). Over the course of the year, the NLF forces were pushed out
of all the cities in South Vietnam and nearly destroyed. In subsequent major offensives in
later years, North Vietnamese regulars with artillery and tanks took over the fighting. In
the months following the Tết Offensive, an American unit massacred civilian villagers
suspected of sheltering Việt Cộng (NLF) guerrillas in the hamlet of Mỹ Lai in central
Vietnam, causing an uproar of protest around the world.

In 1969, Hồ Chí Minh died, leaving wishes that his body be cremated. However, the
Communist Party embalmed his body for public display and built the Ho Chi Minh
Mausoleum on Ba Đình Square in Hà Nội, in the style of Lenin's Mausoleum in Moscow.

Although the Tết Offensive was a catastrophic military defeat for the Việt Cộng, it was a
stunning political victory as it led many Americans to view the war as unwinnable. U.S.
President Richard Nixon entered office with a pledge to end the war "with honor." He
normalized US relations with China in 1972 and entered into détente with the USSR.
Nixon thus forged a new strategy to deal with the Communist Bloc, taking advantage of
the rift between China and the Soviet Union. A costly war in Vietnam began to appear
less effective for the cause of Communist containment. Nixon proposed "Vietnamization"
of the war, with South Vietnamese troops taking charge of the fighting, yet still receiving
American aid and, if necessary, air and naval support. The new strategy started to show
some effects: in 1970, troops from the Army of the Republic of Vietnam (ARVN)
successfully conducted raids against North Vietnamese bases in Cambodia (Cambodian
Campaign); in 1971, the ARVN made an incursion into Southern Laos to cut off the Ho
Chi Minh Trail in Operation Lam Son 719, but the operation failed as most high positions
captured by ARVN paratroopers were overrun by North Vietnamese troops; in 1972, the
ARVN successfully held the town of An Lộc against massive attacks from North
Vietnamese regulars and recaptured the town of Quảng Trị near the demilitarised zone
(DMZ) in the centre of the country during the Easter Offensive.

At the same time, Nixon was pressuring both Hanoi and Saigon to sign the Paris Peace
Agreement of 1973 so that American military forces could withdraw from Vietnam. The
pressure on Hanoi materialized with the Christmas Bombings in 1972. In South Vietnam,
Nguyễn Văn Thiệu vocally opposed any accord with the Communists, but was threatened
with withdrawal of American aid.

Despite the peace treaty, the North continued the war as Lê Duẩn had envisioned, and the
South still tried to recapture lost territories. In the U.S., Nixon resigned after the
Watergate scandal, and South Vietnam was seen as having lost a strong backer. Under U.S.
President Gerald Ford, the Democratic-controlled Congress became less willing to
provide military support to South Vietnam.

In 1974, South Vietnam also fought and lost the Battle of Hoàng Sa against China over
the control of the Paracel Islands in the South China Sea. Neither North Vietnam nor the
U.S. interfered.

In early 1975, the North Vietnamese military, led by General Văn Tiến Dũng, launched a
massive attack on the Central Highlands town of Buôn Mê Thuột. South Vietnamese troops
had anticipated an attack on the neighboring province of Pleiku instead, and were caught
off guard. President Nguyễn Văn Thiệu ordered the withdrawal of all troops from the
Central Highlands to the coastal areas because, with shrinking American aid, South
Vietnamese forces could not afford to spread themselves too thin. However, due to lack of
experience and logistics for such a large troop movement in such a short time, the whole
South Vietnamese 2nd Corps got bogged down on narrow mountain roads, flooded with
thousands of civilian refugees, and was decimated by ambushes along the way. The
South Vietnamese First Corps near the DMZ was cut off, received conflicting orders from
Saigon on whether to fight or to retreat, and eventually collapsed. Many civilians tried to
flee to Saigon via land, air, and sea routes, suffering massive casualties along the way. In
early April 1975, South Vietnam set up a last-ditch defense line at Xuân Lộc under
commander Lê Minh Đảo. North Vietnamese troops failed to penetrate the line and had
to make a detour, which the South Vietnamese lacked the troops to block. President
Nguyễn Văn Thiệu resigned, and power passed to Dương Văn Minh.

Dương Văn Minh had led the coup against Diệm in 1963. By the mid-1970s, he had
leaned toward the "Third Party" (Thành Phần Thứ Ba), South Vietnamese elites who
favored dialogue and cooperation with the North. Communist infiltrators in the South
tried to work out political deals to let Dương Văn Minh ascend to the Presidency, with
the hope that he would prevent a last stand, destructive battle for Saigon. Although many
South Vietnamese units were ready to defend Saigon, and the ARVN 4th Corps was still
intact in the Mekong Delta, Dương Văn Minh ordered a surrender on April 30, 1975,
sparing Saigon from destruction. Nevertheless, the North Vietnamese army's reputation
for harshness toward perceived traitors preceded it, and hundreds of thousands of South
Vietnamese fled the country by all means: airplanes, helicopters, ships, fishing boats, and
barges. Most were picked up by the U.S. Seventh Fleet in the South China Sea or landed
in Thailand. The seaborne refugees came to be known as "boat people". In a famous case,
a South Vietnamese pilot, with his wife and children aboard a small Cessna plane,
miraculously landed safely without a tailhook on the aircraft carrier USS Midway.

During this period, North Vietnam was a Socialist state with a centralized command
economy, an extensive security apparatus to enforce the dictatorship of the proletariat, a
powerful propaganda machine that effectively rallied the people for the Party's causes, a
superb intelligence system that infiltrated South Vietnam (spies such as Phạm Xuân Ẩn
climbed to high government positions), and a severe suppression of political opposition.
Even some decorated veterans and famed Communist cadres, such as Trần Đức Thảo,
Nguyễn Hữu Đang, Trần Dần, Hoàng Minh Chính, were persecuted during the late 1950s
Nhân Văn Giai Phẩm events and the 1960s Trial Against the Anti-Party Revisionists (Vụ
Án Xét Lại Chống Đảng) for speaking their opinions. Nevertheless, this iron grip,
together with consistent support from the Soviet Union and China, gave North Vietnam a
militaristic advantage over South Vietnam. The North Vietnamese leadership also had a
steely determination to fight, even when facing massive casualties and destruction on its
own side. The young North Vietnamese were idealistically and innocently patriotic, ready
to make the ultimate sacrifice for the "liberation of the South" and the "unification of the
motherland".

Socialism after 1975
Main article: Socialist Republic of Vietnam

After April 30, 1975, unlike the Khmer Rouge in Cambodia, the Vietnamese
Communists did not carry out a bloodbath, but most government officials and military
personnel were sent to reeducation camps. Meanwhile, many North Vietnamese soldiers
and cadres came to realize that they had been indoctrinated into believing that the South
Vietnamese people were utterly poor, exploited by imperialists and foreign capitalists,
and treated like slaves. Contrary to what they had been taught, they saw an abundance of
food and consumer goods, fashionable clothes, and plenty of books and music, things
that were hard to get in the North.

In 1976, Vietnam was officially unified and renamed the Socialist Republic of Vietnam
(SRVN), with its capital in Hà Nội. The Vietnamese Communist Party dropped its front
name "Labor Party" and changed the title of First Secretary, a term used by China, to
Secretary General, used by the Soviet Union, with Lê Duẩn as Secretary General. The
National Liberation Front was dissolved. The Party emphasised development of heavy
industry and collectivisation of agriculture. Over the next few years, private enterprises
were seized by the government and their owners were often sent to the New Economic
Zone to clear land. The farmers were coerced into state-controlled cooperatives.
Transportation of food and goods between provinces was deemed illegal except by the
government. Within a short period of time, Vietnam was hit with a severe shortage of food
and basic necessities. The Mekong Delta, once a world-class rice-producing area, was
threatened with famine.

In foreign relations, the SRVN became increasingly aligned with the Soviet Union by
joining the Council for Mutual Economic Assistance (COMECON), and signing a
Friendship Pact, which was in fact a military alliance, with the Soviet Union. Tension
between Vietnam and China mounted alongside China's rivalry with the Soviet
Union, and conflict erupted with Cambodia, China's ally. Vietnam was also subject to
trade embargoes by the U.S. and its allies.

Many of those who held high positions in the old South Vietnamese government and
military, together with influential people in the literary and religious circles, were sent to
reeducation camps, which were actually hard labor prison camps. The inhumane
conditions and treatment in the camps caused many inmates to remain bitter against the
Communist Party decades later.

The SRVN government implemented a Stalinist dictatorship of the proletariat in the
South as it had in the North. The security apparatus network (Công An) controlled
every aspect of people's lives. Censorship was strict and ultra-conservative, with most
pre-1975 works in the fields of music, art, and literature being banned. All religions had to be
re-organized into state-controlled churches. Any negative comments toward the Party, the
government, Uncle Ho, or anything related to Communism might earn the person the tag
of Phản Động (Reactionary), with consequences ranging from being harassed by police,
expelled from school or workplace, to being sent to prison. Nevertheless, the Communist
authorities failed to suppress the black market, where food, consumer goods, and banned
literature could be bought at high prices. The security apparatus also failed to stop a
nationwide clandestine network of people trying to escape the country. In many cases,
the security officers of entire districts were bribed and even got involved in organizing
the escape schemes.

These living conditions resulted in an exodus of over a million Vietnamese secretly
escaping the country either by sea or overland through Cambodia. For the people fleeing
by sea, their wooden boats were often not sea-worthy, were packed with people like
sardines, and lacked sufficient food and water. Many were caught or shot at by the
Vietnamese coast guard; many others perished at sea when their boats sank or capsized in
storms, or from starvation and thirst. Another major threat was the pirates in the Gulf of
Siam, who viciously robbed, raped, and murdered the boat people. In many cases they
massacred a whole boat. Sometimes the women were raped for days before being sold into
prostitution. The people who crossed Cambodia faced equal dangers from minefields
and from the Khmer Rouge and Khmer Serei guerrillas, who also robbed, raped, and killed the
refugees. Some were successful in fleeing the region and landed in numbers in Malaysia,
Indonesia, the Philippines, and Hong Kong, only to wind up in United Nations refugee
camps. Some famous camps were Bidong in Malaysia, Galang in Indonesia, Bataan in the
Philippines and Songkla in Thailand. Some managed to travel as far as Australia in
crowded, open boats.

While most refugees were resettled to other countries within five years, others languished
in these camps for over a decade. In the 1990s, refugees who could not find asylum were
deported back to Vietnam. Communities of Vietnamese refugees arrived in the US,
Canada, Australia, France, West Germany, and the UK. The refugees often sent relief
packages packed with necessities, such as medicines, fabrics, toothpaste, dried food and
soap to their relatives in Vietnam to help them survive. Very few would send money as it
would be exchanged far below market rates by the Vietnamese government.

Vietnamese-led forces entering Phnom Penh in 1979.

In late 1978, following repeated raids by the Pol Pot regime's Khmer Rouge into
Vietnamese territory, Vietnam sent troops to overthrow Pol Pot. The pro-Vietnamese
People's Republic of Kampuchea was created with Heng Samrin as Chairman. Pol Pot's
Khmer Rouge allied with non-Communist guerilla forces led by Norodom Sihanouk and
Son Sann to fight against the Vietnamese forces and the new Phnom Penh regime. Some
high ranking officials of the Heng Samrin regime in the early 1980s resisted Vietnamese
control, resulting in a purge that removed Pen Sovan, Prime Minister and Secretary
General of the Cambodian People's Revolutionary Party. The war lasted until 1989 when
Vietnam withdrew its troops and handed the administration of Cambodia to the United
Nations. The Vietnamese invasion of Cambodia halted the Khmer Rouge's genocide of
the Cambodian people. In early 1979, China invaded Vietnam, ostensibly to "teach
Vietnam a lesson" for the invasion of Cambodia and the alleged persecution of the Hoa
people. The Sino-Vietnamese War was brief, but casualties were high on both sides.[8]

Vietnam's third Constitution, based on that of the USSR, was written in 1980. It
declared the Communist Party to be the only party representing the people and leading
the country.

In 1980, cosmonaut Phạm Tuân became the first Vietnamese person and the first Asian to
go into space, traveling on the Soviet Soyuz 37 to service the Salyut 6 space station.
During the early 1980s, a number of overseas Vietnamese organizations were created
with the aim of overthrowing the Vietnamese Communist government through armed
struggle. Most groups attempted to infiltrate Vietnam but eventually were eliminated by
Vietnamese security and armed forces. Most notable were the organizations led by Hoàng
Cơ Minh from the US, Võ Đại Tôn from Australia, and Lê Quốc Túy from France.
Hoàng Cơ Minh was killed during an ambush in Laos. Võ Đại Tôn was captured and
imprisoned until his release in the 1990s. Lê Quốc Túy escaped to France after many of
his comrades were arrested and executed. Lê Quốc Túy later died in France from poison.

Throughout the 1980s, Vietnam received nearly $3 billion a year in economic and
military aid from the Soviet Union and conducted most of its trade with the USSR and
other COMECON (Council for Mutual Economic Assistance) countries. Some cadres,
realizing the economic suffering of the people, began to break rules and experimented
with market-oriented enterprises. Some were punished for their efforts, but years later
would be hailed as visionary pioneers.

Changing names
See also: List of Vietnamese dynasties

For most of its history, the territory of present-day Vietnam encompassed three
ethnically distinct nations: a Vietnamese nation, a Cham nation, and part of the
Khmer Empire.

The Viet nation originated in the Red River Delta in present-day north Vietnam and
expanded over its history to the current boundary. It went through many name changes,
with Đại Việt being used the longest. Below is a summary of names:

Period | Country Name | Time Frame | Boundary
Hồng Bàng Dynasty | Văn Lang | Before 258 BC | No accurate record of its boundary exists. Some legends claim that its northern boundary might have reached the Yangtze River; however, most modern history textbooks in Vietnam claim only the Red River Delta as the home of the Lạc Việt culture.
Thục Dynasty | Âu Lạc | 258 BC - 207 BC | Red River Delta and its adjoining northern and western mountain regions.
Triệu Dynasty | Nam Việt | 207 BC - 111 BC | Âu Lạc, Guangdong, and Guangxi.
Chinese Han Domination | Giao Chỉ (Jiao Zhi) | 111 BC - 544 AD | Present-day north and north-central Vietnam (southern border expanded down to the Ma River and Ca River deltas); commonly called Giao Châu.
Subsequent Chinese Dynasties | Vạn Xuân during the half-century independence of the Anterior Lý Dynasty; officially named An Nam by the Chinese Tang Dynasty from 679 CE | 544 AD - 967 AD | Same as above.
Đinh and Anterior Lê Dynasties | Đại Cồ Việt | 967 AD - 1009 AD | Same as above.
Lý and Trần Dynasties | Đại Việt | 1010 AD - 1400 AD | Southern border expanded down to the present-day Hue area.
Hồ Dynasty | Đại Ngu | 1400 AD - 1407 AD | Same as above.
Lê, Mạc, Trịnh-Nguyễn Lords, Tây Sơn Dynasty | Đại Việt | 1428 AD - 1802 AD | Gradually expanded to the boundary of present-day Vietnam.
Nguyễn Dynasty | Việt Nam | 1802 AD - 1887 AD | Present-day Vietnam plus some occupied territories in Laos and Cambodia.
French Colony | French Indochina, consisting of Cochinchina (southern Vietnam), Annam (central Vietnam), Tonkin (northern Vietnam), Cambodia, and Laos | 1887 AD - 1945 AD | Present-day Vietnam, Laos, and Cambodia.
Independence | Việt Nam (with variants such as Democratic Republic of Vietnam, State of Vietnam, Republic of Vietnam, Socialist Republic of Vietnam) | Democratic Republic of Vietnam (1945-1976), State of Vietnam (1949-1956), Republic of Vietnam (1956-1975 in South Vietnam), Socialist Republic of Vietnam (1976-present) | Present-day Vietnam.

Almost all Vietnamese dynasties are named after the ruling family's surname, unlike
Chinese dynasties, whose names were chosen by the dynasty founders and often used as
the country's name.

It is still a matter of debate whether the Hồng Bàng Dynasty was real or just a symbolic
dynasty to represent the Lạc Việt nation before recorded history. The Thục, Triệu,
Anterior Lý, Ngô, Đinh, Anterior Lê, Lý, Trần, Hồ, Lê, Mạc, Tây Sơn, and Nguyễn are
usually regarded by historians as formal dynasties. Nguyễn Huệ's "Tây Sơn Dynasty" is
rather a name created by historians to avoid confusion with Nguyễn Ánh's Nguyễn
Dynasty.

Further reading
• Hill, John E. 2003. "Annotated Translation of the Chapter on the Western Regions
according to the Hou Hanshu." 2nd Draft Edition. [1]
• Hill, John E. 2004. The Peoples of the West from the Weilue 魏略 by Yu Huan 魚
豢: A Third Century Chinese Account Composed between 239 and 265 AD. Draft
annotated English translation. [2]
• Mesny, William. 1884. Tungking. Noronha & Co., Hong Kong.
• Nguyễn Khắc Viện. 1999. Vietnam - A Long History. Hanoi: Thế Giới Publishers.
• Stevens, Keith. 1996. "A Jersey Adventurer in China: Gun Runner, Customs
Officer, and Business Entrepreneur and General in the Chinese Imperial Army.
1842-1919." Journal of the Hong Kong Branch of the Royal Asiatic Society. Vol.
32 (1992). Published in 1996.
• Francis Fitzgerald. 1972. Fire in the Lake: The Vietnamese and the Americans in
Vietnam. Little, Brown and Company.
• Hung, Hoang Duy. 2005. A Common Quest for Vietnam's Future. Viet Long
Publishing.
• The Office of the United Nations High Commissioner for Refugees. 2000. The
State of The World's Refugees 2000: Fifty Years of Humanitarian Action -
Chapter 4: Flight from Indochina (PDF). [3]
• Lê Văn Hưu & Ngô Sĩ Liên. Đại Việt Sử Ký Toàn Thư.
• Trần Trọng Kim. Việt Nam Sử Lược. Trung Tâm Học Liệu 1971.
• Phạm Văn Sơn. Việt Sử Toàn Thư.
• Taylor, Keith W. The Birth Of Vietnam.
• Trần Dân Tiên. Những Mẫu Chuyện Về Đời Hoạt Động Của Hồ Chủ Tịch.
• Văn Tiến Dũng. Đại Thắng Mùa Xuân.
• Bui Diem. In The Jaws Of History.
• Nguyen Tien Hung, Jerrold L. Schecter. The Palace File.
• Phạm Huấn. Cuộc Triệt Thoái Cao Nguyên 1975.
• Hành Trình Biển Đông Vol 1 and 2. Anthology of memoirs by Vietnamese boat
people.
• Nguyễn Khắc Ngữ. Nguồn Gốc Dân Tộc Việt Nam. Nhóm Nghiên Cứu Sử Địa.
• Văn Phố Hoàng Đống. Niên Biểu Lịch Sử Việt Nam Thời Kỳ 1945-1975. Đại Nam
2003.
• Lê Duẩn. Đề Cương Cách Mạng Miền Nam.
• Nhat Tien, Duong Phuc, Vu Thanh Thuy. Pirates in the Gulf of Siam.
• Nguyễn Văn Huy, Tìm hiểu cộng đồng người Chăm tại Việt Nam.

References
Ship
From Wikipedia, the free encyclopedia

For other uses, see Ship (disambiguation).

Italian Full rigged ship Amerigo Vespucci in New York Harbor, 1976

A ship /ʃɪp/ is a large vessel that floats on water. Ships are
generally distinguished from boats based on size. Ships may be found on lakes, seas, and
rivers and they allow for a variety of activities, such as the transport of persons or goods,
fishing, entertainment, public safety, and warfare.

Ships and boats have developed alongside mankind. In major wars and in day-to-day life,
they have become an integral part of modern commercial and military systems. Fishing
boats are used by millions of fishermen throughout the world. Military forces operate
highly sophisticated vessels to transport and support forces ashore. Commercial vessels,
nearly 35,000 in number, carried 7.4 billion tons of cargo in 2007.[1]

These vessels were also key in history's great explorations and scientific and
technological development. Navigators such as Zheng He spread such inventions as the
compass and gunpowder. Ships have been used for such purposes as colonization and the
slave trade, and have served scientific, cultural, and humanitarian needs.

As Thor Heyerdahl demonstrated with his tiny boat the Kon-Tiki, it is possible to achieve
great things with a simple log raft. From Mesolithic canoes to today's powerful nuclear-
powered aircraft carriers, ships tell the history of humankind.

Contents

• 1 Nomenclature
• 2 History
o 2.1 Prehistory and antiquity
o 2.2 Through the Renaissance
o 2.3 Specialization and modernization
o 2.4 Today
• 3 Types of ships
o 3.1 Commercial vessels
o 3.2 Military vessels
o 3.3 Fishing vessels
o 3.4 Inland and coastal boats
o 3.5 Other
• 4 Architecture
o 4.1 The hull
o 4.2 Propulsion systems
 4.2.1 Pre-mechanisation
 4.2.2 Reciprocating steam engines
 4.2.3 Steam turbines
 4.2.3.1 LNG carriers
 4.2.3.2 Nuclear-powered steam turbines
 4.2.4 Reciprocating diesel engines
 4.2.5 Gas turbines
o 4.3 Steering systems
o 4.4 Holds, compartments, and the superstructure
o 4.5 Equipment
• 5 Design considerations
o 5.1 Hydrostatics
o 5.2 Hydrodynamics
• 6 Lifecycle
o 6.1 Design
o 6.2 Construction
o 6.3 Repair and conversion
o 6.4 End of service
• 7 Measuring ships
• 8 Ship pollution
o 8.1 Oil spills
o 8.2 Ballast water
o 8.3 Exhaust emissions
• 9 See also
o 9.1 Model ships
o 9.2 Lists
• 10 Notes
• 11 References

• 12 External links

Nomenclature
Main parts of ship. 1: Smokestack or Funnel; 2: Stern; 3: Propeller and Rudder;
4: Portside (the right side is known as starboard); 5: Anchor; 6: Bulbous bow; 7: Bow;
8: Deck; 9: Superstructure
For more details on this topic, see Glossary of nautical terms.

Ships can usually be distinguished from boats based on size and the ship's ability to
operate independently for extended periods.[2] A commonly used rule of thumb is that if
one vessel can carry another, the larger of the two is a ship.[3] As dinghies are common on
sailing yachts as small as 35 feet (11 m), this rule of thumb is not foolproof. In a more
technical and now rare sense, the term ship refers to a sailing vessel with at least three
square-rigged masts and a full bowsprit.

A number of large vessels are traditionally referred to as boats. Submarines are a prime
example.[4] Other types of large vessels which are traditionally called boats are the Great
Lakes freighter, the riverboat, and the ferryboat.[citation needed] Though large enough to carry
their own boats and heavy cargoes, these vessels are designed for operation on inland or
protected coastal waters.

History
Prehistory and antiquity

A raft is among the simplest boat designs.

The history of boats parallels the human adventure. The first known boats date back to
the Neolithic Period, about 10,000 years ago. These early vessels had limited function:
they could move on water, but little else. They were used mainly for hunting and
fishing. The oldest dugout canoes found by archaeologists were often cut from coniferous
tree logs using simple stone tools.

By around 3000 BC, Ancient Egyptians already knew how to assemble planks of wood
into a ship hull.[5] They used woven straps to lash the planks together,[5] and reeds or grass
stuffed between the planks helped to seal the seams.[5] The Greek historian and
geographer Agatharchides documented seafaring among the early Egyptians:
"During the prosperous period of the Old Kingdom, between the 30th and 25th centuries
B. C., the river-routes were kept in order, and Egyptian ships sailed the Red Sea as far as
the myrrh-country."[6]

At about the same time, people living near Kongens Lyngby in Denmark invented the
segregated hull, which allowed the size of boats to gradually be increased. Boats soon
developed into keel boats similar to today's wooden pleasure craft.

The first navigators began to use animal skins or woven fabrics as sails. Affixed to the
top of a pole set upright in a boat, these sails gave early ships range, allowing people to
explore widely and enabling, for example, the settlement of Oceania about 3,000 years ago.

The ancient Egyptians were perfectly at ease building sailboats. A remarkable example of
their shipbuilding skills was the Khufu ship, a vessel 143 feet (44 m) in length entombed
at the foot of the Great Pyramid of Giza around 2,500 BC and found intact in 1954.
According to Herodotus, the Egyptians made the first circumnavigation of Africa around
600 BC.

The Phoenicians and Greeks gradually mastered navigation at sea aboard triremes,
exploring and colonizing the Mediterranean via ship. Around 340 BC, the Greek
navigator Pytheas of Massalia ventured from Greece to Western Europe and Great
Britain.[7]

Before the introduction of the compass, celestial navigation was the main method for
navigation at sea. In China, early versions of the magnetic compass were being developed
and used in navigation between 1040 and 1117.[8] The true mariner's compass, using a
pivoting needle in a dry box, was invented in Europe no later than 1300.[9][10]

Through the Renaissance

The carrack Santa María of Christopher Columbus
Until the Renaissance, navigational technology remained comparatively primitive. This
absence of technology didn't prevent some civilizations from becoming sea powers.
Examples include the maritime republics of Genoa and Venice, and the Byzantine navy.
The Vikings used their knarrs to explore North America, trade in the Baltic Sea and
plunder many of the coastal regions of Western Europe.

Towards the end of the fourteenth century, ships like the carrack began to develop towers
on the bow and stern. These towers decreased the vessel's stability, and in the fifteenth
century, caravels became more widely used. The towers were gradually replaced by the
forecastle and sterncastle, as in the carrack Santa María of Christopher Columbus. This
increased freeboard allowed another innovation: the freeing port, and the artillery
associated with it.

A Japanese atakebune from the 16th century

In the sixteenth century, the use of freeboard and freeing ports became widespread on
galleons. The English modified their vessels to maximize their firepower and
demonstrated the effectiveness of their doctrine, in 1588, by defeating the Spanish
Armada.

At this time, ships were developing in Asia in much the same way as in Europe. Japan used
defensive naval techniques in the Mongol invasions of Japan in 1281. It is likely that the
Mongols of the time took advantage of both European and Asian shipbuilding techniques.
In Japan, during the Sengoku era from the fifteenth to seventeenth century, the great
struggle for feudal supremacy was fought, in part, by coastal fleets of several hundred
boats, including the atakebune.

Fifty years before Christopher Columbus, Chinese navigator Zheng He traveled the world
at the head of what was for the time a huge armada. The largest of his ships had nine
masts, were 130 metres (430 ft) long and had a beam of 55 metres (180 ft). His fleet
carried 30,000 men aboard 70 vessels, with the goal of bringing glory to the Chinese
emperor.

Specialization and modernization
The British Temeraire and French ships Redoutable and Bucentaure at the Battle of
Trafalgar

Parallel to the development of warships, ships in service of marine fishery and trade also
developed in the period between antiquity and the Renaissance. Still primarily a coastal
endeavor, fishing was largely practiced by individuals of modest means using small
boats.

Maritime trade was driven by the development of shipping companies with significant
financial resources. Canal barges, towed by draft animals on an adjacent towpath,
contended with the railway up to and past the early days of the industrial revolution. Flat-
bottomed and flexible scow boats also became widely used for transporting small
cargoes. Mercantile trade went hand-in-hand with exploration, which financed itself
through the commercial benefits it generated.

During the first half of the eighteenth century, the French Navy began to develop a new
type of vessel, featuring seventy-four guns. This type of ship became the backbone of all
European fighting fleets. These ships were 56 metres (180 ft) long and their construction
required 2,800 oak trees and 40 kilometres (25 mi) of rope; they carried a crew of about
800 sailors and soldiers.

A small pleasure boat and a tugboat in Rotterdam

Ship designs stayed fairly unchanged until the late nineteenth century. The industrial
revolution, new mechanical methods of propulsion, and the ability to construct ships from
metal triggered an explosion in ship design. Factors including the quest for more efficient
ships, the end of long running and wasteful maritime conflicts, and the increased
financial capacity of industrial powers created an avalanche of more specialized boats
and ships. Ships built for entirely new functions, such as firefighting, rescue, and
research, also began to appear.
In light of this, classification of vessels by type or function can be difficult. Even using
very broad functional classifications such as fishery, trade, military, and exploration fails
to classify most of the old ships. This difficulty is increased by the fact that terms
such as sloop and frigate are applied to old and new ships alike, and modern
vessels often have little in common with their predecessors.

Today

In 2007, the world's fleet included 34,882 commercial vessels with gross tonnage of more
than 1,000 tons,[11] totaling 1.04 billion tons.[1] These ships carried 7.4 billion tons of
cargo in 2006, a sum that grew by 8% over the previous year.[1] In terms of tonnage, 39%
of these ships were tankers, 26% bulk carriers, 17% container ships, and 15%
other types.[1]

In 2002, there were 1,240 warships operating in the world, not counting small vessels
such as patrol boats. The United States accounted for 3 million tons worth of these
vessels, Russia 1.35 million tons, the United Kingdom 504,660 tons and China 402,830
tons. The twentieth century saw many naval engagements during the two world wars, the
Cold War, and the rise to power of naval forces of the two blocs. The world's major
powers have recently used their naval power in cases such as the United Kingdom in the
Falkland Islands and the United States in Iraq.

The harbor at Fuglafjørður, Faroe Islands shows seven typical Faroe boats used for
fishing.

The size of the world's fishing fleet is more difficult to estimate. The largest of these are
counted as commercial vessels, but the smallest are legion. Fishing vessels can be found
in most seaside villages in the world. As of 2004, the United Nations Food and
Agriculture Organization estimated 4 million fishing vessels were operating worldwide.
[12]
The same study estimated that the world's 29 million fishermen[13] caught 85.8 million
metric tons of fish and shellfish that year.[14]

Types of ships
Ships are difficult to classify, mainly because there are so many criteria on which a
classification can be based. One classification is based on propulsion: ships are
categorised as either sailing ships or motorships. Sailing ships are propelled solely by
means of sails. Motorships are propelled by mechanical means; they include
vessels that use both sails and mechanical propulsion.

Other classification systems exist that use criteria such as:

• The number of hulls, giving categories like monohull, catamaran, trimaran.
• The shape and size, giving categories like dinghy, keelboat, and icebreaker.
• The building materials used, giving steel, aluminum, wood, fiberglass, and plastic.
• The type of propulsion system used, giving human-propelled, mechanical, and
sails.
• The epoch in which the vessel was used, giving categories like the triremes of
Ancient Greece and the men-of-war of the eighteenth century.
• The geographic origin of the vessel; many vessels are associated with a particular
region, such as the pinnace of Northern Europe, the gondolas of Venice, and the
junks of China.
• The manufacturer, series, or class.

Another way to categorize ships and boats is based on their use, as described by Paulet
and Presles.[15] This system includes military ships, commercial vessels, fishing boats,
pleasure craft and competitive boats. In this section, ships are classified using the first
four of those categories, and adding a section for lake and river boats, and one for vessels
which fall outside these categories.

Commercial vessels

Commercial vessels or merchant ships can be divided into three broad categories: cargo
ships, passenger ships, and special-purpose ships.[16] Cargo ships transport dry and liquid
cargo. Dry cargo can be transported in bulk by bulk carriers, packed directly onto a
general cargo ship in break-bulk, packed in shipping containers as aboard a container
ship, or driven aboard as in roll-on roll-off ships. Liquid cargo is generally carried in bulk
aboard tankers, such as oil tankers, chemical tankers and LNG tankers.

Passenger ships range in size from small river ferries to giant cruise ships. This type of
vessel includes ferries, which move passengers and vehicles on short trips; ocean liners,
which carry passengers on one-way trips; and cruise ships, which typically transport
passengers on round-trip voyages promoting leisure activities onboard and in the ports
they visit.

Special-purpose vessels are not used for transport but are designed to perform other
specific tasks. Examples include tugboats, pilot boats, rescue boats, cable ships, research
vessels, survey vessels, and ice breakers.
Most commercial vessels have full hull-forms to maximize cargo capacity.[citation needed]
Hulls are usually made of steel, although aluminum can be used on faster craft, and
fiberglass on the smallest service vessels.[citation needed] Commercial vessels generally have a
crew headed by a captain, with deck officers and marine engineers on larger vessels.
Special-purpose vessels often have specialized crew if necessary, for example scientists
aboard research vessels. Commercial vessels are typically powered by a single propeller
driven by a diesel engine.[citation needed] Vessels which operate at the higher end of the speed
spectrum may use pump-jet engines or sometimes gas turbine engines.[citation needed]

Two modern container ships in San Francisco · A ferry in Hong Kong · The research vessel
Pourquoi pas? at Brest, France · A pilot boat near the port of Rotterdam

[edit] Military vessels

There are many types of naval vessels, both current and historical. Modern naval
vessels can be broken down into three categories: warships, submarines, and support and
auxiliary vessels.

Modern warships are generally divided into seven main categories, which are: aircraft
carriers, cruisers, destroyers, frigates, corvettes, submarines and amphibious assault
ships. Battleships form an eighth category, but none are in current service with any
navy in the world.[17]

Most military submarines are either attack submarines or ballistic missile submarines.
Until World War II, the primary roles of the diesel/electric submarine were anti-ship
warfare, inserting and removing covert agents and military forces, and intelligence-gathering.
With the development of the homing torpedo, better sonar systems, and nuclear
propulsion, submarines also became able to effectively hunt each other. The development
of submarine-launched nuclear missiles and submarine-launched cruise missiles gave
submarines a substantial and long-ranged ability to attack both land and sea targets with a
variety of weapons ranging from cluster bombs to nuclear weapons.

Most navies also include many types of support and auxiliary vessels, such as
minesweepers, patrol boats, offshore patrol vessels, replenishment ships, and hospital
ships, which are designated medical treatment facilities.[18]

Combat vessels like cruisers and destroyers usually have fine hulls to maximize speed
and maneuverability.[19] They also usually have advanced electronics and communication
systems, as well as weapons.
American aircraft carrier Harry S. Truman and a replenishment ship · American battleship
USS Iowa fires an artillery salvo · French landing craft Rapière near Toulon

[edit] Fishing vessels

Fishing vessels are a subset of commercial vessels, but they are generally small and often
subject to different regulations and classification. They can be categorized by several
criteria: architecture, the type of fish they catch, the fishing method used, geographical
origin, and technical features such as rigging. As of 2004, the world's fishing fleet
consisted of some 4 million vessels.[12] Of these, 1.3 million were decked vessels with
enclosed areas and the rest were open vessels.[12] Most decked vessels were mechanized,
but two-thirds of the open vessels were traditional craft propelled by sails and oars.[12]
More than 60% of all existing large fishing vessels[20] were built in Japan, Peru, the
Russian Federation, Spain or the United States of America.[21]

Fishing boats are generally small, often little more than 30 metres (98 ft) long, though
a large tuna or whaling ship may reach 100 metres (330 ft). Aboard a fish processing vessel, the
catch can be made ready for market and sold more quickly once the ship makes port.
Special purpose vessels have special gear. For example, trawlers have winches and arms,
stern-trawlers have a rear ramp, and tuna seiners have skiffs.

In 2004, 85.8 million metric tons of fish were caught in the marine capture fishery.[22]
Anchoveta represented the largest single catch at 10.7 million metric tons.[22] That year,
the top ten marine capture species also included Alaska pollock, Blue whiting, Skipjack
tuna, Atlantic herring, Chub mackerel, Japanese anchovy, Chilean jack mackerel,
Largehead hairtail, and Yellowfin tuna.[22] Other species including salmon, shrimp,
lobster, clams, squid and crab, are also commercially fished.

Modern commercial fishermen use many methods. One is fishing by nets, such as purse
seines, beach seines, lift nets, gillnets, or entangling nets. Another is trawling,
including the bottom trawl. Hooks and lines are used in methods like long-line fishing
and hand-line fishing. Another method is the use of fishing traps.

Fishing boat in Cap-Haïtien, Haïti · A trawler at Saint-Nazaire · An oyster boat at
La Trinité-sur-Mer · The Albatun Dos, a tuna boat at work near Victoria, Seychelles

[edit] Inland and coastal boats

Many types of boats and ships are designed for inland and coastal waterways. These are
the vessels that trade upon the lakes, rivers and canals.

Barges are a prime example of inland vessels. Flat-bottomed boats built to transport
heavy goods, most barges are not self-propelled and must be moved by tugboats towing
them or towboats pushing them. Barges towed along canals by draft animals on an adjacent
towpath competed with the railway in the early industrial revolution but were outcompeted
in the carriage of high-value items by the higher speed, falling costs, and route
flexibility of rail transport.

Riverboats and inland ferries are specially designed to carry passengers, cargo, or both in
the challenging river environment. Rivers present special hazards: they usually have
varying water flows that alternately produce fast currents or expose protruding rock
hazards. Changing siltation patterns may cause the sudden appearance of shoal waters,
and floating or sunken logs and trees (called snags) can endanger the hulls
and propulsion of riverboats. Riverboats are generally of shallow draft, being broad of
beam and rather square in plan, with a low freeboard and high topsides. Riverboats can
survive with this type of configuration as they do not have to withstand the high winds or
large waves that are seen on large lakes, seas, or oceans.

Lake freighters, also called lakers, are cargo vessels that ply the Great Lakes. The best
known is the SS Edmund Fitzgerald, the last major vessel to be wrecked on the Lakes.
These vessels are traditionally called boats, not ships. Visiting ocean-going
vessels are called "salties." Due to their additional beam, very large salties are never seen
inland of the Saint Lawrence Seaway. Because the largest of the Soo Locks is larger than
any Seaway lock, salties that can pass through the Seaway may travel anywhere in the
Great Lakes. Because of their deeper draft, salties may accept partial loads on the Great
Lakes, "topping off" when they have exited the Seaway. Similarly, the largest lakers are
confined to the Upper Lakes (Superior, Michigan, Huron, Erie) because they are too large
to use the Seaway locks, beginning at the Welland Canal that bypasses the Niagara River.

Since the freshwater lakes are less corrosive to ships than the salt water of the oceans,
lakers tend to last much longer than ocean freighters. Lakers older than 50 years are not
unusual; as of 2005, every laker in service was over 20 years of age.[23]

The St. Mary's Challenger, built in 1906 as the William P. Snyder, is the oldest laker still
working on the Lakes. Similarly, the E.M. Ford, built in 1898 as the Presque Isle, was
sailing the lakes 98 years later in 1996. As of 2007 the Ford was still afloat as a
stationary transfer vessel at a riverside cement silo in Saginaw, Michigan.
Riverboat Natchez on the Mississippi River · Commuter boat on the Seine · Riverboat
Temptation on the Rhine · The lake freighter SS Edmund Fitzgerald

[edit] Other

The wide variety of vessels at work on the earth's waters defies any simple classification
scheme. A representative few that fail to fit into the above categories include:

• Historical boats, frequently used as museum ships, training ships, or as good-will
ambassadors of a country abroad.
• Houseboats, floating structures used as dwellings.
• Scientific, technical, and industrial vessels such as mobile offshore drilling units,
offshore wind farms, survey ships, and research vessels.
• Submarines, for underwater navigation and exploration.

The Polish sailing frigate Dar Pomorza · A houseboat near Kerala · A mobile offshore
drilling unit in the Gulf of Mexico · A bathyscaphe at the oceanographic museum in Monaco

[edit] Architecture
Further information: Naval architecture

Some components exist in vessels of any size and purpose. Every vessel has a hull of
sorts. Every vessel has some sort of propulsion, whether it's a pole, an ox, or a nuclear
reactor. Most vessels have some sort of steering system. Other characteristics are
common, but not as universal, such as compartments, holds, a superstructure, and
equipment such as anchors and winches.

[edit] The hull
A ship's hull endures harsh conditions at sea, as illustrated by this reefer ship in bad
weather.

For a ship to float, its weight must be less than that of the water displaced by the ship's
hull. There are many types of hulls, from logs lashed together to form a raft to the
advanced hulls of America's Cup sailboats. A vessel may have a single hull (called a
monohull design), two in the case of catamarans, or three in the case of trimarans.
Vessels with more than three hulls are rare, but some experiments have been conducted
with designs such as pentamarans. Multiple hulls are generally parallel to each other and
connected by rigid arms.

Hulls have several elements. The bow is the foremost part of the hull. Many ships feature
a bulbous bow. The keel is at the very bottom of the hull, extending the entire length of
the ship. The rear part of the hull is known as the stern, and many hulls have a flat back
known as a transom. Common hull appendages include propellers for propulsion, rudders
for steering, and stabilizers to quell a ship's rolling motion. Other hull features can be
related to the vessel's work, such as fishing gear and sonar domes.

Hulls are subject to various hydrostatic and hydrodynamic constraints. The key
hydrostatic constraint is that the hull must be able to support the entire weight of the
boat and maintain stability even with often unevenly distributed weight. Hydrodynamic
constraints include the ability to withstand shock waves, weather, collisions, and groundings.

Older ships and pleasure craft often have or had wooden hulls. Steel is used for most
commercial vessels. Aluminium is frequently used for fast vessels, and composite
materials are often found in sailboats and pleasure craft. Some ships have been made with
concrete hulls.

[edit] Propulsion systems
A fishing boat uses a traditional propulsion system in Mozambique

The turbosail, a hybrid propulsion system invented by Jacques-Yves Cousteau

Propulsion systems for ships and boats vary from the simple paddle to the largest diesel
engines in the world. These systems fall into three categories: human propulsion, sailing,
and mechanical propulsion. Human propulsion includes the pole, still widely used in
marshy areas, rowing, which was used even on large galleys, and pedals. In modern
times, human propulsion is found mainly on small boats or as auxiliary propulsion on
sailboats.

Propulsion by sail generally consists of a sail hoisted on an erect mast, supported by stays
and spars and controlled by ropes. Sail systems were the dominant form of propulsion
until the nineteenth century. They are now generally used for recreation and racing,
although experimental sail systems, such as kites, turbosails, rotorsails, wingsails,
and SkySails' kite system, have been used on larger modern vessels for fuel savings.

Mechanical propulsion systems generally consist of a motor or engine turning a propeller.
Steam engines were first used for this purpose, but have mostly been replaced by two-
stroke or four-stroke diesel engines, outboard motors, and gas turbine engines on faster
ships. Electric motors have sometimes been used, such as on submarines. Nuclear
reactors are sometimes employed to propel warships and icebreakers.

There are many variations of propeller systems, including twin, contra-rotating,
controllable-pitch, and nozzle-style propellers. Smaller vessels tend to have a single
propeller. Aircraft carriers use up to four propellers, supplemented with bow- and stern-
thrusters. Power is transmitted from the engine to the propeller by way of a propeller
shaft, which may or may not be connected to a gearbox.

[edit] Pre-mechanisation
Ships of the world in 1460, according to the Fra Mauro map

Until the application of the steam engine to ships in the early 19th century, oars propelled
galleys, or the wind propelled sailing ships. Before mechanisation, merchant ships always
used sail, but as long as naval warfare depended on ships closing to ram or to fight hand-
to-hand, galleys dominated in marine conflicts because of their maneuverability and
speed. The Greek navies that fought in the Peloponnesian War used triremes, as did the
Romans at the Battle of Actium. The use of large numbers of cannon from the 16th
century meant that maneuverability took second place to broadside weight; this led to the
dominance of the sail-powered warship.

[edit] Reciprocating steam engines

The development of piston-engined steamships was a complex process. Early steamships
were fueled by wood, later ones by coal or fuel oil. Early ships used stern or side paddle
wheels, while later ones used screw propellers.

The first commercial success accrued to Robert Fulton's North River Steamboat (often
called Clermont) in the US in 1807, followed in Europe by the 45-foot Comet of 1812.
Steam propulsion progressed considerably over the rest of the 19th century. Notable
developments included the steam surface condenser, which eliminated the use of sea
water in the ship's boilers. This permitted higher steam pressures, and thus the use of
higher efficiency multiple expansion (compound) engines. As the means of transmitting
the engine's power, paddle wheels gave way to more efficient screw propellers.

[edit] Steam turbines
SS Ukkopekka uses a triple expansion steam engine

Steam turbines were fueled by coal or, later, fuel oil or nuclear power. The marine steam
turbine developed by Sir Charles Algernon Parsons raised the power to weight ratio. He
achieved publicity by demonstrating it unofficially in the 100-foot Turbinia at the
Spithead naval review in 1897. This facilitated a generation of high-speed liners in the
first half of the 20th century and rendered the reciprocating steam engine obsolete, first in
warships and later in merchant vessels.

In the early 20th century, heavy fuel oil came into more general use and began to replace
coal as the fuel of choice in steamships. Its great advantages were convenience, reduced
manning due to removing the need for trimmers and stokers, and reduced space needed
for fuel bunkers.

In the second half of the 20th century, rising fuel costs almost led to the demise of the
steam turbine. Most new ships since around 1960 have been built with diesel engines.
The last major passenger ship built with steam turbines was the Fairsky, launched in
1984. Many steam ships were also re-engined to improve fuel efficiency. One high-profile
example was the 1968-built Queen Elizabeth 2, which had her steam turbines replaced with
a diesel-electric propulsion plant in 1986.

Most new-build ships with steam turbines are specialist vessels such as nuclear-powered
vessels, and certain merchant vessels (notably Liquefied Natural Gas (LNG) and coal
carriers) where the cargo can be used as bunker fuel.

[edit] LNG carriers

New LNG carriers (a high growth area of shipping) continue to be built with steam
turbines. The natural gas is stored in a liquid state in cryogenic vessels aboard these
ships, and a small amount of 'boil off' gas is needed to maintain the pressure and
temperature inside the vessels within operating limits. The 'boil off' gas provides the fuel
for the ship's boilers, which provide steam for the turbines; this is the simplest way to deal with
the gas. Technology to operate internal combustion engines (modified marine two-stroke
diesel engines) on this gas has improved, however, so such engines are starting to appear
in LNG carriers; with their greater thermal efficiency, less gas is burnt. Developments
have also been made in the process of re-liquefying 'boil off' gas, letting it be returned to
the cryogenic tanks. The financial returns on LNG are potentially greater than the cost of
the marine-grade fuel oil burnt in conventional diesel engines, so the re-liquefaction
process is starting to be used on diesel engine propelled LNG carriers. Another factor
driving the change from turbines to diesel engines for LNG carriers is the shortage of
steam turbine qualified seagoing engineers. With the lack of turbine powered ships in
other shipping sectors, and the rapid rise in size of the worldwide LNG fleet, not enough
have been trained to meet the demand. It may be that the days are numbered for marine
steam turbine propulsion systems, even though all but sixteen of the orders for new LNG
carriers at the end of 2004 were for steam turbine propelled ships.[24]
The NS Savannah was the first nuclear-powered cargo-passenger ship

[edit] Nuclear-powered steam turbines

In these vessels, the reactor heats steam to drive the turbines. Partly due to concerns
about safety and waste disposal, nuclear propulsion is rare except in specialist vessels. In
large aircraft carriers, the space formerly used for ship's bunkerage could be used instead
to bunker aviation fuel. In submarines, the ability to run submerged at high speed and in
relative quiet for long periods holds obvious advantages. A few cruisers have also
employed nuclear power; as of 2006, the only ones remaining in service are the Russian
Kirov class. An example of a non-military ship with nuclear marine propulsion is the
Arktika class icebreaker with 75,000 shaft horsepower. Commercial experiments such as
the NS Savannah proved uneconomical compared with conventional propulsion.

[edit] Reciprocating diesel engines

A modern diesel engine aboard a cargo ship

A bird's eye view of a ship's engineroom

About 99% of modern ships use diesel reciprocating engines[citation needed]. The rotating
crankshaft can power the propeller directly for slow speed engines, via a gearbox for
medium and high speed engines, or via an alternator and electric motor in diesel-electric
vessels.

The reciprocating marine diesel engine first came into use in 1903 when the diesel
electric river tanker Vandal was put in service by Branobel. Diesel engines soon offered
greater efficiency than the steam turbine, but for many years had an inferior power-to-
space ratio.

Diesel engines today are broadly classified according to

• Their operating cycle: two-stroke or four-stroke
• Their construction: Crosshead, trunk, or opposed piston
• Their speed
o Slow speed: any engine with a maximum operating speed up to 300
revs/minute, although most large 2-stroke slow speed diesel engines
operate below 120 revs/minute. Some very long stroke engines have a
maximum speed of around 80 revs/minute. The largest, most powerful
engines in the world are slow speed, two stroke, crosshead diesels.
o Medium speed: any engine with a maximum operating speed in the range
300-900 revs/minute. Many modern 4-stroke medium speed diesel engines
have a maximum operating speed of around 500 rpm.
o High speed: any engine with a maximum operating speed above 900
revs/minute.
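The rpm bands above map directly onto a small classification function. The following Python sketch is illustrative only; the function name is invented, and assigning engines at exactly 300 or 900 revs/minute to the lower band is an assumption, since the text leaves those boundaries open:

```python
def classify_engine_speed(max_rpm: float) -> str:
    """Classify a marine diesel engine by its maximum operating speed,
    following the bands described above. Boundary values (300, 900)
    are assigned to the lower band, an assumption."""
    if max_rpm <= 300:
        return "slow speed"
    if max_rpm <= 900:
        return "medium speed"
    return "high speed"

# A large two-stroke crosshead engine running at 102 revs/minute:
print(classify_engine_speed(102))    # slow speed
# A typical modern four-stroke at around 500 rpm:
print(classify_engine_speed(500))    # medium speed
```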

Most modern larger merchant ships use either slow speed, two stroke, crosshead engines,
or medium speed, four stroke, trunk engines. Some smaller vessels may use high speed
diesel engines.

The size of the different types of engines is an important factor in selecting what will be
installed in a new ship. Slow speed two-stroke engines are much taller, but their
footprint, in length and width, is smaller than that needed for four-stroke medium speed diesel
engines. As space higher up in passenger ships and ferries is at a premium, these ships
tend to use multiple medium speed engines resulting in a longer, lower engine room than
that needed for two-stroke diesel engines. Multiple engine installations also give
redundancy in the event of mechanical failure of one or more engines and greater
efficiency over a wider range of operating conditions.

As modern ships' propellers are at their most efficient at the operating speed of most slow
speed diesel engines, ships with these engines do not generally need gearboxes. Usually
such propulsion systems consist of either one or two propeller shafts each with its own
direct drive engine. Ships propelled by medium or high speed diesel engines may have
one or two (sometimes more) propellers, commonly with one or more engines driving
each propeller shaft through a gearbox. Where more than one engine is geared to a single
shaft, each engine will most likely drive through a clutch, allowing engines not being
used to be disconnected from the gearbox while others keep running. This arrangement
lets maintenance be carried out while under way, even far from port.
[edit] Gas turbines

Many warships built since the 1960s have used gas turbines for propulsion, as have a few
passenger ships, like the jetfoil. Gas turbines are commonly used in combination with
other types of engine. Most recently, the Queen Mary 2 has had gas turbines installed in
addition to diesel engines. Due to their poor thermal efficiency at low power (cruising)
output, it is common for ships using them to have diesel engines for cruising, with gas
turbines reserved for when higher speeds are needed. Some warships and a few modern
cruise ships have also used steam turbines to improve the efficiency of their gas
turbines in a combined cycle, where waste heat from a gas turbine exhaust is used to
boil water and create steam for driving a steam turbine. In such combined cycles, thermal
efficiency can be the same or slightly greater than that of diesel engines alone; however,
the grade of fuel needed for these gas turbines is far more costly than that needed for the
diesel engines, so the running costs are still higher.

[edit] Steering systems

The rudder and propeller on a newly built ferry

On boats with simple propulsion systems, such as paddles, a steering system may not be
necessary. In more advanced designs, such as boats propelled by engines or sails, a
dedicated steering system becomes essential. The most common is the rudder, a submerged
plane located at the rear of the hull. Rudders are rotated to generate a lateral force which
turns the boat. Rudders can be rotated by a tiller, manual wheels, or electro-hydraulic
systems. Autopilot systems combine mechanical rudders with navigation systems. Ducted
propellers are sometimes used for steering.

Some propulsion systems are inherently steering systems. Examples include the outboard
motor, the bow thruster, and the Z-drive. Some sails, such as jibs and the mizzen sail on a
ketch rig, are used more for steering than propulsion.

[edit] Holds, compartments, and the superstructure

Larger boats and ships generally have multiple decks and compartments. Separate
berthings and heads are found on sailboats over about 25 feet (7.6 m). Fishing boats and
cargo ships typically have one or more cargo holds. Most larger vessels have an engine
room, a galley, and various compartments for work. Tanks are used to store fuel, engine
oil, and fresh water. Ballast tanks are equipped to change a ship's trim and modify its
stability.

Superstructures are found above the main deck. On sailboats, these are usually very low.
On modern cargo ships, they are almost always located near the ship's stern. On
passenger ships and warships, the superstructure generally extends far forward.

[edit] Equipment

Shipboard equipment varies from ship to ship depending on such factors as the ship's era,
design, area of operation, and purpose. Some types of equipment that are widely found
include:

• Masts can be the home of antennas, navigation lights, radar transponders, fog
signals, and similar devices often required by law.
• Ground tackle includes equipment such as mooring winches, windlasses, and
anchors. Anchors are used to moor ships in shallow water. They are connected to
the ship by a rope or chain. On larger vessels, the chain runs through a hawsepipe.
• Cargo equipment such as cranes and cargo booms are used to load and unload
cargo and ship's stores.
• Safety equipment such as lifeboats, liferafts, fire extinguishers, and survival suits
are carried aboard many vessels for emergency use.

[edit] Design considerations
[edit] Hydrostatics

Some vessels, like the LCAC, can operate in a non-displacement mode.

Boats and ships are kept on (or slightly above) the water in three ways:

• For most vessels, known as displacement vessels, the vessel's weight is offset by
that of the water displaced by the hull.
• For planing boats and hydrofoils, the lift developed by the movement of the hull
or foil through the water increases with the vessel's speed, until the vessel
planes or becomes foilborne.
• For non-displacement craft such as hovercraft and air-cushion vehicles, the vessel
is suspended over the water by a cushion of high-pressure air it projects
downwards against the surface of the water.

A vessel is in equilibrium when the upwards and downwards forces are of equal
magnitude. As a vessel is lowered into the water its weight remains constant but the
corresponding weight of water displaced by its hull increases. When the two forces are
equal, the boat floats. If weight is evenly distributed throughout the vessel, it floats
without trim or heel.
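The equilibrium just described is Archimedes' principle, and can be sketched numerically. In this Python sketch the function names and the seawater density value are illustrative assumptions, not taken from the article:

```python
RHO_SEAWATER = 1025.0  # kg/m^3, a typical value (assumption)

def floats(vessel_mass_kg: float, hull_volume_m3: float,
           rho_water: float = RHO_SEAWATER) -> bool:
    """A vessel floats if the water its fully submerged hull would
    displace weighs more than the vessel itself."""
    return rho_water * hull_volume_m3 > vessel_mass_kg

def submerged_fraction(vessel_mass_kg: float, hull_volume_m3: float,
                       rho_water: float = RHO_SEAWATER) -> float:
    """Fraction of hull volume below the waterline at equilibrium,
    where the weight of displaced water equals the vessel's weight."""
    return vessel_mass_kg / (rho_water * hull_volume_m3)

# A 500-tonne vessel with a 1,000 m^3 hull floats a bit less than
# half submerged:
print(floats(500_000, 1_000.0))                        # True
print(round(submerged_fraction(500_000, 1_000.0), 3))  # 0.488
```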

A vessel's stability is considered in both this hydrostatic sense as well as a hydrodynamic
sense, when subjected to movement, rolling and pitching, and the action of waves and
wind. Stability problems can lead to excessive pitching and rolling, and eventually
capsizing and sinking.

[edit] Hydrodynamics

A system of waves forms as Dona Delfina gains speed and begins to plane.

The advance of a vessel through water is resisted by the water. This resistance can be
broken down into several components, the main ones being the friction of the water on
the hull and wave making resistance. To reduce resistance and therefore increase the
speed for a given power, it is necessary to reduce the wetted surface and use submerged
hull shapes that produce low amplitude waves. To do so, high-speed vessels are often
more slender, with fewer or smaller appendages. The friction of the water is also reduced
by regular maintenance of the hull to remove the sea creatures and algae that accumulate
there. Antifouling paint is commonly used to assist in this. Advanced designs such as the
bulbous bow assist in decreasing wave resistance.
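For the frictional component, one standard estimate (not taken from this article) is the ITTC 1957 model-ship correlation line, which ties the friction coefficient to the Reynolds number of the flow along the hull. A Python sketch, with an assumed seawater viscosity and hypothetical function names:

```python
import math

NU_SEAWATER = 1.19e-6  # kinematic viscosity of seawater near 15 C, m^2/s (assumed)

def ittc_friction_coefficient(speed_ms: float, length_m: float,
                              nu: float = NU_SEAWATER) -> float:
    """ITTC 1957 correlation line: C_F = 0.075 / (log10(Re) - 2)^2,
    with Reynolds number Re = V * L / nu."""
    reynolds = speed_ms * length_m / nu
    return 0.075 / (math.log10(reynolds) - 2.0) ** 2

def friction_resistance_newtons(speed_ms: float, length_m: float,
                                wetted_area_m2: float,
                                rho: float = 1025.0) -> float:
    """Frictional resistance R_F = 1/2 * rho * V^2 * S * C_F."""
    cf = ittc_friction_coefficient(speed_ms, length_m)
    return 0.5 * rho * speed_ms ** 2 * wetted_area_m2 * cf
```

Note that C_F falls only slowly as the Reynolds number grows, while the V² term dominates: doubling speed roughly quadruples frictional resistance, which is one reason reducing wetted surface matters so much for high-speed vessels.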

A simple way of considering wave-making resistance is to look at the hull in relation to
its wake. At speeds lower than the wave propagation speed, the wave rapidly dissipates to
the sides. As the hull approaches the wave propagation speed, however, the wake at the
bow begins to build up faster than it can dissipate, and so it grows in amplitude. Since the
water is not able to "get out of the way of the hull fast enough", the hull, in essence, has
to climb over or push through the bow wave. This results in an exponential increase in
resistance with increasing speed.

This hull speed is found by the formula:

    hull speed (knots) ≈ 1.34 × √(waterline length in feet)

Or, in metric units:

    hull speed (knots) ≈ 2.43 × √(waterline length in metres)

When the vessel exceeds a speed/length ratio of 0.94, it starts to outrun most of its bow
wave, and the hull actually settles slightly in the water as it is now only supported by two
wave peaks. As the vessel exceeds a speed/length ratio of 1.34, the hull speed, the
wavelength is now longer than the hull, and the stern is no longer supported by the wake,
causing the stern to squat, and the bow rise. The hull is now starting to climb its own bow
wave, and resistance begins to increase at a very high rate. While it is possible to drive a
displacement hull faster than a speed/length ratio of 1.34, it is prohibitively expensive to
do so. Most large vessels operate at speed/length ratios well below that level, at
speed/length ratios of under 1.0.
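The hull-speed relation at a speed/length ratio of 1.34 is easy to compute. A small Python sketch with hypothetical function names; the metric constant 2.43 simply folds in the feet-to-metres conversion (1.34 × √3.2808 ≈ 2.43):

```python
import math

def hull_speed_knots(waterline_length_ft: float) -> float:
    """Hull speed in knots for a displacement hull: the speed at which
    the bow-wave wavelength equals the waterline length (ratio 1.34)."""
    return 1.34 * math.sqrt(waterline_length_ft)

def hull_speed_knots_metric(waterline_length_m: float) -> float:
    """Same relation with waterline length in metres."""
    return 2.43 * math.sqrt(waterline_length_m)

# A vessel with a 100 ft waterline has a hull speed of about 13.4 knots:
print(round(hull_speed_knots(100.0), 1))   # 13.4
```

For the same 100 ft waterline, the 0.94 ratio at which the hull begins to outrun its bow wave corresponds to about 9.4 knots.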

Vessels move in six degrees of freedom: translation along three axes (1. heave, 2. sway, 3. surge) and rotation about three axes (4. yaw, 5. pitch, 6. roll)

For large projects with adequate funding, hydrodynamic resistance can be tested
experimentally in a hull testing pool or using tools of computational fluid dynamics.

Vessels are also subject to ocean surface waves and sea swell as well as effects of wind
and weather. These movements can be stressful for passengers and equipment, and must
be controlled if possible. The rolling movement can be controlled, to an extent, by
ballasting or by devices such as fin stabilizers. Pitching movement is more difficult to
limit and can be dangerous if the bow submerges in the waves, a phenomenon called
pounding. Sometimes, ships must change course or speed to stop violent rolling or
pitching.

[edit] Lifecycle
A ship will pass through several stages during its career. The first is usually an initial
contract to build the ship, the details of which can vary widely based on relationships
between the shipowners, operators, designers and the shipyard. Next comes the design
phase, carried out by a naval architect. Then the ship is constructed in a shipyard. After
construction, the vessel is launched and goes into service. Ships end their careers in a
number of ways, ranging from shipwrecks to service as a museum ship to the scrapyard.

Lines plan for the hull of a basic cargo ship

[edit] Design

A vessel's design starts with a specification, which a naval architect uses to create a
project outline, assess required dimensions, and create a basic layout of spaces and a
rough displacement. After this initial rough draft, the architect can create an initial hull
design, a general profile and an initial overview of the ship's propulsion. At this stage, the
designer can iterate on the ship's design, adding detail and refining the design at each
stage.

The designer will typically produce an overall plan, a general specification describing the
peculiarities of the vessel, and construction blueprints to be used at the building site.
Designs for larger or more complex vessels may also include sail plans, electrical
schematics, and plumbing and ventilation plans.

A ship launching at the Northern Shipyard in Gdansk, Poland

[edit] Construction

Ship construction takes place in a shipyard, and can last from a few months for a unit
produced in series, to several years to reconstruct a wooden boat like the frigate
Hermione, to more than 10 years for an aircraft carrier. Hull materials and vessel size
play a large part in determining the method of construction. The hull of a mass-produced
fiberglass sailboat is constructed from a mold, while the steel hull of a cargo ship is made
from large sections welded together as they are built.

Generally, construction starts with the hull, and on vessels over about 30 metres, by the
laying of the keel. This is done in a drydock or on land. Once the hull is assembled and
painted, it is launched. The last stages, such as raising the superstructure and adding
equipment and accommodation, can be done after the vessel is afloat.

Once completed, the vessel is delivered to the customer. Ship launching is often a
ceremony of some significance, and is usually when the vessel is formally named. A
typical small rowboat can cost under US$100, a small speedboat around $1,000, a cruising
sailboat tens of thousands of dollars, and a Vendée Globe class sailboat about
$2,000,000. A 25-metre (82 ft) trawler may cost $2.5 million, and a 1,000-person-
capacity high-speed passenger ferry can cost in the neighborhood of $50 million. A ship's
cost partly depends on its complexity: a small, general cargo ship will cost $20 million, a
Panamax-sized bulk carrier around $35 million, a supertanker around $105 million and a
large LNG carrier nearly $200 million. The most expensive ships owe much of their cost to
embedded electronics: a Seawolf class submarine costs around $2 billion, and an aircraft
carrier about $3.5 billion.
An able seaman uses a needlegun scaler while refurbishing a mooring winch at sea.

[edit] Repair and conversion

Ships undergo nearly constant maintenance during their career, whether they be
underway, pierside, or in some cases, in periods of reduced operating status between
charters or shipping seasons.

Most ships, however, require trips to special facilities such as a drydock at regular
intervals. Tasks often done at drydock include removing biological growths on the hull,
sandblasting and repainting the hull, and replacing sacrificial anodes used to protect
submerged equipment from corrosion. Major repairs to the propulsion and steering
systems as well as major electrical systems are also often performed at dry dock.

Vessels that sustain major damage at sea may be repaired at a facility equipped for major
repairs, such as a shipyard. Ships may also be converted for a new purpose: oil tankers
are often converted into floating production storage and offloading units.

[edit] End of service

Shipbreaking near Chittagong, Bangladesh

Most ocean-going cargo ships have a life expectancy of between 20 and 30 years. A
sailboat made of plywood or fiberglass can last between 30 and 40 years. Solid wooden
ships can last much longer but require regular maintenance. Carefully maintained steel-
hulled yachts can have a lifespan of over 100 years.

As ships age, forces such as corrosion, osmosis, and rotting compromise hull strength,
and a vessel becomes too dangerous to sail. At this point, it can be scuttled at sea or
scrapped by shipbreakers. Ships can also be used as museum ships, or expended to
construct breakwaters or artificial reefs.

Many ships do not make it to the scrapyard, and are lost in fires, collisions, grounding, or
sinking at sea.

[edit] Measuring ships
One can measure ships in terms of overall length, length of the ship at the waterline,
beam (breadth), depth (distance between the crown of the weather deck and the top of the
keelson), draft (distance between the highest waterline and the bottom of the ship) and
tonnage. A number of different tonnage definitions exist and are used when describing
merchant ships for the purpose of tolls, taxation, etc.
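
One widely used measure, gross tonnage, is a dimensionless index of a ship's total
enclosed volume defined by the 1969 IMO Tonnage Convention. A minimal sketch of the
calculation in Python (the ship volume used below is hypothetical):

```python
import math

def gross_tonnage(volume_m3):
    """Gross tonnage per the 1969 IMO Tonnage Convention: GT = K * V,
    where K = 0.2 + 0.02 * log10(V) and V is total enclosed volume in m^3.
    Gross tonnage is a dimensionless index, not a weight.
    """
    k = 0.2 + 0.02 * math.log10(volume_m3)
    return k * volume_m3

# A hypothetical ship with 10,000 m^3 of enclosed volume:
print(round(gross_tonnage(10_000)))  # 2800
```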

In Britain until Samuel Plimsoll's Merchant Shipping Act of 1876, ship-owners could
load their vessels until their decks were almost awash, resulting in a dangerously unstable
condition. Anyone who signed on to such a ship for a voyage and, upon realizing the
danger, chose to leave the ship, could end up in jail. Plimsoll, a Member of Parliament,
realised the problem and engaged some engineers to derive a fairly simple formula to
determine the position of a line on the side of any specific ship's hull which, when it
reached the surface of the water during loading of cargo, meant the ship had reached its
maximum safe loading level. To this day, that mark, called the "Plimsoll Line", exists on
ships' sides, and consists of a circle with a horizontal line through the centre. On the
Great Lakes of North America the circle is replaced with a diamond. Because different
types of water (summer, fresh, tropical fresh, winter north Atlantic) have different
densities, subsequent regulations required painting a group of lines forward of the
Plimsoll mark to indicate the safe depth (or freeboard above the surface) to which a
specific ship could load in water of various densities. Hence the "ladder" of lines seen
forward of the Plimsoll mark to this day. This is called the "freeboard mark" or "load line
mark" in the marine industry.
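
The density effect behind the load-line ladder can be illustrated with a small
calculation: a ship displacing the same mass floats deeper in less dense fresh water. A
rough sketch in Python, assuming a constant waterplane area and hypothetical ship
figures:

```python
# Extra draft when a ship moves from seawater into fresh water, assuming
# the waterplane area stays roughly constant over the small draft change.
RHO_SEA = 1025.0    # kg/m^3, typical seawater density
RHO_FRESH = 1000.0  # kg/m^3, fresh water density

def extra_draft(displacement_kg, waterplane_area_m2):
    """Increase in draft (m) when moving from seawater to fresh water."""
    vol_sea = displacement_kg / RHO_SEA      # submerged volume in seawater
    vol_fresh = displacement_kg / RHO_FRESH  # submerged volume in fresh water
    return (vol_fresh - vol_sea) / waterplane_area_m2

# A hypothetical 50,000-tonne ship with a 10,000 m^2 waterplane
# sinks roughly 12 cm deeper in fresh water:
print(round(extra_draft(50_000_000, 10_000), 3))  # 0.122
```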

[edit] Ship pollution
Ship pollution is the pollution of air and water by shipping. The problem has been
accelerating as trade becomes increasingly globalized, posing a growing threat to the
world's oceans and waterways: "…shipping traffic to and from the USA is projected to
double by 2020."[25] Because of
increased traffic in ocean ports, pollution from ships also directly affects coastal areas.
The pollution produced affects biodiversity, climate, food, and human health. However,
the extent of this pollution and its global effects remain debated, and the issue has
been a prominent international topic for the past 30 years.

[edit] Oil spills
The Exxon Valdez spilled 10.8 million gallons of oil into Alaska's Prince William Sound.[26]

Oil spills have devastating effects on the environment. Crude oil contains polycyclic
aromatic hydrocarbons (PAHs) which are very difficult to clean up, and last for years in
the sediment and marine environment.[27] Marine species constantly exposed to PAHs can
exhibit developmental problems, susceptibility to disease, and abnormal reproductive
cycles.

By the sheer amount of oil they carry, modern oil tankers pose an inherent threat to the
environment. A large tanker can carry 2 million barrels (320,000 m3) of crude oil, or
about 84,000,000 US gallons. This is more than six times the amount spilled in the widely
known Exxon Valdez incident, in which the ship ran aground in March 1989 and released
10.8 million gallons of oil into the sea. Despite the efforts of scientists, managers,
and volunteers, over 400,000 seabirds, about 1,000 sea otters, and immense numbers of
fish were killed.[27]
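
The scale comparison above is straightforward unit arithmetic. A quick check in Python,
using the standard 42-US-gallon oil barrel:

```python
GALLONS_PER_BARREL = 42  # the standard US oil barrel

tanker_barrels = 2_000_000         # cargo of a large crude carrier
exxon_valdez_gallons = 10_800_000  # spilled in March 1989

tanker_gallons = tanker_barrels * GALLONS_PER_BARREL
print(tanker_gallons)                                   # 84000000
print(round(tanker_gallons / exxon_valdez_gallons, 1))  # 7.8
```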

The International Tanker Owners Pollution Federation has researched 9,351 accidental
spills since 1974.[28] According to this study, most spills result from routine operations
such as loading cargo, discharging cargo, and taking on fuel oil.[28] 91% of the operational
oil spills were small, resulting in less than 7 tons per spill.[28] Spills resulting from
accidents like collisions, groundings, hull failures, and explosions are much larger, with
84% of these involving losses of over 700 tons.[28]

Following the Exxon Valdez spill, the United States passed the Oil Pollution Act of 1990
(OPA-90), which included a stipulation that all tankers entering its waters be double-
hulled by 2015. Following the sinkings of the Erika (1999) and Prestige (2002), the
European Union passed its own stringent anti-pollution packages (known as Erika I, II,
and III), which require all tankers entering its waters to be double-hulled by 2010. The
Erika packages are controversial because they introduced the new legal concept of
"serious negligence".[29]

[edit] Ballast water
A cargo ship pumps ballast water over the side

When a large vessel such as a container ship or an oil tanker unloads cargo, seawater is
pumped into compartments in the hull to help stabilize and balance the ship. During
loading, this ballast water is pumped out from these compartments.

One of the problems with ballast water transfer is the transport of harmful organisms.
Meinesz[30] believes that one of the worst cases of a single invasive species causing harm
to an ecosystem can be attributed to a seemingly harmless jellyfish. Mnemiopsis leidyi, a
species of comb jellyfish that inhabits estuaries from the United States to the Valdés
peninsula in Argentina along the Atlantic coast, has caused notable damage in the Black
Sea. It was first introduced in 1982, and is thought to have been transported to the Black
Sea in a ship’s ballast water. The population of the jellyfish shot up exponentially and, by
1988, it was wreaking havoc upon the local fishing industry. "The anchovy catch fell
from 204,000 tons in 1984 to 200 tons in 1993; sprat from 24,600 tons in 1984 to
12,000 tons in 1993; horse mackerel from 4,000 tons in 1984 to zero in 1993."[30] Now
that the jellyfish have exhausted the zooplankton, including fish larvae, their numbers
have fallen dramatically, yet they continue to maintain a stranglehold on the ecosystem.
Recently the jellyfish have been discovered in the Caspian Sea. Invasive species can take
over once-occupied areas, facilitate the spread of new diseases, introduce new genetic
material, alter landscapes and jeopardize the ability of native species to obtain food. "On
land and in the sea, invasive species are responsible for about 137 billion dollars in lost
revenue and management costs in the U.S. each year."[27]

Ballast and bilge discharge from ships can also spread human pathogens and other
harmful diseases and toxins potentially causing health issues for humans and marine life
alike.[31] Discharges into coastal waters, along with other sources of marine pollution,
have the potential to be toxic to marine plants, animals, and microorganisms, causing
alterations such as changes in growth, disruption of hormone cycles, birth defects,
suppression of the immune system, and disorders resulting in cancer, tumors, and genetic
abnormalities or even death.[27]
[edit] Exhaust emissions

Exhaust emissions from ships are considered to be a significant source of air pollution.
“Seagoing vessels are responsible for an estimated 14 percent of emissions of nitrogen
from fossil fuels and 16 percent of the emissions of sulfur from petroleum uses into the
atmosphere.”[27] In Europe ships make up a large percentage of the sulfur introduced to
the air, “…as much sulfur as all the cars, lorries and factories in Europe put together.”[32]
“By 2010, up to 40% of air pollution over land could come from ships.”[32] Sulfur in the
air creates acid rain which damages crops and buildings. When inhaled, sulfur is known
to cause respiratory problems and to increase the risk of a heart attack.[32]

[edit] See also
Nautical portal
• Airship
• Boat
• Chartering (shipping)
• Dynamic positioning
• Ferry
• Flag State
• Marine electronics
• Maritime history
• Maritime law
• Sailing
• Sailor
• Ship burial
• Ship transport
• Spaceship
• Train ferry
• Training ship
• Whaler

[edit] Model ships

• Ship model basin
• Ship model
• Ship replica

[edit] Lists

• List of ships
• List of world's longest ships
• List of civilian nuclear ships
• List of fictional ships
• List of historical ship types
• List of shipwrecks

[edit] Notes
Bicycle
A common utility bicycle

Wooden Dandy horse (around 1820), the first two-wheeler and as such the archetype of
the bicycle

The bicycle, bike, or cycle is a pedal-driven, human-powered vehicle with two wheels
attached to a frame, one behind the other.

Bicycles were introduced in the 19th century and now number about one billion
worldwide.[1] They are the principal means of transportation in many regions. They also
provide a popular form of recreation, and have been adapted for such uses as children's
toys, adult fitness, military and police applications, courier services, and competitive
sports.

The basic shape and configuration of a typical bicycle has changed little since the first
chain-driven model was developed around 1885.[2] Many details have been improved,
especially since the advent of modern materials and computer-aided design. These have
allowed for a proliferation of specialized designs for particular types of cycling.

The bicycle has had a considerable effect on human society, in both the cultural and
industrial realms. In its early years, bicycle construction drew on pre-existing
technologies; more recently, bicycle technology has, in turn, contributed both to old and
new areas.

Contents

• 1 History
• 2 Uses for bicycles
• 3 Technical aspects
o 3.1 Types of bicycles
o 3.2 Dynamics
o 3.3 Performance
o 3.4 Construction and parts
 3.4.1 Frame
 3.4.2 Drivetrain and gearing
 3.4.3 Steering and seating
 3.4.4 Brakes
 3.4.5 Suspension
 3.4.6 Wheels
 3.4.7 Accessories, repairs, and tools
 3.4.8 Standards
 3.4.9 Parts
• 4 Social and historical aspects
o 4.1 In daily life
o 4.2 Female emancipation
o 4.3 Economic implications
o 4.4 Legal requirements
• 5 See also
• 6 Notes
• 7 References

• 8 External links

[edit] History
Main article: History of the bicycle

Multiple innovators contributed to the history of the bicycle by developing precursor
human-powered vehicles. The documented ancestors of today's modern bicycle were
known as push bikes (still called push bikes outside of North America), draisines, or
hobby horses. Being the first human means of transport to make use of the two-wheeler
principle, the draisine (or Laufmaschine, "running machine"), invented by the German
Baron Karl von Drais, is regarded as the archetype of the bicycle. It was introduced by
Drais to the public in Mannheim in summer 1817 and in Paris in 1818.[3] Its rider sat
astride a wooden frame supported by two in-line wheels and pushed the vehicle along
with his/her feet while steering the front wheel.
A penny-farthing or ordinary bicycle photographed in the Škoda museum in the Czech
Republic

In the early 1860s, Frenchmen Pierre Michaux and Pierre Lallement took bicycle design
in a new direction by adding a mechanical crank drive with pedals on an enlarged front
wheel. Another French inventor by the name of Douglas Grasso had a failed prototype of
Pierre Lallement's bicycle several years earlier. Several rear-wheel-drive designs
followed, the best known being the rod-driven velocipede by Scotsman
Thomas McCall in 1869. The French creation, made of iron and wood, developed into the
"penny-farthing" (more formally an "ordinary bicycle", a retronym, since there was then
no other kind).[4] It featured a tubular steel frame on which were mounted wire spoked
wheels with solid rubber tires. These bicycles were difficult to ride due to their very high
seat and poor weight distribution.

Bicycle in Plymouth, England at the start of the 20th century

The dwarf ordinary addressed some of these faults by reducing the front wheel diameter
and setting the seat further back. This necessitated the addition of gearing, effected in a
variety of ways, to attain sufficient speed. Having to both pedal and steer via the front
wheel remained a problem. J. K. Starley, J. H. Lawson, and Shergold solved this problem
by introducing the chain drive (originated by Henry Lawson's unsuccessful "bicyclette"),[5]
connecting the frame-mounted pedals to the rear wheel. These models were known as
dwarf safeties, or safety bicycles, for their lower seat height and better weight
distribution. Starley's 1885 Rover is usually described as the first recognizably modern
bicycle. Soon, the seat tube was added, creating the double-triangle diamond frame of the
modern bike.

Further innovations increased comfort and ushered in a second bicycle craze, the 1890s'
Golden Age of Bicycles. In 1888, Scotsman John Boyd Dunlop introduced the pneumatic
tire, which soon became universal. Soon after, the rear freewheel was developed,
enabling the rider to coast. This refinement led to the 1898 invention of coaster brakes.
Derailleur gears and hand-operated cable-pull brakes were also developed during these
years, but were only slowly adopted by casual riders. By the turn of the century, cycling
clubs flourished on both sides of the Atlantic, and touring and racing became widely
popular.

Bicycles and horse buggies were the two mainstays of private transportation just prior to
the automobile, and the grading of smooth roads in the late 19th century was stimulated
by the widespread advertising, production, and use of these devices.

[edit] Uses for bicycles

Transporting milk churns in Kolkata, India.

Bicycles have been and are employed for many uses:

Working bicycle in Amsterdam, Netherlands.

• Utility: bicycle commuting and utility cycling
• Work: mail delivery, paramedics, police, and general delivery.
• Recreation: bicycle touring, mountain biking, BMX and physical fitness.
• Racing: track racing, criterium, roller racing and time trial to multi-stage events
like the Tour of California, Giro d'Italia, the Tour de France, the Vuelta a España,
the Volta a Portugal, among others.
• Military: scouting, troop movement, supply of provisions, and patrol. See bicycle
infantry.
• Show: entertainment and performance, e.g. by circus clowns; the bicycle has also
been used as a musical instrument by Frank Zappa.

[edit] Technical aspects

A Half Wheeler trailer bike at the Golden Gate Bridge

The bicycle has undergone continual adaptation and improvement since its inception.
These innovations have continued with the advent of modern materials and computer-
aided design, allowing for a proliferation of specialized bicycle types.

[edit] Types of bicycles

Main article: List of bicycle types

Bicycles can be categorized in different ways: e.g. by function, by number of riders, by
general construction, by gearing or by means of propulsion. The more common types
include utility bicycles, mountain bicycles, racing bicycles, touring bicycles, hybrid
bicycles, cruiser bicycles, and BMX bicycles. Less common are tandems, lowriders, tall
bikes, fixed gear (fixed-wheel), folding models and recumbents (one of which was used
to set the IHPVA Hour record).

Unicycles, tricycles and quadricycles are not strictly bicycles, as they have respectively
one, three and four wheels, but are often referred to informally as "bikes".

Bicycles leaning in a turn

[edit] Dynamics
Main article: Bicycle and motorcycle dynamics

A bicycle stays upright by being steered so as to keep its center of gravity over its wheels.
This steering is usually provided by the rider, but under certain conditions may be
provided by the bicycle itself.

A bicycle must lean in order to turn. This lean is induced by a method known as
countersteering, which can be performed by the rider turning the handlebars directly with
the hands or indirectly by leaning the bicycle.

Short-wheelbase or tall bicycles, when braking, can generate enough stopping force at the
front wheel in order to flip longitudinally. The act of purposefully using this force to lift
the rear wheel and balance on the front without tipping over is a trick known as a stoppie,
endo or front wheelie.

[edit] Performance

Main article: Bicycle performance

A racing upright bicycle

The bicycle is extraordinarily efficient in both biological and mechanical terms. The
bicycle is the most efficient self-powered means of transportation in terms of energy a
person must expend to travel a given distance.[6] From a mechanical viewpoint, up to 99%
of the energy delivered by the rider into the pedals is transmitted to the wheels, although
the use of gearing mechanisms may reduce this by 10-15%.[7][8] In terms of the ratio of
cargo weight a bicycle can carry to total weight, it is also a most efficient means of cargo
transportation.

A recumbent bicycle

A human being traveling on a bicycle at low to medium speeds of around 10-15 mph (15-
25 km/h), using only the energy required to walk, is the most energy-efficient means of
transport generally available. Air drag, which is proportional to the square of speed,
requires dramatically higher power outputs as speeds increase. A bicycle which places
the rider in a seated position, supine position or, more rarely, prone position, and which
may be covered in an aerodynamic fairing to achieve very low air drag, is referred to as a
recumbent bicycle or human powered vehicle. On an upright bicycle, the rider's body
creates about 75% of the total drag of the bicycle/rider combination.
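
Since drag force grows with the square of speed, the power needed to overcome it (force
times speed) grows with its cube. A rough illustration in Python, using an assumed
drag-area value for an upright rider:

```python
RHO_AIR = 1.2  # kg/m^3, approximate air density at sea level

def drag_power(speed_ms, cd_area_m2=0.5):
    """Power (watts) needed to overcome aerodynamic drag at a given speed.

    cd_area_m2 is the drag coefficient times frontal area; 0.5 m^2 is an
    illustrative value for an upright rider, not a measured figure.
    """
    drag_force = 0.5 * RHO_AIR * cd_area_m2 * speed_ms ** 2  # F = 1/2 rho CdA v^2
    return drag_force * speed_ms                             # P = F * v

# Doubling the speed multiplies the aerodynamic power requirement by 2^3 = 8:
print(drag_power(5.0))   # 37.5 W
print(drag_power(10.0))  # 300.0 W
```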

In addition, the carbon dioxide generated in the production and transportation of the food
required by the bicyclist, per mile traveled, is less than 1/10th that generated by energy
efficient cars.[9]

[edit] Construction and parts

In its early years, bicycle construction drew on pre-existing technologies. More recently,
bicycle technology has in turn contributed ideas in both old and new areas.

[edit] Frame

Main article: Bicycle frame

Diagram of a bicycle.

The great majority of today's bicycles have a frame with upright seating which looks
much like the first chain-driven bike.[2] Such upright bicycles almost always feature the
diamond frame, a truss consisting of two triangles: the front triangle and the rear triangle.
The front triangle consists of the head tube, top tube, down tube and seat tube. The head
tube contains the headset, the set of bearings that allows the fork to turn smoothly for
steering and balance. The top tube connects the head tube to the seat tube at the top, and
the down tube connects the head tube to the bottom bracket. The rear triangle consists of
the seat tube and paired chain stays and seat stays. The chain stays run parallel to the
chain, connecting the bottom bracket to the rear dropouts. The seat stays connect the top
of the seat tube (at or near the same point as the top tube) to the rear dropouts.

A Triumph with a step-through frame.

Historically, women's bicycle frames had a top tube that connected in the middle of the
seat tube instead of the top, resulting in a lower standover height at the expense of
compromised structural integrity, since this places a strong bending load in the seat tube,
and bicycle frame members are typically weak in bending. This design, referred to as a
step-through frame, allows the rider to mount and dismount in a dignified way while
wearing a skirt or dress. While some women's bicycles continue to use this frame style,
there is also a variation, the mixte, which splits the top tube into two small top tubes that
bypass the seat tube and connect to the rear dropouts. The ease of stepping through is also
appreciated by those with limited flexibility or other joint problems. Because of their
persistent image as "women's" bicycles, step-through frames are not common in larger
sizes.

Another style is the recumbent bicycle. These are inherently more aerodynamic than
upright versions, as the rider may lean back onto a support and operate pedals that are on
about the same level as the seat. The world's fastest bicycle is a recumbent bicycle but
this type was banned from competition in 1934 by the Union Cycliste Internationale.[10]

Historically, materials used in bicycles have followed a similar pattern as in aircraft, the
goal being high strength and low weight. Since the late 1930s alloy steels have been used
for frame and fork tubes in higher quality machines. Celluloid found application in
mudguards, and aluminum alloys are increasingly used in components such as
handlebars, seat post, and brake levers. In the 1980s aluminum alloy frames became
popular, and their affordability now makes them common. More expensive carbon fiber
and titanium frames are now also available, as well as advanced steel alloys and even
bamboo.

[edit] Drivetrain and gearing

A set of rear sprockets (also known as a cassette) and a derailleur
For more details on this topic, see bicycle gearing.

Since cyclists' legs are most efficient over a narrow range of pedalling speeds (cadence),
a variable gear ratio helps a cyclist to maintain an optimum pedalling speed while
covering varied terrain. As a first approximation, utility bicycles often use a hub gear
with a small number (3 to 5) of widely-spaced gears, road bicycles and racing bicycles
use derailleur gears with a moderate number (10 to 22) of closely-spaced gears, while
mountain bicycles, hybrid bicycles, and touring bicycles use derailleur gears with a larger
number (15 to 30) of moderately-spaced gears, often including an extremely low gear
(granny gear) for climbing steep hills.

Different gears and ranges of gears are appropriate for different people and styles of
cycling. Multi-speed bicycles allow gear selection to suit the circumstances, e.g. it may
be comfortable to use a high gear when cycling downhill, a medium gear when cycling
on a flat road, and a low gear when cycling uphill. In a lower gear every turn of the
pedals leads to fewer rotations of the rear wheel. This allows the energy required to move
the same distance to be distributed over more pedal turns, reducing fatigue when riding
uphill, with a heavy load, or against strong winds. A higher gear allows a cyclist to make
fewer pedal cycles to maintain a given speed, but with more effort per turn of the pedals.
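
The trade-off between pedal turns and effort can be made concrete with a development
calculation. A small Python sketch (the tooth counts and wheel circumference are
illustrative):

```python
def speed_kmh(chainring_teeth, sprocket_teeth, cadence_rpm, wheel_circumference_m=2.1):
    """Road speed (km/h) for a given gear and pedalling cadence.

    Each crank revolution turns the rear wheel chainring/sprocket times, so the
    distance covered per pedal turn ("development") is that ratio times the wheel
    circumference. The 2.1 m circumference is an illustrative road-wheel value.
    """
    development_m = (chainring_teeth / sprocket_teeth) * wheel_circumference_m
    metres_per_minute = development_m * cadence_rpm
    return metres_per_minute * 60 / 1000

# The same 90 rpm cadence in a low climbing gear and a high gear:
print(round(speed_kmh(34, 28, 90), 1))  # 13.8 km/h
print(round(speed_kmh(48, 12, 90), 1))  # 45.4 km/h
```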

The drivetrain begins with pedals which rotate the cranks, which are held in axis by the
bottom bracket. Most bicycles use a chain to transmit power to the rear wheel. A
relatively small number of bicycles use a shaft drive to transmit power. A very small
number of bicycles (mainly single-speed bicycles intended for short-distance commuting)
use a belt drive as an oil-free way of transmitting power.

A bicycle with shaft drive instead of a chain

With a chain drive transmission, a chainring attached to a crank drives the chain, which
in turn rotates the rear wheel via the rear sprocket(s) (cassette or freewheel). There are
four gearing options: a two-speed hub gear integrated with the chainring; up to three
chainrings; up to ten sprockets; or a hub gear built into the rear wheel (3-speed to
14-speed). The most common setups are either a rear hub gear or multiple chainrings
combined with multiple sprockets (other combinations are possible but less common).

With a shaft drive transmission, a gear set at the bottom bracket turns the shaft, which
then turns the rear wheel via a gear set connected to the wheel's hub. There is some small
loss of efficiency due to the two gear sets needed. The only gearing option with a shaft
drive is to use a hub gear.

[edit] Steering and seating
Conventional dropdown handlebars with added aerobars

The handlebars turn the fork and the front wheel via the stem, which rotates within the
headset. Three styles of handlebar are common. Upright handlebars, the norm in Europe
and elsewhere until the 1970s, curve gently back toward the rider, offering a natural grip
and comfortable upright position. Drop handlebars are "dropped", offering the cyclist
either an aerodynamic "crouched" position or a more upright posture in which the hands
grip the brake lever mounts. Mountain bikes feature a straight handlebar which can
provide better low-speed handling due to its greater width.

A Selle San Marco saddle designed for women

Saddles also vary with rider preference, from the cushioned ones favored by short-
distance riders to narrower saddles which allow more room for leg swings. Comfort
depends on riding position. With comfort bikes and hybrids the cyclist sits high over the
seat, their weight directed down onto the saddle, such that a wider and more cushioned
saddle is preferable. For racing bikes where the rider is bent over, weight is more evenly
distributed between the handlebars and saddle, the hips are flexed, and a narrower and
harder saddle is more efficient. Differing saddle designs exist for male and female
cyclists, accommodating the genders' differing anatomies, although bikes typically are
sold with saddles most appropriate for men.

A recumbent bicycle has a reclined chair-like seat that some riders find more comfortable
than a saddle, especially riders who suffer from certain types of seat, back, neck,
shoulder, or wrist pain. Recumbent bicycles may have either under-seat or over-seat
steering.

[edit] Brakes

Main article: Bicycle brake systems
Linear-pull brake on rear wheel of a mountain bike

Modern bicycle brakes are either rim brakes, in which friction pads are compressed
against the wheel rims, internal hub brakes, in which the friction pads are contained
within the wheel hubs, or disc brakes. Disc brakes are common on off-road bicycles,
tandems and recumbent bicycles.

A front disc brake, mounted to the fork and hub

With hand-operated brakes, force is applied to brake levers mounted on the handlebars
and transmitted via Bowden cables or hydraulic lines to the friction pads. A rear hub
brake may be either hand-operated or pedal-actuated, as in the back pedal coaster brakes
which were popular in North America until the 1960s, and are still common in children's
bicycles.

Track bicycles do not have brakes. Brakes are not required for riding on a track because
all riders ride in the same direction around a track which does not necessitate sharp
deceleration. Track riders are still able to slow down because all track bicycles are fixed-
gear, meaning that there is no freewheel. Without a freewheel, coasting is impossible, so
when the rear wheel is moving, the crank is moving. To slow down one may apply
resistance to the pedals.

[edit] Suspension

Main article: Bicycle suspension
This mountain bicycle features oversized tires, a full-suspension frame, two disc brakes
and handlebars oriented perpendicular to the bike's axis

Bicycle suspension refers to the system or systems used to suspend the rider and all or
part of the bicycle. This serves two purposes:

• To keep the wheels in continuous contact with rough surfaces in order to improve
control.

• To isolate the rider and luggage from jarring due to rough surfaces.

Bicycle suspensions are used primarily on mountain bicycles, but are also common on
hybrid bicycles, and can even be found on some road bicycles, as they can help deal with
problematic vibration. Suspension is especially important on recumbent bicycles, since
while an upright bicycle rider can stand on the pedals to achieve some of the benefits of
suspension, a recumbent rider cannot.

[edit] Wheels

Main article: Bicycle wheel

The wheel axle fits into dropouts in the frame and forks. A pair of wheels may be called a
wheelset, especially in the context of ready-built "off the shelf", performance-oriented
wheels.

Tires vary enormously. Skinny road-racing tires may be completely smooth ("slick").
On the opposite extreme, off-road tires are much wider and thicker, and usually have a
deep tread for gripping in muddy conditions.

[edit] Accessories, repairs, and tools
Touring bicycle equipped with head lamp, pump, rear rack, fenders/mud-guards, water
bottles and cages, and numerous saddle-bags.

Puncture repair kit with tire levers, sandpaper to clean off an area of the inner tube
around the puncture, a tube of rubber solution (vulcanising fluid), round and oval patches,
a metal grater and piece of chalk to make chalk powder (to dust over excess rubber
solution). Kits often also include a wax crayon to mark the puncture location.

Some components, which are often optional accessories on sports bicycles, are standard
features on utility bicycles to enhance their usefulness and comfort. Mudguards, or
fenders, protect the cyclist and moving parts from spray when riding through wet areas
and chainguards protect clothes from oil on the chain while preventing clothing from
being caught between the chain and crankset teeth. Kick stands keep a bicycle upright
when parked. Front-mounted baskets for carrying goods are often used. Luggage carriers
and panniers mounted above the rear tire can be used to carry equipment or cargo.
Parents sometimes add rear-mounted child seats and/or an auxiliary saddle fitted to the
crossbar to transport children.

Toe-clips with straps and clipless pedals help keep the foot locked in the proper
position on the pedals and enable the cyclist to pull as well as push, although they are
not without hazards: the foot can remain locked in just when it is needed to prevent a
fall. Technical
accessories include cyclocomputers for measuring speed, distance, etc. Other accessories
include lights, reflectors, security locks, mirror, water bottles and cages, and bell.[11]

Bicycle helmets may help reduce injury in the event of a collision or accident, and a
certified helmet is legally required for some riders in some jurisdictions. Helmets are
classified by some as an accessory[11] and by others as an item of clothing.[12]

Many cyclists carry tool kits. These may include a tire patch kit (which may in turn
contain a hand pump or CO2 inflator, tire levers, spare tubes, self-adhesive patches or
tube-patching material with adhesive, a piece of sandpaper or a metal grater for
roughening the tube surface to be patched,[13][14] and sometimes even a block of
French chalk), wrenches, hex keys, screwdrivers, and a chain tool. There are also cycling
specific multi-tools that combine many of these implements into a single compact device.
More specialized bicycle components may require more complex tools, including
proprietary tools specific for a given manufacturer.
Some bicycle parts, particularly hub-based gearing systems, are complex, and many
cyclists prefer to leave maintenance and repairs to professional bicycle mechanics. In
some areas it is possible to purchase road-side assistance from companies such as the
Better World Club. Other cyclists maintain their own bicycles, perhaps as part of their
enjoyment of the hobby of cycling or simply for economic reasons. The ability to repair
and maintain one's own bicycle is also celebrated within the DIY movement.

[edit] Standards

A number of formal and industry standards exist for bicycle components to help make
spare parts exchangeable and to maintain a minimum product safety.

The International Organization for Standardization, ISO, has a special technical
committee for cycles, TC149, that has the following scope: "Standardization in the field
of cycles, their components and accessories with particular reference to terminology,
testing methods and requirements for performance and safety, and interchangeability."

CEN, European Committee for Standardisation, also has a specific Technical Committee,
TC333, that defines European standards for cycles. Their mandate states that EN cycle
standards shall harmonise with ISO standards. Some CEN cycle standards were
developed before ISO published their standards, leading to strong European influences in
this area. European cycle standards tend to describe minimum safety requirements, while
ISO standards have historically harmonized parts geometry.[15]

[edit] Parts

For details on specific bicycle parts, see list of bicycle parts and category:bicycle parts.

[edit] Social and historical aspects
The bicycle has had a considerable effect on human society, in both the cultural and
industrial realms.

[edit] In daily life

A commuting bike in Amsterdam
Around the turn of the 20th century, bicycles reduced crowding in inner-city tenements
by allowing workers to commute from more spacious dwellings in the suburbs. They also
reduced dependence on horses. Bicycles allowed people to travel for leisure into the
country, since bicycles were three times as energy efficient as walking and three to four
times as fast.

A bike-sharing station in Barcelona

Recently, several European cities have implemented successful schemes known as
community bicycle programs or bike-sharing. These initiatives complement a city's
public transport system and offer an alternative to motorized traffic to help reduce
congestion and pollution. Users take a bicycle at a parking station, use it for a limited
amount of time, and then return it to the same or different station. Examples include
Bicing in Barcelona, Vélo'v in Lyon and Vélib' in Paris.

A man uses a bicycle to cargo goods in Ouagadougou, Burkina Faso (2007)

In cities where the bicycle is not an integral part of the planned transportation system,
commuters often use bicycles as elements of a mixed-mode commute, where the bike is
used to travel to and from train stations or other forms of rapid transit. Folding bicycles
are useful in these scenarios, as they are less cumbersome when carried aboard. Los
Angeles removed a small amount of seating on some trains to make more room for
bicycles and wheelchairs.[16]
Bicycles offer an important mode of transport in many developing countries. Until
recently, bicycles have been a staple of everyday life throughout Asian countries. They
are the most frequently used method of transport for commuting to work, school,
shopping, and life in general. As a result, bicycles there are almost always equipped with
baskets.

[edit] Female emancipation

Woman with bicycle, 1890s

The diamond-frame safety bicycle gave women unprecedented mobility, contributing to
their emancipation in Western nations. As bicycles became safer and cheaper, more
women had access to the personal freedom they embodied, and so the bicycle came to
symbolize the New Woman of the late nineteenth century, especially in Britain and the
United States.

The bicycle was recognized by nineteenth-century feminists and suffragists as a "freedom
machine" for women. American Susan B. Anthony said in a New York World interview
on February 2, 1896: "Let me tell you what I think of bicycling. I think it has done more
to emancipate women than anything else in the world. It gives women a feeling of
freedom and self-reliance. I stand and rejoice every time I see a woman ride by on a
wheel...the picture of free, untrammeled womanhood." In 1895 Frances Willard, the
tightly-laced president of the Women’s Christian Temperance Union, wrote a book called
How I Learned to Ride the Bicycle, in which she praised the bicycle she learned to ride
late in life, and which she named "Gladys", for its "gladdening effect" on her health and
political optimism. Willard used a cycling metaphor to urge other suffragists to action,
proclaiming, "I would not waste my life in friction when it could be turned into
momentum."

Columbia Bicycles advertisement from 1886

Male anger at the freedom symbolized by the New (bicycling) Woman was demonstrated
when the male undergraduates of Cambridge University showed their opposition to the
admission of women as full members of the university by hanging a woman bicyclist in
effigy in the main town square. This was as late as 1897.[17] The bicycle craze in the
1890s also led to a movement for so-called rational dress, which helped liberate women
from corsets and ankle-length skirts and other restrictive garments, substituting the then-
shocking bloomers.

[edit] Economic implications

Bicycle manufacturing proved to be a training ground for other industries and led to the
development of advanced metalworking techniques, both for the frames themselves and
for special components such as ball bearings, washers, and sprockets. These techniques
later enabled skilled metalworkers and mechanics to develop the components used in
early automobiles and aircraft.

They also served to teach the industrial models later adopted, including mechanization
and mass production (later copied and adopted by Ford and General Motors),[18] vertical
integration[19] (also later copied and adopted by Ford), aggressive advertising[20] (as much
as ten percent of all advertising in U.S. periodicals in 1898 was by bicycle makers),[21]
and lobbying for better roads (which had the side benefit of acting as advertising, and of
improving sales by providing more places to ride);[22] all of these were first practised by
Pope.[23] In addition, bicycle makers adopted the annual model change[24][25] (later derided
as planned obsolescence, and usually credited to General Motors), which proved very
successful.[26]

Furthermore, bicycles were an early example of conspicuous consumption, being adopted
by the fashionable elites.[27] In addition, by serving as a platform for accessories, which
could ultimately cost more than the bicycle itself, the bicycle paved the way for the likes
of the Barbie doll.[28]

Moreover, they helped create, or enhance, new kinds of businesses, such as bicycle
messengers,[29] travelling seamstresses,[30] riding academies,[31] and racing rinks[32] (their
board tracks were later adapted to early motorcycle and automobile racing). There were
also a variety of new inventions, such as spoke tighteners,[33] specialized lights,[34]
socks and shoes,[35] and even cameras (such as the Eastman Company's Poco).[36]
Probably the best known and most widely used of these inventions, adopted well beyond
cycling, is Charles Bennett's Bike Web, which came to be called the "jock strap".[37]
They also presaged a move away from public transit[38] that would explode with the
introduction of the automobile. This liberation would be repeated with the appearance of
the snowmobile.[39]

J. K. Starley's company became the Rover Cycle Company Ltd. in the late 1890s, and
then simply the Rover Company when it started making cars. The Morris Motor
Company (in Oxford) and Škoda also began in the bicycle business, as did the Wright
brothers.[40] Alistair Craig, whose company eventually emerged to become the engine
manufacturers Ailsa Craig, also started from manufacturing bicycles, in Glasgow in
March 1885.

In general, U.S. and European cycle manufacturers used to assemble cycles from their
own frames and components made by other companies, although very large companies
(such as Raleigh) used to make almost every part of a bicycle (including bottom brackets,
axles, etc.) In recent years, those bicycle makers have greatly changed their methods of
production. Now, almost none of them produce their own frames.

Many newer or smaller companies only design and market their products; the actual
production is done by Asian companies. For example, some sixty percent of the world's
bicycles are now being made in China. Despite this shift in production, as nations such as
China and India become more wealthy, their own use of bicycles has declined due to the
increasing affordability of cars and motorcycles. One of the major reasons for the
proliferation of Chinese-made bicycles in foreign markets is the lower cost of labour in
China.[41]

One of the profound economic implications of bicycle use is that it liberates the user from
oil consumption (Ballantine, 1972). H. G. Wells said: “Every time I see an adult on a
bicycle, I no longer despair for the future of the human race.” (Quotegarden.com[1]). The
bicycle is a cheap, fast, healthy and environmentally friendly mode of transport (Illich,
1974).

[edit] Legal requirements

Reflectors for riding after dark
Early in their development, as with automobiles, there were restrictions on the operation
of bicycles. Along with advertising, and to gain free publicity, Albert A. Pope litigated
on behalf of cyclists.[42]

The 1968 Vienna Convention on Road Traffic of the United Nations considers a bicycle
to be a vehicle, and a person controlling a bicycle (whether actually riding or not) is
considered an operator. The traffic codes of many countries reflect these definitions and
demand that a bicycle satisfy certain legal requirements, sometimes even including
licensing, before it can be used on public roads. In many jurisdictions, it is an offence to
use a bicycle that is not in roadworthy condition.

In most jurisdictions, bicycles must have functioning front and rear lights when ridden
after dark. As some generator or dynamo-driven lamps only operate while moving, rear
reflectors are frequently also mandatory. Since a moving bicycle makes little noise, some
countries insist that bicycles have a warning bell for use when approaching pedestrians,
equestrians, and other cyclists.

[edit] See also


• Cycling - use of bicycles

General

• Bicycle safety
• Bicycle commuting
• Bicycle industry and List of bicycle manufacturing companies
• Bicycle and human powered vehicle museums, list of
• Bicycle lighting
• Bicycle lock
• Bicycle locker
• Bicycle parts
• Bicycle tools
• Trampe bicycle lift
Special uses and related vehicle types

• Balance bicycle
• Beach cruiser
• Bicycle trailer
• Boda-boda
• Cycle rickshaw
• Faired bicycle
• Folding bicycle
• Freight bicycle
• Infantry bicycle
• Monowheel
• Quadricycle
• Shaft-driven bicycle
• Tandem bicycle
• Trailer bike
• Tricycle
• Utility cycling
• Unicycle
• Velocipede
• Workbike

Other

• Human-powered transport
• Environment topics, list of
• Safety standards
• Transportation technology, timeline of

[edit] Notes

Concrete
This article is about the construction material. For other uses, see Concrete
(disambiguation).

Concrete being poured, raked and vibrated into place in residential construction in
Toronto, Ontario, Canada.
1930s vibrated concrete, manufactured in Croydon and installed by the LMS railway
after an art deco refurbishment in Meols.

A concrete factory with cement mixer trucks

Concrete is a construction material composed of cement (commonly Portland cement) as
well as other cementitious materials such as fly ash and slag cement, aggregate (generally
a coarse aggregate such as gravel, limestone, or granite, plus a fine aggregate such as
sand), water, and chemical admixtures. The word concrete comes from the Latin word
"concretus", which means "hardened" or "hard".
Concrete solidifies and hardens after mixing with water and placement due to a chemical
process known as hydration. The water reacts with the cement, which bonds the other
components together, eventually creating a stone-like material. Concrete is used to make
pavements, architectural structures, foundations, motorways/roads, bridges/overpasses,
parking structures, brick/block walls and footings for gates, fences and poles.

Concrete is used more than any other man-made material in the world.[1] As of 2006,
about 7 cubic kilometres of concrete are made each year—more than one cubic metre for
every person on Earth.[2] Concrete powers a US $35-billion industry which employs more
than two million workers in the United States alone.[citation needed] More than 55,000 miles
(89,000 km) of highways in America are paved with this material. The People's Republic
of China currently consumes 40% of the world's cement/concrete production.
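The per-capita figure quoted above can be checked with a quick back-of-the-envelope calculation. The annual production volume comes from the article; the world population of roughly 6.5 billion in 2006 is an assumption on my part:

```python
# Rough check of the "more than one cubic metre per person" claim.
annual_concrete_km3 = 7                 # cubic kilometres per year (from the article)
world_population = 6.5e9                # approximate 2006 population (assumed)

annual_concrete_m3 = annual_concrete_km3 * 1e9   # 1 km^3 = 1e9 m^3
per_capita_m3 = annual_concrete_m3 / world_population
print(round(per_capita_m3, 2))          # about 1.08 m^3 per person
```

The result is comfortably above one cubic metre per person, consistent with the claim in the text.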

Contents
[hide]

• 1 History
• 2 Composition
o 2.1 Cement
o 2.2 Water
o 2.3 Aggregates
o 2.4 Reinforcement
o 2.5 Chemical admixtures
o 2.6 Mineral admixtures and blended cements
• 3 Concrete Production
o 3.1 Mixing Concrete
o 3.2 Workability
o 3.3 Curing
• 4 Properties
o 4.1 Strength
o 4.2 Elasticity
o 4.3 Expansion and shrinkage
o 4.4 Cracking
▪ 4.4.1 Shrinkage cracking
▪ 4.4.2 Tension cracking
o 4.5 Creep
o 4.6 Physical properties
• 5 Damage modes
o 5.1 Fire
o 5.2 Aggregate expansion
o 5.3 Sea water effects
o 5.4 Bacterial corrosion
o 5.5 Chemical damage
▪ 5.5.1 Carbonation
▪ 5.5.2 Chlorides
▪ 5.5.3 Sulphates
o 5.6 Leaching
o 5.7 Physical damage
• 6 Types of concrete
o 6.1 Mix Design
o 6.2 Regular concrete
o 6.3 High-strength concrete
o 6.4 High-performance concrete
o 6.5 Self-consolidating concretes
o 6.6 Shotcrete
o 6.7 Pervious concrete
▪ 6.7.1 Installation
▪ 6.7.2 Characteristics
o 6.8 Cellular concrete
o 6.9 Cork-cement composites
o 6.10 Roller-compacted concrete
o 6.11 Glass concrete
o 6.12 Asphalt concrete
o 6.13 Rapid strength concrete
o 6.14 Rubberized concrete
o 6.15 Polymer concrete
o 6.16 Geopolymer or green concrete
o 6.17 Limecrete
o 6.18 Refractory Cement
• 7 Concrete testing
• 8 Concrete recycling
• 9 Use of concrete in structures
o 9.1 Mass concrete structures
o 9.2 Reinforced concrete structures
o 9.3 Prestressed concrete structures
• 10 See also
• 11 References
• 12 External links
o 12.1 Related article and publications

o 12.2 Industry associations

[edit] History
Aqueduct of Segovia

Many ancient civilizations used forms of concrete using dried mud, straw, and other
materials.

During the Roman Empire, Roman concrete was made from quicklime, pozzolanic
ash/pozzolana, and an aggregate of pumice; it was very similar to modern Portland
cement concrete. The widespread use of concrete in many Roman structures has ensured
that many survive almost intact to the present day. The Baths of Caracalla in Rome are
just one example of the longevity of concrete, which allowed the Romans to build this
and similar structures across the Roman Empire. Many Roman aqueducts have masonry
cladding to a concrete core, a technique they used in structures such as the Pantheon,
Rome, the interior dome of which is unclad concrete.

The secret of concrete was lost for 13 centuries until 1756, when the British engineer
John Smeaton pioneered the use of hydraulic lime in concrete, using pebbles and
powdered brick as aggregate. Portland cement was first used in concrete in the early
1840s.

Recently, the use of recycled materials as concrete ingredients has been gaining
popularity because of increasingly stringent environmental legislation. The most conspicuous of
these is fly ash, a by-product of coal-fired power plants. This has a significant impact by
reducing the amount of quarrying and landfill space required, and, as it acts as a cement
replacement, reduces the amount of cement required to produce a solid concrete. As
cement production creates massive quantities of carbon dioxide, cement-replacement
technology such as this will play an important role in future attempts to cut carbon
dioxide emissions.

Concrete additives have been used since Roman and Egyptian times, when it was
discovered that adding volcanic ash to the mix allowed it to set under water. Similarly,
the Romans knew that adding horse hair made concrete less liable to crack while it
hardened, and adding blood made it more frost-resistant[3].
In modern times, researchers have experimented with the addition of other materials to
create concrete with improved properties, such as higher strength or electrical
conductivity.

[edit] Composition

A shovel being used to mix cement with sand.

There are many types of concrete available, created by varying the proportions of the
main ingredients below.

The mix design depends on the type of structure being built, how the concrete will be
mixed and delivered, and how it will be placed to form this structure.

[edit] Cement

Portland cement is the most common type of cement in general usage. It is a basic
ingredient of concrete, mortar, and plaster. English engineer Joseph Aspdin patented
Portland cement in 1824; it was named because of its similarity in colour to Portland
limestone, quarried from the English Isle of Portland and used extensively in London
architecture. It consists of a mixture of oxides of calcium, silicon and aluminium.
Portland cement and similar materials are made by heating limestone (a source of
calcium) with clay, and grinding this product (called clinker) with a source of sulfate
(most commonly gypsum).

[edit] Water

Combining water with a cementitious material forms a cement paste by the process of
hydration. The cement paste glues the aggregate together, fills voids within it, and allows
it to flow more easily.

Less water in the cement paste will yield a stronger, more durable concrete; more water
will give an easier-flowing concrete with a higher slump.[4]
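The ratio in question is simply the mass of water divided by the mass of cementitious material. A minimal sketch, with batch masses invented purely for illustration:

```python
# Water-cement (w/cm) ratio from batch masses (illustrative values, not a real mix design).
water_kg = 160.0    # mass of mixing water (assumed)
cement_kg = 400.0   # mass of cementitious material (assumed)

w_cm = water_kg / cement_kg
print(w_cm)  # 0.4 -- lower ratios give stronger, more durable but less workable concrete
```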

Impure water used to make concrete can cause problems when setting or can cause
premature failure of the structure.
Hydration involves many different reactions, often occurring at the same time. As the
reactions proceed, the products of the cement hydration process gradually bond together
the individual sand and gravel particles, and other components of the concrete, to form a
solid mass.

[edit] Aggregates

Fine and coarse aggregates make up the bulk of a concrete mixture. Sand, natural gravel
and crushed stone are mainly used for this purpose. Recycled aggregates (from
construction, demolition and excavation waste) are increasingly used as partial
replacements for natural aggregates, while a number of manufactured aggregates,
including air-cooled blast furnace slag and bottom ash, are also permitted.

Decorative stones such as quartzite, small river stones or crushed glass are sometimes
added to the surface of concrete for a decorative "exposed aggregate" finish, popular
among landscape designers.

[edit] Reinforcement

Installing rebar in a floor slab during a concrete pour

Concrete is strong in compression, as the aggregate efficiently carries the compression
load. However, it is weak in tension as the cement holding the aggregate in place can
crack, allowing the structure to fail. Reinforced concrete solves these problems by adding
metal reinforcing bars, glass fiber, or plastic fiber to carry tensile loads.

[edit] Chemical admixtures

Chemical admixtures are materials in the form of powder or fluids that are added to the
concrete to give it certain characteristics not obtainable with plain concrete mixes. In
normal use, admixture dosages are less than 5% by mass of cement, and are added to the
concrete at the time of batching/mixing.[5] The most common types of admixtures [6] are:

• Accelerators speed up the hydration (hardening) of the concrete. Typical materials
used are CaCl2 and NaCl.
• Retarders slow the hydration of concrete, and are used in large or difficult pours
where partial setting before the pour is complete is undesirable. A typical retarder
is sugar (C6H12O6).
• Air entrainers add and distribute tiny air bubbles in the concrete, which reduce
damage during freeze-thaw cycles, thereby increasing the concrete's durability.
However, entrained air is a trade-off with strength, as each 1% of air may result
in a 5% decrease in compressive strength.
• Plasticizers (water-reducing admixtures) increase the workability of plastic or
"fresh" concrete, allowing it to be placed more easily, with less consolidating effort.
Superplasticizers (high-range water-reducing admixtures) are a class of
plasticizers which have fewer deleterious effects when used to significantly
increase workability. Alternatively, plasticizers can be used to reduce the water
content of a concrete (and have been called water reducers due to this
application) while maintaining workability. This improves its strength and
durability characteristics.
• Pigments can be used to change the color of concrete, for aesthetics.
• Corrosion inhibitors are used to minimize the corrosion of steel and steel bars in
concrete.
• Bonding agents are used to create a bond between old and new concrete.
• Pumping aids improve pumpability, thicken the paste, and reduce dewatering –
the tendency for the water to separate out of the paste.
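The strength trade-off noted for air entrainment above can be sketched numerically. Treating "each 1% of air may result in a 5% decrease in compressive strength" as a linear rule of thumb (an approximation for illustration, not a design formula):

```python
# Linear rule of thumb from the text: each 1% entrained air ~ 5% loss of compressive strength.
def strength_with_air(base_strength_mpa: float, air_percent: float) -> float:
    """Estimate compressive strength after entraining a given percentage of air."""
    return base_strength_mpa * (1 - 0.05 * air_percent)

# A mix with a 30 MPa base strength and 4% entrained air (values assumed):
print(round(strength_with_air(30.0, 4.0), 1))  # 24.0 MPa, i.e. a 20% reduction
```

The example shows why air content is specified carefully: the durability gain in freeze-thaw climates comes at a measurable strength cost.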

[edit] Mineral admixtures and blended cements

There are inorganic materials that also have pozzolanic or latent hydraulic properties.
These very fine-grained materials are added to the concrete mix to improve the properties
of concrete (mineral admixtures),[5] or as a replacement for Portland cement (blended
cements).[7]

• Fly ash: A by-product of coal-fired electric generating plants, it is used to partially
replace Portland cement (by up to 60% by mass). The properties of fly ash depend
on the type of coal burnt. In general, silicious fly ash is pozzolanic, while
calcareous fly ash has latent hydraulic properties.[8]
• Ground granulated blast furnace slag (GGBFS or GGBS): A by-product of steel
production, it is used to partially replace Portland cement (by up to 80% by mass). It
has latent hydraulic properties.[9]
• Silica fume: A by-product of the production of silicon and ferrosilicon alloys.
Silica fume is similar to fly ash, but has a particle size 100 times smaller. This
results in a higher surface to volume ratio and a much faster pozzolanic reaction.
Silica fume is used to increase strength and durability of concrete, but generally
requires the use of superplasticizers for workability.[10]
• High Reactivity Metakaolin (HRM): Metakaolin produces concrete with strength
and durability similar to concrete made with silica fume. While silica fume is
usually dark gray or black in color, high reactivity metakaolin is usually bright
white in color, making it the preferred choice for architectural concrete where
appearance is important.

[edit] Concrete Production
The processes used vary dramatically, from hand tools to heavy industry, but result in the
concrete being placed where it cures into a final form.

When initially mixed together, Portland cement and water rapidly form a gel, formed of
tangled chains of interlocking crystals. These continue to react over time, with the
initially fluid gel often aiding in placement by improving workability. As the concrete
sets, the chains of crystals join up, and form a rigid structure, gluing the aggregate
particles in place. During curing, more of the cement reacts with the residual water
(Hydration).

This curing process develops the physical and chemical properties of the concrete,
among them mechanical strength, low moisture permeability, and chemical and
volumetric stability.

[edit] Mixing Concrete

Cement being mixed with sand and water to form concrete

Thorough mixing is essential for the production of uniform, high quality concrete.
Therefore, equipment and methods should be capable of effectively mixing concrete
materials containing the largest specified aggregate to produce uniform mixtures of the
lowest slump practical for the work. Separate paste mixing has shown that the mixing of
cement and water into a paste before combining these materials with aggregates can
increase the compressive strength of the resulting concrete.[11] The paste is generally
mixed in a high-speed, shear-type mixer at a w/cm (water to cement ratio) of 0.30 to 0.45
by mass. The cement paste premix may include admixtures, e.g. accelerators or retarders,
plasticizers, pigments, or fumed silica. The latter is added to fill the gaps between the
cement particles. This reduces the particle distance and leads to a higher final
compressive strength and a higher water impermeability.[12] The premixed paste is then
blended with aggregates and any remaining batch water, and final mixing is completed in
conventional concrete mixing equipment.[13]

High-Energy Mixed Concrete (HEM concrete) is produced by high-speed mixing of
cement, water and sand with a net specific energy consumption of at least 5 kilojoules
per kilogram of the mix. A plasticizer admixture is then added, and the mix is blended
with aggregates in a conventional concrete mixer. This paste can be used by itself or
foamed (expanded) for lightweight concrete.[14] Sand effectively dissipates energy in this
mixing process. HEM concrete hardens quickly in ordinary and low-temperature
conditions, and possesses an increased volume of gel, drastically reducing capillarity in
solid and porous materials. It is recommended for precast concrete, in order to reduce
the quantity of cement, as well as for concrete roof and siding tiles, paving stones and
lightweight concrete block production.

[edit] Workability

Pouring a concrete floor for a commercial building, (slab-on-grade)

Workability is the ability of a fresh (plastic) concrete mix to fill the form/mold properly
with the desired work (vibration) and without reducing the concrete's quality.
Workability depends on water content, aggregate (shape and size distribution),
cementitious content and age (level of hydration), and can be modified by adding
chemical admixtures. Raising the water content or adding chemical admixtures will
increase concrete workability. Excessive water will lead to increased bleeding (surface
water) and/or segregation of aggregates (when the cement and aggregates start to
separate), with the resulting concrete having reduced quality. The use of an aggregate
with an undesirable gradation can result in a very harsh mix design with a very low
slump, which cannot be readily made more workable by addition of reasonable amounts
of water.

Workability can be measured by the concrete slump test, a simple measure of the
plasticity of a fresh batch of concrete following the ASTM C 143 or EN 12350-2 test
standards. Slump is normally measured by filling an "Abrams cone" with a sample from a
fresh batch of concrete. The cone is placed with the wide end down onto a level, non-
absorptive surface. It is then filled in three layers of equal volume, with each layer being
tamped with a steel rod in order to consolidate the layer. When the cone is carefully lifted
off, the enclosed material will slump a certain amount due to gravity. A relatively dry
sample will slump very little, having a slump value of one or two inches (25 or 50 mm).
A relatively wet concrete sample may slump as much as six or seven inches (150 to 175
mm).

Slump can be increased by adding chemical admixtures such as mid-range or high-range
water-reducing agents (super-plasticizers) without changing the water/cement ratio. It is
bad practice to add excessive water upon delivery to the jobsite; however, in a properly
designed mixture it is important to reasonably achieve the specified slump prior to
placement, as design factors such as air content and internal water for hydration/strength
gain are dependent on placement at design slump values.

High-flow concrete, like self-consolidating concrete, is tested by other flow-measuring
methods. One of these methods includes placing the cone on the narrow end and
observing how the mix flows through the cone while it is gradually lifted.

[edit] Curing

A concrete slab ponded while curing
Concrete columns curing while wrapped in plastic

In all but the least critical applications, care needs to be taken to properly cure concrete
and achieve the best strength and hardness. Curing begins after the concrete has been placed.
Cement requires a moist, controlled environment to gain strength and harden fully. The
cement paste hardens over time, initially setting and becoming rigid though very weak,
and gaining in strength in the days and weeks following. In around 3 weeks, over 90% of
the final strength is typically reached though it may continue to strengthen for decades.[15]
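The strength-gain timeline described above is often approximated with an empirical relation such as the ACI 209 expression for moist-cured concrete, f(t) = f28 · t / (4 + 0.85 t), where t is the age in days and f28 is the 28-day strength. The constants 4 and 0.85 are the commonly quoted ones for Type I cement and should be treated as an illustration, not a design value:

```python
# Empirical strength-gain curve (ACI 209 form for moist-cured Type I cement, constants assumed).
def fraction_of_28_day_strength(t_days: float) -> float:
    """Fraction of the 28-day compressive strength reached after t days."""
    return t_days / (4 + 0.85 * t_days)

for t in (3, 7, 21, 28):
    print(t, round(fraction_of_28_day_strength(t), 2))
# At 21 days (about 3 weeks) the mix is already at roughly 96% of its 28-day
# strength, consistent with the "over 90%" figure in the text.
```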

Hydration and hardening of concrete during the first three days is critical. Abnormally
fast drying and shrinkage due to factors such as evaporation from wind during placement
may lead to increased tensile stresses at a time when it has not yet gained significant
strength, resulting in greater shrinkage cracking. The early strength of the concrete can be
increased by keeping it damp for a longer period during the curing process. Minimizing
stress prior to curing minimizes cracking. High early-strength concrete is designed to
hydrate faster, often by increased use of cement which increases shrinkage and cracking.

During this period, concrete needs to be kept at a controlled temperature and in a humid
atmosphere. In practice, this is achieved by spraying or ponding the concrete surface
with water, thereby protecting the concrete mass from the ill effects of ambient
conditions. The pictures to the right show two of many ways to achieve this: ponding
(submerging setting concrete in water) and wrapping in plastic to contain the water in
the mix.

Properly curing concrete leads to increased strength and lower permeability, and avoids
cracking where the surface dries out prematurely. Care must also be taken to avoid
freezing, or overheating due to the exothermic setting of cement (the Hoover Dam used
pipes carrying coolant during setting to avoid damaging overheating). Improper curing
can cause scaling, reduced strength, poor abrasion resistance and cracking.

[edit] Properties
[edit] Strength

Concrete has relatively high compressive strength, but significantly lower tensile
strength. It is fair to assume that a concrete sample's tensile strength is about 10%-15% of
its compressive strength.[16] As a result, without compensating, concrete would almost
always fail from tensile stresses – even when loaded in compression. The practical
implication of this is that concrete elements subjected to tensile stresses must be
reinforced with materials that are strong in tension.
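The 10%-15% relationship above gives a quick estimate of tensile capacity. This is a rule of thumb from the text, not a code equation, and the 40 MPa compressive strength used below is an assumed example value:

```python
# Estimate tensile strength as 10-15% of compressive strength (rule of thumb from the text).
def tensile_range_mpa(compressive_mpa: float) -> tuple[float, float]:
    """Return the (low, high) estimate of tensile strength in MPa."""
    return (0.10 * compressive_mpa, 0.15 * compressive_mpa)

low, high = tensile_range_mpa(40.0)   # a typical 40 MPa mix (value assumed)
print(round(low, 1), round(high, 1))  # 4.0 6.0 -- hence the need for tensile reinforcement
```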

Reinforced concrete is the most common form of concrete. The reinforcement is often
steel rebar (mesh, spiral, bars and other forms). Structural fibers of various materials are
also used.

Concrete can also be prestressed (reducing tensile stress) using internal steel cables
(tendons), allowing for beams or slabs with a longer span than is practical with reinforced
concrete alone. Inspection of concrete structures can be non-destructive if carried out
with equipment such as a Schmidt hammer, which is used to estimate concrete strength.

The ultimate strength of concrete is influenced by the water-cementitious ratio (w/cm),
the design constituents, and the mixing, placement and curing methods employed. All
things being equal, concrete with a lower water-cement (cementitious) ratio makes a
stronger concrete than that with a higher ratio. The total quantity of cementitious
materials (Portland cement, slag cement, pozzolans) can affect strength, water demand,
shrinkage, abrasion resistance and density. All concrete will crack, independent of
whether or not it has sufficient compressive strength. In fact, high Portland cement
content mixtures can actually crack more readily due to their increased hydration rate. As
concrete transforms from its plastic state, hydrating to a solid, the material undergoes
shrinkage. Plastic shrinkage cracks can occur soon after placement, but if the evaporation
rate is high they can occur even during finishing operations, for example in hot
weather or on a breezy day. In very high strength concrete mixtures (greater than 10,000 psi)
the crushing strength of the aggregate can be a limiting factor to the ultimate compressive
strength. In lean concretes (with a high water-cement ratio) the crushing strength of the
aggregates is not so significant.

The internal forces in common shapes of structure, such as arches, vaults, columns and
walls, are predominantly compressive, while floors and pavements are subjected to
tensile forces. Compressive strength is widely used for specification requirements and
quality control of concrete. Engineers know their target tensile (flexural) requirements
and express these in terms of compressive strength.

Wired.com reported on April 13, 2007 that a team from the University of Tehran,
competing in a contest sponsored by the American Concrete Institute, demonstrated
several blocks of concrete with abnormally high compressive strengths between 50,000
and 60,000 PSI at 28 days.[17] The blocks appeared to use an aggregate of steel fibres and
quartz – a mineral with a compressive strength of 160,000 PSI, much higher than typical
high-strength aggregates such as granite (15,000-20,000 PSI).

[edit] Elasticity

The modulus of elasticity of concrete is a function of the modulus of elasticity of the
aggregates and the cement matrix and their relative proportions. The modulus of
elasticity of concrete is relatively constant at low stress levels but starts decreasing at
higher stress levels as matrix cracking develops. The elastic modulus of the hardened
paste may be on the order of 10-30 GPa, while that of aggregates is about 45 to 85 GPa.
The concrete composite is then in the range of 30 to 50 GPa.

The American Concrete Institute allows the modulus of elasticity to be calculated using
the following equation:[16]

Ec = 33 wc^1.5 √(f'c)   (psi)

where

wc = weight of concrete (pounds per cubic foot) and
f'c = compressive strength of concrete at 28 days (psi)

This equation is completely empirical and is not based on theory. Note that the value of
Ec found is in units of psi. For normalweight concrete (defined as concrete with a wc of
150 pcf) Ec is permitted to be taken as 57,000 √(f'c).
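The ACI expression, Ec = 33 wc^1.5 √(f'c), and its normalweight shortcut, Ec = 57,000 √(f'c), can be evaluated with a few lines of code (a sketch; the 4,000 psi strength and 150 pcf weight are illustrative values, not from the text):

```python
import math

def aci_modulus(wc_pcf, fc_psi):
    """ACI empirical modulus of elasticity: Ec = 33 * wc^1.5 * sqrt(f'c), in psi."""
    return 33.0 * wc_pcf ** 1.5 * math.sqrt(fc_psi)

def aci_modulus_normalweight(fc_psi):
    """Normalweight-concrete shortcut: Ec = 57,000 * sqrt(f'c), in psi."""
    return 57000.0 * math.sqrt(fc_psi)

# Hypothetical 150 pcf concrete with 4,000 psi compressive strength:
print(aci_modulus(150, 4000))          # ~3.8 million psi
print(aci_modulus_normalweight(4000))  # ~3.6 million psi
```

Note the two forms differ by a few percent at 150 pcf, consistent with the shortcut being a rounded approximation.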

[edit] Expansion and shrinkage

Concrete has a very low coefficient of thermal expansion. However, if no provision is
made for expansion, very large forces can be created, causing cracks in parts of the
structure not capable of withstanding the force or the repeated cycles of expansion and
contraction.

As concrete matures it continues to shrink, due to the ongoing reaction taking place in the
material, although the rate of shrinkage falls relatively quickly and keeps reducing over
time (for all practical purposes concrete is usually considered to not shrink due to
hydration any further after 30 years). The relative shrinkage and expansion of concrete
and brickwork require careful accommodation when the two forms of construction
interface.

Because concrete is continuously shrinking for years after it is initially placed, it is
generally accepted that under thermal loading it will never expand to its originally placed
volume.

[edit] Cracking

Salginatobel Bridge

All concrete structures will crack to some extent. One of the early designers of reinforced
concrete, Robert Maillart, employed reinforced concrete in a number of arched bridges.
His first bridge was simple, using a large volume of concrete. He then realized that much
of the concrete was very cracked, and could not be a part of the structure under
compressive loads, yet the structure clearly worked. His later designs simply removed the
cracked areas, leaving slender, beautiful concrete arches. The Salginatobel Bridge is an
example of this.

Concrete cracks due to tensile stress induced by shrinkage or stresses occurring during
setting or use. Various means are used to overcome this. Fiber reinforced concrete uses
fine fibers distributed throughout the mix or larger metal or other reinforcement elements
to limit the size and extent of cracks. In many large structures joints or concealed saw-
cuts are placed in the concrete as it sets to make the inevitable cracks occur where they
can be managed and out of sight. Water tanks and highways are examples of structures
requiring crack control.

[edit] Shrinkage cracking

Shrinkage cracks occur when concrete members undergo restrained volumetric changes
(shrinkage) as a result of either drying, autogenous shrinkage or thermal effects. Restraint
is provided either externally (i.e. supports, walls, and other boundary conditions) or
internally (differential drying shrinkage, reinforcement). Once the tensile strength of the
concrete is exceeded, a crack will develop. The number and width of shrinkage cracks
that develop are influenced by the amount of shrinkage that occurs, the amount of
restraint present and the amount and spacing of reinforcement provided.
Plastic-shrinkage cracks are immediately apparent, visible within 0 to 2 days of
placement, while drying-shrinkage cracks develop over time.

[edit] Tension cracking

Concrete members may be put into tension by applied loads. This is most common in
concrete beams where a transversely applied load will put one surface into compression
and the opposite surface into tension due to induced bending. The portion of the beam
that is in tension may crack. The size and length of cracks is dependent on the magnitude
of the bending moment and the design of the reinforcing in the beam at the point under
consideration. Reinforced concrete beams are designed to crack in tension rather than in
compression. This is achieved by providing reinforcing steel which yields before failure
of the concrete in compression occurs and allowing remediation, repair, or if necessary,
evacuation of an unsafe area.

[edit] Creep

Because it is a fluid, concrete can be pumped to where it is needed. Here a concrete
transport truck is feeding concrete to a concrete pumper, which is pumping it to where a
slab is being poured.

Creep is the term used to describe the permanent movement or deformation of a material
in order to relieve stresses within the material. Concrete which is subjected to long-
duration forces is prone to creep. Short-duration forces (such as wind or earthquakes) do
not cause creep. Creep can sometimes reduce the amount of cracking that occurs in a
concrete structure or element, but it also must be controlled. The amount of primary and
secondary reinforcing in concrete structures contributes to a reduction in the amount of
shrinkage, creep and cracking.

[edit] Physical properties

The coefficient of thermal expansion of Portland cement concrete is 0.000008 to
0.000012 per degree Celsius (8-12 × 10⁻⁶/K).[18] The density varies, but is around 150
pounds per cubic foot (2,400 kg/m³).[19]
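The quoted coefficient makes it easy to estimate thermal movement via the standard relation ΔL = α·L·ΔT (a sketch; the slab length and temperature swing below are hypothetical illustrative values):

```python
# Free (unrestrained) thermal expansion of a concrete member: dL = alpha * L * dT.
alpha = 10e-6        # per deg C, mid-range of the 8-12 x 10^-6/K quoted above
length_m = 30.0      # hypothetical slab length, metres
delta_t = 40.0       # hypothetical seasonal temperature swing, deg C

expansion_mm = alpha * length_m * delta_t * 1000.0
print(f"Unrestrained movement: {expansion_mm:.1f} mm")  # → 12.0 mm
```

If this movement is restrained, the resulting forces can be very large, which is why expansion provision matters, as the section above notes.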

[edit] Damage modes
[edit] Fire

Due to its low thermal conductivity, a layer of concrete is frequently used for fireproofing
of steel structures. However, concrete itself may be damaged by fire.
Up to about 300 °C, the concrete undergoes normal thermal expansion. Above that
temperature, shrinkage occurs due to water loss; however, the aggregate continues
expanding, which causes internal stresses. Up to about 500 °C, the major structural
changes are carbonation and coarsening of pores. At 573 °C, quartz undergoes rapid
expansion due to a phase transition, and at 900 °C calcite starts shrinking due to
decomposition. At 450-550 °C the cement hydrate decomposes, yielding calcium oxide.
Calcium carbonate decomposes at about 600 °C. Rehydration of the calcium oxide on
cooling of the structure causes expansion, which can damage material that
withstood the fire without falling apart. Concrete in buildings that experienced a fire and
were left standing for several years shows an extensive degree of carbonation.

Concrete exposed to temperatures up to about 100 °C is normally considered healthy.
Parts of a concrete structure exposed to temperatures above approximately 300 °C
(dependent on the water/cement ratio) will most likely turn pink. Over approximately
600 °C the concrete will turn light grey, and over approximately 1000 °C it turns yellow-
brown.[20] One rule of thumb is to consider all pink-colored concrete as damaged and in
need of removal.

Fire also exposes the concrete to gases and liquids that can be harmful, among them
salts and acids formed when gases produced by the fire come into contact with water.

[edit] Aggregate expansion

Various types of aggregate undergo chemical reactions in concrete, leading to damaging
expansive phenomena. The most common are those containing reactive silica, which can
react (in the presence of water) with the alkalis in concrete (K2O and Na2O, coming
principally from cement). Among the more reactive mineral components of some
aggregates are opal, chalcedony, flint and strained quartz. Following the reaction (alkali-
silica reaction, or ASR), an expansive gel forms that creates extensive cracks and
damage in structural members. On the surface of concrete pavements, ASR can cause
pop-outs, i.e. the expulsion of small cones (up to about 3 cm in diameter) over
individual aggregate particles. When some aggregates containing dolomite are
used, a dedolomitization reaction occurs where the magnesium carbonate compound
reacts with hydroxyl ions and yields magnesium hydroxide and a carbonate ion. The
resulting expansion may cause destruction of the material. Far less common are pop-outs
caused by the presence of pyrite, an iron sulfide that generates expansion by forming iron
oxide and ettringite. Other reactions and recrystallizations, e.g. hydration of clay minerals
in some aggregates, may lead to destructive expansion as well.

[edit] Sea water effects

Concrete exposed to sea water is susceptible to its corrosive effects. The effects are more
pronounced above the tidal zone than where the concrete is permanently submerged. In
the submerged zone, magnesium and hydrogen carbonate ions precipitate a layer of
brucite, about 30 micrometers thick, on which a slower deposition of calcium carbonate
as aragonite occurs. These layers somewhat protect the concrete from other processes,
which include attack by magnesium, chloride and sulfate ions and carbonation. Above the
water surface, mechanical damage may occur by erosion by waves themselves or sand
and gravel they carry, and by crystallization of salts from water soaking into the concrete
pores and then drying out. Pozzolanic cements and cements containing more than 60%
slag are more resistant to sea water than pure Portland cement.

[edit] Bacterial corrosion

Bacteria themselves do not have a noticeable effect on concrete. However, anaerobic
bacteria in untreated sewage tend to produce hydrogen sulfide, which is then oxidized by
aerobic bacteria (such as Thiobacillus) present in the biofilm on the concrete surface
above the water level to sulfuric acid, which dissolves the carbonates in the cured cement
and causes strength loss. Concrete floors lying on ground that contains pyrite are also at
risk. Using limestone as the aggregate makes the concrete more resistant to acids, and the
sewage may be pretreated in ways that increase pH, or by oxidizing or precipitating the
sulfides, in order to inhibit the activity of sulfide-utilizing bacteria.

[edit] Chemical damage

[edit] Carbonation

Carbonation-initiated deterioration of concrete (at Hippodrome Wellington)

Carbon dioxide from air can react with the calcium hydroxide in concrete to form
calcium carbonate. This process is called carbonation, which is essentially the reversal of
the chemical process of calcination of lime taking place in a cement kiln. Carbonation of
concrete is a slow and continuous process progressing from the outer surface inward, but
slows down with increasing diffusion depth. Carbonation has two effects: it increases
mechanical strength of concrete, but it also decreases alkalinity, which is essential for
corrosion prevention of the reinforcement steel. Below a pH of 10, the steel's thin layer of
surface passivation dissolves and corrosion is promoted. For the latter reason,
carbonation is an unwanted process in concrete chemistry. Carbonation can be tested by
applying phenolphthalein solution, a pH indicator, to a fresh fracture surface: non-
carbonated (and thus still alkaline) areas turn violet.

[edit] Chlorides
Chlorides, particularly calcium chloride, have been used to shorten the setting time of
concrete.[21] However, calcium chloride and (to a lesser extent) sodium chloride have
been shown to leach calcium hydroxide and cause chemical changes in Portland cement,
leading to loss of strength,[22] as well as attacking the steel reinforcement present in most
concrete.

[edit] Sulphates

Sulphates in solution in contact with concrete can cause chemical changes to the cement,
which can cause significant microstructural effects leading to the weakening of the
cement binder.

[edit] Leaching

Leaching occurs when water flows through cracks or pores in concrete, dissolving
calcium hydroxide from the cement paste; the dissolved material can redeposit as
calcium carbonate, a chemical process that can partially self-heal fine cracks.

[edit] Physical damage

Damage can occur during the casting and de-shuttering processes. The corners of beams
for instance, can be damaged during the removal of shuttering because they are less
effectively compacted by means of vibration (improved by using form-vibrators). Other
physical damage can be caused by the use of steel shuttering without base plates. The
steel shuttering pinches the top surface of a concrete slab due to the weight of the next
slab being constructed.

[edit] Types of concrete

A highway paved with concrete.
Regular concrete paving blocks

Concrete in sidewalk stamped with contractor name and date it was laid

[edit] Mix Design

Modern concrete mix designs can be complex. The design of a concrete mix (the way the
weights of its components are determined) is specified by the American Concrete
Institute, the specifications of the project, and the building code where the project is
located.

The design begins by determining the "durability" requirements of the concrete. These
requirements take into consideration the weather conditions (such as freeze-thaw) that
the concrete will be exposed to in service, and the required design strength, f'c, at 28
days after placement. The compressive strength of a concrete, f'c, is determined by
testing properly molded, standard-cured, 4"x8" or 6"x12" cylinder samples.

Many factors need to be taken into account, from the cost of the various additives and
aggregates to the trade-offs between the "slump" needed for easy mixing and placement
and the ultimate performance. These factors are also specified by the American Concrete
Institute, project specifications, and the local building code where the project is located.

A mix is then designed using cement (Portland or other cementitious material), coarse
and fine aggregates, water and chemical admixtures. The method of mixing will also be
specified, as well as conditions that it may be used in.

This allows a user of the concrete to be confident that the structure will perform properly.

Various types of concrete have been developed for specialist application and have
become known by these names.

[edit] Regular concrete

Regular concrete is the lay term describing concrete that is produced by following the
mixing instructions that are commonly published on packets of cement, typically using
sand or other common material as the aggregate, and often mixed in improvised
containers. This concrete can be produced to yield a varying strength from about 10 MPa
(1450 psi) to about 40 MPa (5800 psi), depending on the purpose, ranging from blinding
to structural concrete respectively. Many types of pre-mixed concrete are available which
include powdered cement mixed with an aggregate, needing only water.

Typically, a batch of concrete can be made by using 1 part Portland cement, 2 parts dry
sand, 3 parts dry stone, 1/2 part water. The parts are in terms of weight – not volume. For
example, 1-cubic-foot (0.028 m3) of concrete would be made using 22 lb (10.0 kg)
cement, 10 lb (4.5 kg) water, 41 lb (19 kg) dry sand, 70 lb (32 kg) dry stone (1/2" to 3/4"
stone). This would make 1-cubic-foot (0.028 m3) of concrete and would weigh about
143 lb (65 kg). The sand should be mortar or brick sand (washed and filtered if possible)
and the stone should be washed if possible. Organic materials (leaves, twigs, etc.) should
be removed from the sand and stone to ensure the highest strength.
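The batch above can be double-checked with a short script (a sketch based on the weights quoted in the text; the water-cement ratio it derives is not stated in the original):

```python
# Batch weights (lb) for ~1 cubic foot of concrete, as quoted above.
batch = {"cement": 22, "water": 10, "sand": 41, "stone": 70}

total_lb = sum(batch.values())
w_c_ratio = batch["water"] / batch["cement"]

print(f"Total batch weight: {total_lb} lb")    # → 143 lb, matching the text
print(f"Water-cement ratio: {w_c_ratio:.2f}")  # → 0.45
```

The derived w/c ratio of about 0.45 is consistent with the ordinary structural range; the high-strength section below uses 0.35 or lower.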

[edit] High-strength concrete

High-strength concrete has a compressive strength generally greater than 6,000 pounds
per square inch (about 40 MPa). High-strength concrete is made by lowering the
water-cement (W/C) ratio to 0.35 or lower. Often silica fume is added to prevent the
formation of free calcium hydroxide crystals in the cement matrix, which might reduce
the strength at the cement-aggregate bond.

Low W/C ratios and the use of silica fume make concrete mixes significantly less
workable, which is particularly likely to be a problem in high-strength concrete
applications where dense rebar cages are likely to be used. To compensate for the
reduced workability, superplasticizers are commonly added to high-strength mixtures.
Aggregate must be selected carefully for high-strength mixes, as weaker aggregates may
not be strong enough to resist the loads imposed on the concrete and cause failure to start
in the aggregate rather than in the matrix or at a void, as normally occurs in regular
concrete.

In some applications of high-strength concrete the design criterion is the elastic modulus
rather than the ultimate compressive strength.

[edit] High-performance concrete

High-performance concrete (HPC) is a relatively new term used to describe concrete that
conforms to a set of standards above those of the most common applications, but not
limited to strength. While all high-strength concrete is also high-performance, not all
high-performance concrete is high-strength. Some examples of such standards currently
used in relation to HPC are:

• Ease of placement
• Compaction without segregation
• Early age strength
• Long-term mechanical properties
• Permeability
• Density
• Heat of hydration
• Toughness
• Volume stability
• Long life in severe environments

[edit] Self-consolidating concretes

During the 1980s a number of countries, including Japan, Sweden and France, developed
concretes that are self-compacting, known as self-consolidating concrete (SCC) in the
United States. SCC is characterized by:

• extreme fluidity, as measured by flow (typically 650-750 mm on a flow table) rather
than by slump (height)
• no need for vibrators to compact the concrete
• easier placement
• no bleed water or aggregate segregation
• increased liquid head pressure, which can be detrimental to safety and workmanship

SCC can save up to 50% in labor costs due to 80% faster pouring and reduced wear and
tear on formwork.

As of 2005, self-consolidating concretes account for 10-15% of concrete sales in some
European countries. In the US precast concrete industry, SCC represents over 75% of
concrete production. 38 departments of transportation in the US accept the use of SCC
for road and bridge projects.

This emerging technology is made possible by the use of polycarboxylate plasticizers
instead of older naphthalene-based polymers, and viscosity modifiers to address
aggregate segregation.

[edit] Shotcrete

Main article: Shotcrete

Shotcrete (also known by the trade name Gunite) uses compressed air to shoot concrete
onto (or into) a frame or structure. Shotcrete is frequently used against vertical soil or
rock surfaces, as it eliminates the need for formwork. It is sometimes used for rock
support, especially in tunneling. Shotcrete is also used where seepage is an issue, to limit
the amount of water entering a construction site due to a high water table or other
subterranean sources, and as a quick fix to protect loose soil from weathering in
construction zones.

There are two application methods for shotcrete.
• dry-mix – the dry mixture of cement and aggregates is filled into the machine and
conveyed with compressed air through the hoses. The water needed for the
hydration is added at the nozzle.
• wet-mix – the mixes are prepared with all necessary water for hydration. The
mixes are pumped through the hoses. At the nozzle compressed air is added for
spraying.

For both methods additives such as accelerators and fiber reinforcement may be used.[23]

[edit] Pervious concrete

Pervious concrete contains a network of holes or voids, to allow air or water to move
through the concrete.

This allows water to drain naturally through it, and can both remove the normal surface-
water drainage infrastructure, and allow replenishment of groundwater when
conventional concrete does not.

It is formed by leaving out some or all of the fine aggregate (fines); the remaining large
aggregate is then bound by a relatively small amount of Portland cement. When set,
typically between 15% and 25% of the concrete volume is voids, allowing water to drain
through the concrete at around 5 gal/ft²/min (200 L/m²/min).
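The quoted drainage rate can be sanity-checked by unit conversion (a sketch using standard conversion factors):

```python
# Convert the quoted pervious-concrete drainage rate of 5 gal/ft^2/min
# to L/m^2/min using standard conversion factors.
GALLON_L = 3.78541   # litres per US gallon
SQFT_M2 = 0.092903   # square metres per square foot

rate_gal_ft2 = 5.0
rate_l_m2 = rate_gal_ft2 * GALLON_L / SQFT_M2
print(f"{rate_l_m2:.0f} L/m^2/min")  # → 204, consistent with the ~200 quoted
```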

[edit] Installation

Pervious concrete is installed by being poured into forms, then screeded off to level (not
smooth) the surface, then packed or tamped into place. Due to the low water content and
air permeability, within 5-15 minutes of tamping the concrete must be covered with
6-mil poly plastic sheeting, or it will dry out prematurely and not properly hydrate and cure.

[edit] Characteristics

Pervious concrete can significantly reduce noise by allowing air squeezed between
vehicle tires and the roadway to escape. It cannot yet be used on major state highways,
however, due to the high psi ratings required by most U.S. states; pervious concrete has
been tested up to 4,500 psi so far.

[edit] Cellular concrete

Aerated concrete, produced by the addition of an air-entraining agent to the concrete (or
a lightweight aggregate such as expanded clay pellets, cork granules or vermiculite), is
sometimes called cellular concrete.

See also: Aerated autoclaved concrete

[edit] Cork-cement composites
Waste cork granules are obtained during the production of bottle stoppers from the
treated bark of the cork oak.[24] These granules have a density of about 300 kg/m³, lower
than most lightweight aggregates used for making lightweight concrete. Cork granules do
not significantly influence cement hydration, but cork dust may.[25] Cork-cement
composites have several advantages over standard concrete, such as lower thermal
conductivity, lower density and good energy absorption. These composites can be made
with densities from 400 to 1,500 kg/m³, compressive strengths from 1 to 26 MPa, and
flexural strengths from 0.5 to 4.0 MPa.

[edit] Roller-compacted concrete

Roller-compacted concrete, sometimes called rollcrete, is a low-cement-content stiff
concrete placed using techniques borrowed from earthmoving and paving work. The
concrete is placed on the surface to be covered, and is compacted in place using large
heavy rollers typically used in earthwork. The concrete mix achieves a high density and
cures over time into a strong monolithic block.[26] Roller-compacted concrete is typically
used for concrete pavement, but has also been used to build concrete dams, as the low
cement content causes less heat to be generated while curing than typical for
conventionally placed massive concrete pours.

[edit] Glass concrete

The use of recycled glass as aggregate in concrete has become popular in modern times,
with large scale research being carried out at Columbia University in New York. This
greatly enhances the aesthetic appeal of the concrete. Recent research has shown that
concrete made with recycled glass aggregates has better long-term strength and better
thermal insulation, due to the thermal properties of the glass aggregates.[27]

[edit] Asphalt concrete

Strictly speaking, asphalt is a form of concrete as well, with bituminous materials
replacing cement as the binder.

[edit] Rapid strength concrete

This type of concrete is able to develop high strength within a few hours of being
manufactured. This allows formwork to be removed early and construction to proceed
rapidly, and allows repaired road surfaces to become fully operational in just a few
hours.

[edit] Rubberized concrete

While "rubberized asphalt concrete" is common, rubberized Portland cement concrete
("rubberized PCC") is still undergoing experimental tests, as of 2007[28] [29] [30] [31].
[edit] Polymer concrete

Polymer concrete is concrete which uses polymers to bind the aggregate. Polymer
concrete can gain a lot of strength in a short amount of time. For example, a polymer mix
may reach 5000 psi in only four hours. Polymer concrete is generally more expensive
than conventional concretes.

[edit] Geopolymer or green concrete

Geopolymer concrete is a greener alternative to ordinary Portland cement concrete. It is
made from inorganic aluminosilicate (Al-Si) polymer compounds that can utilise 100%
recycled industrial waste (e.g. fly ash and slag) as the manufacturing inputs, resulting in
up to 80% lower carbon dioxide emissions. The manufacturer claims greater chemical
and thermal resistance and better mechanical properties under both atmospheric and
extreme conditions.[32]

Similar concretes were used not only in Ancient Rome (see Roman concrete) but also in
the former Soviet Union in the 1950s and 1960s. Buildings in Ukraine are still standing
after 45 years, so this kind of formulation has a sound track record.[33]

[edit] Limecrete

Limecrete or lime concrete is concrete where cement is replaced by lime.[34]

[edit] Refractory Cement

High-temperature applications, such as masonry ovens and the like, generally require the
use of a refractory cement; concretes based on Portland cement can be damaged or
destroyed by elevated temperatures, but refractory concretes are better able to withstand
such conditions.

[edit] Concrete testing
Compression testing of a concrete cylinder

Same cylinder after failure

Engineers usually specify the required compressive strength of concrete, which is
normally given as the 28 day compressive strength in megapascals (MPa) or pounds per
square inch (psi). Twenty-eight days is a long wait to determine if desired strengths are
going to be obtained, so three-day and seven-day strengths can be useful to predict the
ultimate 28-day compressive strength of the concrete. A 25% strength gain between 7 and
28 days is often observed with 100% OPC (ordinary Portland cement) mixtures, and up
to 40% strength gain can be realized with the inclusion of pozzolans and supplementary
cementitious materials (SCMs) such as fly ash and/or slag cement. As strength gain
depends on the type of mixture, its constituents, the use of standard curing, proper testing
and care of cylinders in transport, etc. it becomes imperative to proactively rely on testing
the fundamental properties of concrete in its fresh, plastic state.
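The strength-gain figures above suggest a simple early projection of 28-day strength (a rough sketch only; as the text notes, actual gain depends on the mixture, curing and testing, and the 3,200 psi break below is a hypothetical value):

```python
def predict_28_day(f7_psi, gain_fraction=0.25):
    """Project 28-day strength from a 7-day cylinder break, assuming the
    ~25% gain quoted above for a 100% OPC mixture (up to ~40% with SCMs
    such as fly ash or slag cement)."""
    return f7_psi * (1.0 + gain_fraction)

print(predict_28_day(3200))        # → 4000.0 psi with the OPC assumption
print(predict_28_day(3200, 0.40))  # → 4480.0 psi with heavy SCM replacement
```

Such a projection is only a screening tool; acceptance still rests on the standard-cured 28-day cylinders described below.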

Concrete is typically sampled while being placed, with testing protocols requiring that
test samples be cured under laboratory conditions (standard cured). Additional samples
may be field cured (non-standard) for the purpose of early 'stripping' strengths, that is,
form removal, evaluation of curing, etc. but the standard cured cylinders comprise
acceptance criteria. Concrete tests can measure the "plastic" (unhydrated) properties of
concrete prior to, and during placement. As these properties affect the hardened
compressive strength and durability of concrete (resistance to freeze-thaw), the properties
of workability (slump/flow), temperature, density and age are monitored to ensure the
production and placement of 'quality' concrete. Tests are performed per
National/Regional methods and practices. The most used methods are ASTM
International, European Committee for Standardization and Canadian Standards
Association. Requirements for technicians performing concrete tests are normally given
in the actual methods. Structural design, material design and properties are often
specified in accordance with national/regional design codes.

Compressive-strength tests are conducted using an instrumented hydraulic ram to
compress a cylindrical or cubic sample to failure. Tensile strength tests are conducted
either by three-point bending of a prismatic beam specimen or by compression along the
sides of a cylindrical specimen.
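For the beam test, flexural strength (modulus of rupture) follows from the standard three-point-bending formula fr = 3PL/(2bd²); the formula and the specimen dimensions below are standard beam mechanics, not values from the text:

```python
def modulus_of_rupture_3pt(load_lbf, span_in, width_in, depth_in):
    """Flexural strength from a three-point bending test on a prismatic
    beam: fr = 3*P*L / (2*b*d^2).  Units: lbf and inches, result in psi.
    The example values below are hypothetical."""
    return 3.0 * load_lbf * span_in / (2.0 * width_in * depth_in ** 2)

# A hypothetical 6 in x 6 in beam on an 18 in span failing at 6,000 lbf:
print(modulus_of_rupture_3pt(6000, 18, 6, 6))  # → 750.0 psi
```

The 750 psi result is roughly 10-15% of a typical compressive strength, consistent with the rule of thumb quoted in the Strength section.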

[edit] Concrete recycling
Main article: Concrete recycling

Concrete recycling is an increasingly common method of disposing of concrete
structures. Concrete debris was once routinely shipped to landfills for disposal, but
recycling is increasing due to improved environmental awareness, governmental laws,
and economic benefits.

Concrete, which must be free of trash, wood, paper and other such materials, is collected
from demolition sites and put through a crushing machine, often along with asphalt,
bricks, and rocks.

Reinforced concrete contains rebar and other metallic reinforcements, which are removed
with magnets and recycled elsewhere. The remaining aggregate chunks are sorted by size.
Larger chunks may go through the crusher again. Smaller pieces of concrete are used as
gravel for new construction projects. Aggregate base gravel is laid down as the lowest
layer in a road, with fresh concrete or asphalt placed over it. Crushed recycled concrete
can sometimes be used as the dry aggregate for brand new concrete if it is free of
contaminants, though the use of recycled concrete limits strength and is not allowed in
many jurisdictions. On March 3, 1983, a government-funded research team (the VIRL
research.codep) approximated that almost 17% of worldwide landfill was by-products of
concrete-based waste.

Recycling concrete provides environmental benefits: it conserves landfill space, and its
use as aggregate reduces the need for gravel mining.

[edit] Use of concrete in structures
The interior of the Pantheon in the 18th century, painted by Giovanni Paolo Panini

The Baths of Caracalla, in 2003

[edit] Mass concrete structures

These include gravity dams such as the Itaipu, Hoover Dam and the Three Gorges Dam
and large breakwaters. Concrete that is poured all at once in one block (so that there are
no weak points where the concrete is "welded" together) is used for tornado shelters.

[edit] Reinforced concrete structures

Main article: Reinforced concrete

Reinforced concrete contains steel reinforcing that is designed and placed in structural
members at specific positions to cater for all the stress conditions that the member is
required to accommodate.

[edit] Prestressed concrete structures

Main article: Prestressed concrete
Prestressed concrete is a form of reinforced concrete in which compressive stresses are
built in during construction to oppose the tensile stresses that arise in use. This can
greatly reduce the weight of beams or slabs, by better distributing the stresses in the
structure to make optimal use of the reinforcement.

For example, a horizontal beam tends to sag under load. If the reinforcement along the
bottom of the beam is prestressed, the compression it induces can counteract this.
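The effect of prestressing on the beam described above can be sketched with simple beam theory. This is an illustrative calculation only, not from the article: the beam dimensions, load, and prestress force below are all assumed example values, and `bottom_fibre_stress` is a hypothetical helper name.

```python
# Illustrative sketch (assumed example values): how axial prestress can keep
# the bottom fibre of a simply supported rectangular beam in compression.

def bottom_fibre_stress(b, h, span, udl, prestress):
    """Return net stress (Pa) at the bottom fibre; positive = compression."""
    area = b * h                       # cross-sectional area (m^2)
    section_modulus = b * h ** 2 / 6   # elastic section modulus (m^3)
    moment = udl * span ** 2 / 8       # midspan moment for a uniform load (N*m)
    bending_tension = moment / section_modulus  # tension at the bottom fibre
    axial_compression = prestress / area        # uniform compression from the tendon
    return axial_compression - bending_tension

# 0.3 m x 0.5 m beam, 6 m span, 10 kN/m load, 1.2 MN prestress (all assumed)
net = bottom_fibre_stress(0.3, 0.5, 6.0, 10e3, 1.2e6)
print(f"net bottom-fibre stress: {net / 1e6:.1f} MPa (compression)")
```

With no prestress the same beam's bottom fibre is in tension, which plain concrete resists poorly; the prestress shifts the whole stress range into compression.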

In pre-tensioned concrete, the prestressing is achieved by using steel or polymer tendons
or bars that are subjected to a tensile force prior to casting, or for post-tensioned concrete,
after casting.

[edit] See also
• Anthropic rock
• Biorock
• Brutalist architecture, encouraging visible concrete surfaces
• Building construction
• Cement
• Cement accelerator
• Concrete moisture meters
   o Geopolymer, a class of synthetic aluminosilicate materials
   o Hempcrete, a mixture with hemp hurds
   o Mudcrete, a soil-cement mixture
   o Papercrete, a paper-cement mixture
   o Portland cement, the classical concrete cement
• Concrete canoe
• Concrete mixer
• Concrete masonry unit
• Concrete recycling
• Concrete step barrier
• Fireproofing
• Formwork
   o Controlled permeability formwork
• LiTraCon
• High performance fiber reinforced cementitious composites
• High Reactivity Metakaolin
• Mortar
• Plasticizer
• Prefabricated
• Pykrete, a composite material of ice and cellulose
• Slab-on-grade foundations
• Types of concrete
   o Asphalt concrete
   o Aerated Autoclaved Concrete
   o Decorative concrete
   o Fibre reinforced concrete
   o Prestressed concrete
   o Precast concrete
   o Ready-mix concrete
   o Reinforced concrete
   o Seacrete
   o Terrazzo concrete
• Bundwall
[edit] References

Cement
From Wikipedia, the free encyclopedia

Jump to: navigation, search
For other uses, see Cement (disambiguation).

In the most general sense of the word, a cement is a binder, a substance which sets and
hardens independently, and can bind other materials together. The word "cement" traces
to the Romans, who used the term "opus caementicium" to describe masonry which
resembled concrete and was made from crushed rock with burnt lime as binder. The
volcanic ash and pulverized brick additives which were added to the burnt lime to obtain
a hydraulic binder were later referred to as cementum, cimentum, cäment and cement.
Cements used in construction are characterized as hydraulic or non-hydraulic.

The most important use of cement is the production of mortar and concrete - the bonding
of natural or artificial aggregates to form a strong building material which is durable in
the face of normal environmental effects.

Cement should not be confused with concrete, as the term cement refers explicitly to the
dry powder substance. Upon the addition of water and aggregates, the mixture is referred
to as concrete.

Contents
[hide]

• 1 Hydraulic vs. non-hydraulic cement
• 2 History
o 2.1 Early uses
o 2.2 Modern cement
• 3 Types of modern cement
o 3.1 Portland cement
o 3.2 Portland cement blends
o 3.3 Non-Portland hydraulic cements
• 4 Environmental and social impacts
o 4.1 Climate
o 4.2 Fuels and raw materials
o 4.3 Local impacts
• 5 Cement business
• 6 See also
• 7 Further reading
• 8 References

• 9 External links

[edit] Hydraulic vs. non-hydraulic cement
Hydraulic cements are materials that set and harden after being combined with water, as a
result of chemical reactions with the mixing water, and that, after hardening, retain
strength and stability even under water. The key requirement for this strength and
stability is that the hydrates formed on immediate reaction with water be essentially
insoluble in water. Most construction cements today are hydraulic, and most of these are
based on Portland cement, which is made primarily from limestone, certain clay
minerals, and gypsum in a high temperature process that drives off carbon dioxide and
chemically combines the primary ingredients into new compounds. Non-hydraulic
cements include such materials as (non-hydraulic) lime and gypsum plasters, which must
be kept dry in order to gain strength, and oxychloride cements, which have liquid
components. Lime mortars, for example, "set" only by drying out, and gain strength only
very slowly by absorption of carbon dioxide from the atmosphere to re-form calcium
carbonate through carbonatation.

Setting and hardening of hydraulic cements is caused by the formation of water-
containing compounds, which form as a result of reactions between cement components
and water. The reaction and the reaction products are referred to as hydration and
hydrates or hydrate phases, respectively. As a result of the immediate start of the
reactions, a stiffening can be observed which is initially slight but which increases with
time. The point at which the stiffening reaches a certain level is referred to as the start of
setting. Further consolidation is called setting, after which the hardening phase begins.
The compressive strength of the material then grows steadily, over a period that ranges
from a few days in the case of "ultra-rapid-hardening" cements to several years in the
case of ordinary cements.

[edit] History
[edit] Early uses

The earliest construction cements are as old as construction,[1] and were non-hydraulic.
Wherever primitive mud bricks were used, they were bedded together with a thin layer of
clay slurry. Mud-based materials were also used for rendering on the walls of timber or
wattle and daub structures. Lime was probably used for the first time as an additive in
these renders, and for stabilizing mud floors. A "daub" consisting of mud, cow dung and
lime produces a tough coating, due to coagulation by the lime, of proteins in the cow
dung. This simple system was common in Europe until quite recent times. With the
advent of fired bricks, and their use in larger structures, various cultures started to
experiment with higher-strength mortars based on bitumen (in Mesopotamia), gypsum (in
Egypt) and lime (in many parts of the world).

It is uncertain where it was first discovered that a combination of hydrated non-hydraulic
lime and a pozzolan produces a hydraulic mixture, but concrete made from such mixtures
was first used on a large scale by Roman engineers[2]. They used both natural pozzolans
(trass or pumice) and artificial pozzolans (ground brick or pottery) in these concretes.
Many excellent examples of structures made from these concretes are still standing,
notably the huge monolithic dome of the Pantheon in Rome and the massive Baths of
Caracalla.[3] The vast system of Roman aqueducts also made extensive use of hydraulic
cement.[4] The use of structural concrete disappeared in medieval Europe, although weak
pozzolanic concretes continued to be used as a core fill in stone walls and columns.

[edit] Modern cement

Modern hydraulic cements began to be developed from the start of the Industrial
Revolution (around 1800), driven by three main needs:

• Hydraulic renders for finishing brick buildings in wet climates
• Hydraulic mortars for masonry construction of harbor works etc, in contact with
sea water.
• Development of strong concretes.

In Britain particularly, good quality building stone became ever more expensive during a
period of rapid growth, and it became a common practice to construct prestige buildings
from the new industrial bricks, and to finish them with a stucco to imitate stone.
Hydraulic limes were favored for this, but the need for a fast set time encouraged the
development of new cements. Most famous among these was Parker's "Roman cement."[5]
This was developed by James Parker in the 1780s, and finally patented in 1796. It was, in
fact, nothing like any material used by the Romans, but was a "Natural cement" made by
burning septaria - nodules that are found in certain clay deposits, and that contain both
clay minerals and calcium carbonate. The burnt nodules were ground to a fine powder.
This product, made into a mortar with sand, set in 5-15 minutes. The success of "Roman
Cement" led other manufacturers to develop rival products by burning artificial mixtures
of clay and chalk.

John Smeaton made an important contribution to the development of cements when he
was planning the construction of the third Eddystone Lighthouse (1755-9) in the English
Channel. He needed a hydraulic mortar that would set and develop some strength in the
twelve-hour period between successive high tides. He performed exhaustive market
research on the available hydraulic limes, visiting their production sites, and noted that
the "hydraulicity" of the lime was directly related to the clay content of the limestone
from which it was made. Smeaton was a civil engineer by profession, and took the idea
no further. Apparently unaware of Smeaton's work, the same principle was identified by
Louis Vicat in the first decade of the nineteenth century. Vicat went on to devise a
method of combining chalk and clay into an intimate mixture, and, burning this, produced
an "artificial cement" in 1817. James Frost,[6] working in Britain, produced what he called
"British cement" in a similar manner around the same time, but did not obtain a patent
until 1822. In 1824, Joseph Aspdin patented a similar material, which he called Portland
cement, because the render made from it was in color similar to the prestigious Portland
stone.

None of the above products could compete with lime/pozzolan concretes, because their fast
setting gave insufficient time for placement, and their low early strength required a delay
of many weeks before formwork could be removed. Hydraulic limes, "natural" cements
and "artificial" cements all rely upon their belite content for strength development. Belite
develops strength slowly. Because they were burned at temperatures below 1250 °C, they
contained no alite, which is responsible for early strength in modern cements. The first
cement to consistently contain alite was made by Joseph Aspdin's son William in the
early 1840s. This was what we call today "modern" Portland cement. Because of the air
of mystery with which William Aspdin surrounded his product, others (e.g. Vicat and I C
Johnson) have claimed precedence in this invention, but recent analysis[7] of both his
concrete and raw cement have shown that William Aspdin's product made at Northfleet,
Kent was a true alite-based cement. However, Aspdin's methods were "rule-of-thumb":
Vicat is responsible for establishing the chemical basis of these cements, and Johnson
established the importance of sintering the mix in the kiln.

William Aspdin's innovation was counter-intuitive for manufacturers of "artificial
cements", because they required more lime in the mix (a problem for his father), because
they required a much higher kiln temperature (and therefore more fuel) and because the
resulting clinker was very hard and rapidly wore down the millstones which were the
only available grinding technology of the time. Manufacturing costs were therefore
considerably higher, but the product set reasonably slowly and developed strength
quickly, thus opening up a market for use in concrete. The use of concrete in construction
grew rapidly from 1850 onwards, and was soon the dominant use for cements. Thus
Portland cement began its predominant role.

[edit] Types of modern cement

A pallet with portland cement

[edit] Portland cement

Main article: Portland cement

Cement is made by heating limestone with small quantities of other materials (such as
clay) to 1450°C in a kiln. The resulting hard substance, called ‘clinker’, is then ground
with a small amount of gypsum into a powder to make ‘Ordinary Portland Cement’, the
most commonly used type of cement (often referred to as OPC).

Portland cement is a basic ingredient of concrete, mortar and most non-speciality grout.
The most common use for Portland cement is in the production of concrete. Concrete is a
composite material consisting of aggregate (gravel and sand), cement, and water. As a
construction material, concrete can be cast in almost any shape desired, and once
hardened, can become a structural (load bearing) element. Portland cement may be gray
or white.

[edit] Portland cement blends

These are often available as inter-ground mixtures from cement manufacturers, but
similar formulations are often also mixed from the ground components at the concrete
mixing plant.[8]

Portland Blastfurnace Cement contains up to 70% ground granulated blast furnace
slag, with the rest Portland clinker and a little gypsum. All compositions produce high
ultimate strength, but as slag content is increased, early strength is reduced, while sulfate
resistance increases and heat evolution diminishes. Used as an economic alternative to
Portland sulfate-resisting and low-heat cements.[9]
Portland Flyash Cement contains up to 30% fly ash. The flyash is pozzolanic, so that
ultimate strength is maintained. Because flyash addition allows a lower concrete water
content, early strength can also be maintained. Where good quality cheap flyash is
available, this can be an economic alternative to ordinary Portland cement.[10]

Portland Pozzolan Cement includes fly ash cement, since fly ash is a pozzolan, but also
includes cements made from other natural or artificial pozzolans. In countries where
volcanic ashes are available (e.g. Italy, Chile, Mexico, the Philippines) these cements are
often the most common form in use.

Portland Silica Fume cement. Addition of silica fume can yield exceptionally high
strengths, and cements containing 5-20% silica fume are occasionally produced.
However, silica fume is more usually added to Portland cement at the concrete mixer.[11]

Masonry Cements are used for preparing bricklaying mortars and stuccos, and must not
be used in concrete. They are usually complex proprietary formulations containing
Portland clinker and a number of other ingredients that may include limestone, hydrated
lime, air entrainers, retarders, waterproofers and coloring agents. They are formulated to
yield workable mortars that allow rapid and consistent masonry work. Subtle variations
of Masonry cement in the US are Plastic Cements and Stucco Cements. These are
designed to produce controlled bond with masonry blocks.

Expansive Cements contain, in addition to Portland clinker, expansive clinkers (usually
sulfoaluminate clinkers), and are designed to offset the effects of drying shrinkage that is
normally encountered with hydraulic cements. This allows large floor slabs (up to 60 m
square) to be prepared without contraction joints.

White blended cements may be made using white clinker and white supplementary
materials such as high-purity metakaolin.

Colored cements are used for decorative purposes. In some standards, the addition of
pigments to produce "colored Portland cement" is allowed. In other standards (e.g.
ASTM), pigments are not allowed constituents of Portland cement, and colored cements
are sold as "blended hydraulic cements".

Very finely ground cements are made from mixtures of cement with sand, or with slag
or other pozzolan-type minerals, which are extremely finely ground. Such cements can
have the same physical characteristics as normal cement but with 50% less cement,
owing to their increased surface area for the chemical reaction. Even with intensive
grinding they can use up to 50% less energy to fabricate than ordinary Portland
cements.

[edit] Non-Portland hydraulic cements

Pozzolan-lime cements. Mixtures of ground pozzolan and lime are the cements used by
the Romans, and are to be found in Roman structures still standing (e.g. the Pantheon in
Rome). They develop strength slowly, but their ultimate strength can be very high. The
hydration products that produce strength are essentially the same as those produced by
Portland cement.

Slag-lime cements. Ground granulated blast furnace slag is not hydraulic on its own, but
is “activated” by addition of alkalis, most economically using lime. They are similar to
pozzolan lime cements in their properties. Only granulated slag (i.e. water-quenched,
glassy slag) is effective as a cement component.

Supersulfated cements. These contain about 80% ground granulated blast furnace slag,
15% gypsum or anhydrite and a little Portland clinker or lime as an activator. They
produce strength by formation of ettringite, with strength growth similar to a slow
Portland cement. They exhibit good resistance to aggressive agents, including sulfate.

Calcium aluminate cements are hydraulic cements made primarily from limestone and
bauxite. The active ingredients are monocalcium aluminate CaAl2O4 (CA in Cement
chemist notation) and Mayenite Ca12Al14O33 (C12A7 in CCN). Strength forms by hydration
to calcium aluminate hydrates. They are well-adapted for use in refractory (high-
temperature resistant) concretes, e.g. for furnace linings.

Calcium sulfoaluminate cements are made from clinkers that include ye’elimite
(Ca4(AlO2)6SO4, or C4A3S̄ in cement chemists’ notation) as a primary phase. They are used
in expansive cements, in ultra-high early strength cements, and in "low-energy" cements.
Hydration produces ettringite, and specialized physical properties (such as expansion or
rapid reaction) are obtained by adjustment of the availability of calcium and sulfate ions.
Their use as a low-energy alternative to Portland cement has been pioneered in China,
where several million tonnes per year are produced[12][13]. Energy requirements are lower
because of the lower kiln temperatures required for reaction, and the lower amount of
limestone (which must be endothermically decarbonated) in the mix. In addition, the
lower limestone content and lower fuel consumption leads to a CO2 emission around half
that associated with Portland clinker. However, SO2 emissions are usually significantly
higher.

“Natural” Cements correspond to certain cements of the pre-Portland era, produced by
burning argillaceous limestones at moderate temperatures. The level of clay components
in the limestone (around 30-35%) is such that large amounts of belite (the low-early
strength, high-late strength mineral in Portland cement) are formed without the formation
of excessive amounts of free lime. As with any natural material, such cements have very
variable properties.

Geopolymer cements are made from mixtures of water-soluble alkali metal silicates and
aluminosilicate mineral powders such as fly ash and metakaolin.

[edit] Environmental and social impacts
Cement manufacture causes environmental impacts at all stages of the process. These
include emissions of airborne pollution in the form of dust, gases, noise and vibration
when operating machinery and during blasting in quarries, and damage to countryside
from quarrying. Equipment to reduce dust emissions during quarrying and manufacture
of cement is widely used, and equipment to trap and separate exhaust gases is coming
into increased use. Environmental protection also includes the re-integration of quarries
into the countryside after they have been closed down by returning them to nature or re-
cultivating them.

[edit] Climate

Cement manufacture contributes greenhouse gases both directly through the production
of carbon dioxide when calcium carbonate is heated, producing lime and carbon
dioxide[14], and also indirectly through the use of energy, particularly if the energy is
sourced from fossil fuels. The cement industry produces 5% of global man-made CO2
emissions, of which 50% is from the chemical process, and 40% from burning fuel.[15]
The amount of CO2 emitted by the cement industry is nearly 900 kg of CO2 for every
1000 kg of cement produced. [16]
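The chemical-process share of these emissions can be checked from the calcination stoichiometry described earlier (heating calcium carbonate drives off CO2: CaCO3 → CaO + CO2). The sketch below is a back-of-the-envelope estimate, not from the article; the ~65% CaO content of clinker is an assumed typical value.

```python
# Back-of-the-envelope check of the process-CO2 figure, using the calcination
# reaction CaCO3 -> CaO + CO2 and an assumed clinker CaO content of ~65%.

M_CAO, M_CO2 = 56.08, 44.01          # molar masses (g/mol)

cao_per_kg_clinker = 0.65            # assumed CaO mass fraction of clinker
process_co2 = cao_per_kg_clinker * M_CO2 / M_CAO   # kg CO2 per kg clinker

print(f"process CO2: ~{process_co2:.2f} kg per kg clinker")
# With roughly comparable fuel-derived CO2, this is consistent with the
# ~0.9 kg CO2 per kg cement and ~50% chemical-process share quoted above.
```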

[edit] Fuels and raw materials

A cement plant consumes 3 to 6 GJ of fuel per tonne of clinker produced, depending on
the raw materials and the process used. Most cement kilns today use coal and petroleum
coke as primary fuels, and to a lesser extent natural gas and fuel oil. Selected waste and
by-products with recoverable calorific value can be used as fuels in a cement kiln,
replacing a portion of conventional fossil fuels, like coal, if they meet strict
specifications. Selected waste and by-products containing useful minerals such as
calcium, silica, alumina, and iron can be used as raw materials in the kiln, replacing raw
materials such as clay, shale, and limestone. Because some materials have both useful
mineral content and recoverable calorific value, the distinction between alternative fuels
and raw materials is not always clear. For example, sewage sludge has a low but
significant calorific value, and burns to give ash containing minerals useful in the clinker
matrix.[17]

[edit] Local impacts

Producing cement has significant positive and negative impacts at a local level. On the
positive side, the cement industry may create employment and business opportunities for
local people, particularly in remote locations in developing countries where there are few
other opportunities for economic development. Negative impacts include disturbance to
the landscape, dust and noise, and disruption to local biodiversity from quarrying
limestone (the raw material for cement).

[edit] Cement business
Cement output in 2004

In 2002 the world production of hydraulic cement was 1,800 million metric tons. The top
three producers were China with 704, India with 100, and the United States with 91
million metric tons; together, these three most populous states accounted for about half
of the world total.[18]

"For the past 18 years, China consistently has produced more cement than any other
country in the world. [...] China's cement export peaked in 1994 with 11 million tons
shipped out and has been in steady decline ever since. Only 5.18 million tons were
exported out of China in 2002. Offered at $34 a ton, Chinese cement is pricing itself out
of the market as Thailand is asking as little as $20 for the same quality."[19]

"Demand for cement in China is expected to advance 5.4% annually and exceed 1 billion
metric tons in 2008, driven by slowing but healthy growth in construction expenditures.
Cement consumed in China will amount to 44% of global demand, and China will remain
the world's largest national consumer of cement by a large margin."[20]

In 2006 it was estimated that China manufactured 1.235 billion metric tons of cement,
which is 44% of the world total cement production.[21]

[edit] See also
• BET theory
• Cement chemist notation
• Cement render
• Fly ash
• Portland cement
• Rosendale cement

[edit] Further reading
• Aitcin, Pierre-Claude (2000). "Cements of yesterday and today: Concrete of
tomorrow". Cement and Concrete Research 30 (9): 1349–1359.
doi:10.1016/S0008-8846(00)00365-3.
http://www.sciencedirect.com/science/article/B6TWG-41PP28Y-
1/2/e7cda14fbbf68ce0c24c7c4fc76a5865. Retrieved on 9 April 2008.

• Friedrich W. Locher: Cement : Principles of production and use, Duesseldorf,
Germany: Verlag Bau + Technik GmbH, 2006, ISBN 3-7640-0420-7
• Javed I. Bhatty, F. MacGregor Miller, Steven H. Kosmatka; editors: Innovations
in Portland Cement Manufacturing, SP400, Portland Cement Association, Skokie,
Illinois, USA, 2004, ISBN 0-89312-234-3
• "Cement Industry Is at Center of Climate Change Debate" article by Elizabeth
Rosenthal in the New York Times October 26, 2007

• Neville, A.M. (1996). Properties of concrete. Fourth and final edition standards.
Pearson, Prentice Hall. ISBN 0-582-23070-5. OCLC 33837400.

• Taylor, H.F.W. (1990). Cement Chemistry. Academic Press. pp. 475. ISBN 0-12-
683900-X.

[edit] References

Rice
From Wikipedia, the free encyclopedia

Jump to: navigation, search
Rice, white, long-grain, regular, raw, unenriched
Nutritional value per 100 g (3.5 oz)

Energy: 370 kcal (1530 kJ)
Carbohydrates: 79 g
 - Sugars: 0.12 g
 - Dietary fiber: 1.3 g
Fat: 0.66 g
Protein: 7.13 g
Water: 11.62 g
Thiamin (Vit. B1): 0.070 mg (5%)
Riboflavin (Vit. B2): 0.049 mg (3%)
Niacin (Vit. B3): 1.6 mg (11%)
Pantothenic acid (B5): 1.014 mg (20%)
Vitamin B6: 0.164 mg (13%)
Folate (Vit. B9): 8 μg (2%)
Calcium: 28 mg (3%)
Iron: 0.80 mg (6%)
Magnesium: 25 mg (7%)
Manganese: 1.088 mg (54%)
Phosphorus: 115 mg (16%)
Potassium: 115 mg (2%)
Zinc: 1.09 mg (11%)
Percentages are relative to US recommendations for adults.
Source: USDA Nutrient database
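The percentage column in the table above is simply each nutrient amount divided by a reference daily intake. A minimal sketch, assuming a 2 mg US reference value for manganese and 1000 mg for calcium (these reference values are assumptions, not taken from the table):

```python
# How a percent-of-daily-value column is derived: amount / reference intake.
# The reference intakes below are assumed US daily values.

def percent_dv(amount_mg, reference_mg):
    """Percentage of the reference daily intake, rounded to a whole number."""
    return round(100 * amount_mg / reference_mg)

print(percent_dv(1.088, 2.0), "%")   # manganese, per 100 g of rice
print(percent_dv(28, 1000), "%")     # calcium, per 100 g of rice
```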

Oryza sativa

The planting of rice is often a labour-intensive process

Unpolished rice with bran.
Japanese short-grain rice

For other uses, see Rice (disambiguation).

Rice is a cereal foodstuff which forms an important part of the diet of many people
worldwide and is a staple food for many.

Domesticated rice comprises two species of food crops in the Oryza genus of the Poaceae
("true grass") family: Asian rice, Oryza sativa is native to tropical and subtropical
southern Asia; African rice, Oryza glaberrima, is native to West Africa.[1]

The name wild rice is usually used for species of the different but related genus Zizania,
both wild and domesticated, although the term may be used for primitive or uncultivated
varieties of Oryza.

Rice is grown as a monocarpic annual plant, although in tropical areas it can survive as a
perennial and can produce a ratoon crop and survive for up to 20 years.[2] Rice can grow
to 1–1.8 m tall, occasionally more depending on the variety and soil fertility. The grass
has long, slender leaves 50–100 cm long and 2–2.5 cm broad. The small wind-pollinated
flowers are produced in a branched arching to pendulous inflorescence 30–50 cm long.
The edible seed is a grain (caryopsis) 5–12 mm long and 2–3 mm thick.

Rice is a staple food for a large part of the world's human population, especially in
tropical Latin America, and East, South and Southeast Asia, making it the second-most
consumed cereal grain.[3] A traditional food plant in Africa, rice has the potential to
improve nutrition, boost food security, foster rural development and support sustainable
landcare.[4] Rice provides more than one fifth of the calories consumed worldwide by
humans.[5] In early 2008, some governments and retailers began rationing supplies of the
grain due to fears of a global rice shortage.[6][7]

Rice cultivation is well-suited to countries and regions with low labor costs and high
rainfall, as the crop is very labor-intensive and requires plenty of water. Mechanized
cultivation, on the other hand, is extremely oil-intensive, more
than other food products with the exception of beef and dairy products.[citation needed] Rice
can be grown practically anywhere, even on a steep hill or mountain. Although its species
are native to South Asia and certain parts of Africa, centuries of trade and exportation
have made it commonplace in many cultures.

The traditional method for cultivating rice is flooding the fields whilst, or after, setting
the young seedlings. This simple method requires sound planning and servicing of the
water damming and channeling, but reduces the growth of less robust weed and pest
plants that have no submerged growth state, and deters vermin. Although flooding is not
mandatory for rice cultivation, all other methods of irrigation require greater effort in
weed and pest control during growth periods and a different approach to fertilizing the
soil.

Contents
[hide]

• 1 Classification
• 2 Etymology
• 3 Preparation as food
• 4 Cooking
• 5 Rice growing ecology
• 6 History of rice domestication & cultivation
o 6.1 Continental East Asia
o 6.2 South Asia
o 6.3 Korean peninsula and Japanese archipelago
o 6.4 Southeast Asia
o 6.5 Africa
o 6.6 Middle East
o 6.7 Europe
o 6.8 United States
o 6.9 Australia
• 7 World production and trade
o 7.1 Production and export
o 7.2 Price
o 7.3 Worldwide consumption
o 7.4 Environmental impacts
• 8 Pests and diseases
• 9 Cultivars
• 10 Biotechnology
o 10.1 High-yielding varieties
o 10.2 Potentials for the future
o 10.3 Golden rice
o 10.4 Expression of human proteins
• 11 Others
• 12 See also
• 13 References
o 13.1 General References
• 14 External links
o 14.1 General
o 14.2 Rice research & development
o 14.3 Rice in agriculture
o 14.4 Rice as food
o 14.5 Rice as fuel
o 14.6 Rice economics

o 14.7 Rice genome

[edit] Classification
There are two species of domesticated rice, Oryza sativa (Asian) and Oryza glaberrima
(African).

Oryza sativa contains two major subspecies: the sticky, short-grained japonica or sinica
variety, and the non-sticky, long-grained indica variety. Japonica are usually cultivated
in dry fields, in temperate East Asia, upland areas of Southeast Asia and high elevations
in South Asia, while indica are mainly lowland rices, grown mostly submerged,
throughout tropical Asia.[8]

A third subspecies, which is broad-grained and thrives under tropical conditions, was
identified based on morphology and initially called javanica, but is now known as
tropical japonica. Examples of this variety include the medium grain “Tinawon” and
“Unoy” cultivars, which are grown in the high-elevation rice terraces of the Cordillera
Mountains of northern Luzon, Philippines.[9]

Glaszmann (1987) used isozymes to sort Oryza sativa into six groups: japonica,
aromatic, indica, aus, rayada, and ashina.[10]

Garris et al. (2004) used SSRs to sort Oryza sativa into five groups: temperate japonica,
tropical japonica and aromatic comprise the japonica varieties, while indica and aus
comprise the indica varieties.[11]

[edit] Etymology
According to the Microsoft Encarta Dictionary (2004) and the Chambers Dictionary of
Etymology (1988), the word 'rice' has an Indo-Iranian origin. It came to English from
Greek óryza, via Latin oriza, Italian riso and finally Old French ris (the same as present
day French riz).

It has been speculated that the Indo-Iranian vrihi itself is borrowed from a Dravidian vari
(< PDr. *warinci)[12] or even a Munda language term for rice, or the Tamil name arisi
(அரிசி), from which the Arabic ar-ruzz and, in turn, the Portuguese and Spanish word
arroz originated.

[edit] Preparation as food

A rice broker in 1820s Japan, from Hokusai's "36 Views of Mount Fuji"

An old-fashioned way of rice polishing in Japan, from Hokusai's "36 Views of Mount Fuji"

The seeds of the rice plant are first milled using a rice huller to remove the chaff (the
outer husks of the grain). At this point in the process, the product is called brown rice.
The milling may be continued, removing the 'bran' (i.e. the rest of the husk and the germ),
thereby creating white rice. White rice, which keeps longer, lacks some important
nutrients; in a limited diet which does not supplement the rice, brown rice helps to
prevent the deficiency disease beriberi.

White rice may be also buffed with glucose or talc powder (often called polished rice,
though this term may also refer to white rice in general), parboiled, or processed into
flour. White rice may also be enriched by adding nutrients, especially those lost during
the milling process. While the cheapest method of enriching involves adding a powdered
blend of nutrients that will easily wash off (in the United States, rice which has been so
treated requires a label warning against rinsing), more sophisticated methods apply
nutrients directly to the grain, coating the grain with a water insoluble substance which is
resistant to washing.

Terraced rice paddy on a hill slope in Indonesia.

Despite the hypothetical health risks of talc (such as stomach cancer),[13] talc-coated rice
remains the norm in some countries due to its attractive shiny appearance, but it has been
banned in some and is no longer widely used in others such as the United States. Even
where talc is not used, glucose, starch, or other coatings may be used to improve the
appearance of the grains; for this reason, many rice lovers still recommend washing all
rice in order to create a better-tasting rice with a better consistency, despite the
recommendation of suppliers. Much of the rice produced today is water polished.[citation
needed]

Rice bran, called nuka in Japan, is a valuable commodity in Asia and is used for many
daily needs. It is a moist, oily inner layer which is heated to produce an oil. It is also used
as a pickling bed in making rice bran pickles and Takuan.

Raw rice may be ground into flour for many uses, including making many kinds of
beverages such as amazake, horchata, rice milk, and sake. Rice flour does not contain
gluten and is suitable for people on a gluten-free diet. Rice may also be made into various
types of noodles. Raw wild or brown rice may also be consumed by raw-foodists or
fruitarians if soaked and sprouted (usually 1 week to 30 days); see also GABA rice below.

Processed rice seeds must be boiled or steamed before eating. Cooked rice may be further
fried in oil or butter, or beaten in a tub to make mochi.

Rice is a good source of protein and a staple food in many parts of the world, but it is not
a complete protein: it does not contain all of the essential amino acids in sufficient
amounts for good health, and should be combined with other sources of protein, such as
nuts, seeds, beans or meat.[14]
Rice, like other cereal grains, can be puffed (or popped). This process takes advantage of
the grains' water content and typically involves heating grains in a special chamber.
Further puffing is sometimes accomplished by processing pre-puffed pellets in a low-
pressure chamber. The ideal gas law means that either lowering the local pressure or
raising the water temperature results in an increase in volume prior to water evaporation,
resulting in a puffy texture. Bulk raw rice density is about 0.9 g/cm³. It decreases more
than tenfold when puffed.
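The density figures above can be checked with a back-of-the-envelope calculation. The sketch below is purely illustrative, using only the two numbers quoted in the text (a bulk density of about 0.9 g/cm³ and a more-than-tenfold decrease on puffing):

```python
# Back-of-the-envelope check of the puffing figures quoted above.
# Bulk raw rice density is about 0.9 g/cm^3; puffing reduces it more
# than tenfold, so the same mass occupies more than ten times the volume.

raw_density = 0.9        # g/cm^3, bulk raw rice (from the text)
expansion_factor = 10    # ">10x" volume increase when puffed

puffed_density = raw_density / expansion_factor
print(f"puffed bulk density < {puffed_density:.2f} g/cm^3")   # < 0.09 g/cm^3

# Volume occupied by 1 kg of rice before and after puffing:
mass_g = 1000
print(f"raw:    ~{mass_g / raw_density:.0f} cm^3")            # ~1111 cm^3
print(f"puffed: > {mass_g / puffed_density:.0f} cm^3")        # > 11111 cm^3
```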

[edit] Cooking
See Wikibooks' Rice Recipes for information on food preparation using rice.

There are many varieties of rice; for many purposes the main distinction is between long-
and medium-grain rice. The grains of long-grain rice tend to remain intact after cooking;
medium-grain rice becomes more sticky. Medium-grain rice is used for sweet dishes, and
for risotto and many Spanish dishes.

Uncooked long rice

Rice is cooked by boiling or steaming, and absorbs water during cooking. It can be
cooked in just as much water as it absorbs (the absorption method), or in a large quantity
of water which is drained before serving (the rapid-boil method). Electric rice cookers,
popular in Asia and Latin America, simplify the process of cooking rice. Rice is often
heated in oil before boiling, or oil is added to the water; this is thought to make the
cooked rice less sticky.

In Arab cuisine rice is an ingredient of many soups and dishes with fish, poultry, and
other types of meat. It is also used to stuff vegetables or is wrapped in grape leaves.
When combined with milk, sugar and honey, it is used to make desserts. In some regions,
such as Tabaristan, bread is made using rice flour. Medieval Islamic texts spoke of
medical uses for the plant.[15]

Rice may also be made into rice porridge (also called congee or rice gruel) by adding
more water than usual, so that the cooked rice is saturated with water to the point that it
becomes very soft, expanded, and fluffy. Rice porridge is commonly eaten as a breakfast
food, and is also a traditional food for the sick.
Rice may be soaked prior to cooking, which decreases cooking time. For some varieties,
soaking improves the texture of the cooked rice by increasing expansion of the grains.

In some countries parboiled rice, also known as Minute rice or easy-cook rice, is popular.
Parboiled rice is subjected to a steaming or parboiling process while still a brown rice.
This causes nutrients from the outer husk to move into the grain itself. The parboil
process causes a gelatinisation of the starch in the grains. The grains become less brittle,
and the color of the milled grain changes from white to yellow. The rice is then dried, and
can then be milled as usual or used as brown rice. Milled parboiled rice is nutritionally
superior to standard milled rice. Parboiled rice has an additional benefit in that it does not
stick to the pan during cooking as happens when cooking regular white rice.

A nutritionally superior method of preparing brown rice known as GABA Rice or GBR
(Germinated Brown Rice)[16] may be used. This involves soaking washed brown rice for
20 hours in warm water (38°C or 100°F) prior to cooking it. This process stimulates
germination, which activates various enzymes in the rice. By this method, a result of
research carried out for the United Nations Year of Rice, it is possible to obtain a more
complete amino acid profile, including GABA.

Cooked rice can contain Bacillus cereus spores which produce an emetic toxin when left
at 4–60°C [5]. When storing cooked rice for use the next day, rapid cooling is advised to
reduce the risk of contamination.
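The 4–60 °C range mentioned above amounts to a simple threshold check. The sketch below is illustrative only (the function name is invented here, and this is not food-safety guidance):

```python
# Illustrative sketch of the 4-60 degC "danger zone" for cooked rice
# mentioned above: Bacillus cereus spores can produce an emetic toxin
# when cooked rice is held in this temperature range.

DANGER_LOW_C = 4.0
DANGER_HIGH_C = 60.0

def in_danger_zone(temp_c: float) -> bool:
    """Return True if a holding temperature falls within 4-60 degC."""
    return DANGER_LOW_C <= temp_c <= DANGER_HIGH_C

print(in_danger_zone(3.0))   # False (refrigerated, below the zone)
print(in_danger_zone(21.0))  # True  (room temperature)
print(in_danger_zone(65.0))  # False (hot-holding above 60 degC)
```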

[edit] Rice growing ecology
Rice can be grown in different ecologies, depending upon water availability.[17]

1. Lowland, rainfed, which is drought prone, favors medium depth; waterlogged,
submergence, and flood prone
2. Lowland, irrigated, grown in both the wet season and the dry season
3. Deep water or floating rice
4. Coastal Wetland
5. Upland rice, also known as 'Ghaiya rice', well known for its drought tolerance[18]

[edit] History of rice domestication & cultivation
Based on one chloroplast and two nuclear gene regions, Londo et al (2006) conclude that
rice was domesticated at least twice—indica in eastern India, Myanmar and Thailand;
and japonica in southern China—though they concede that there is archaeological and
genetic evidence for a single domestication of rice in the lowlands of China.[19]

Because the functional allele for non-shattering—the critical indicator of domestication in
grains—as well as five other single nucleotide polymorphisms, is identical in both indica
and japonica, Vaughan et al (2008) determined that there was a single domestication
event for Oryza sativa in the region of the Yangtze river valley.[20]
[edit] Continental East Asia

Rice appears to have been used by the Early Neolithic populations of Lijiacun and
Yunchanyan.[21] Rice cultivation began in China ca. 11,500 BP.[22] Bruce Smith, an
archaeologist at the Smithsonian Institution in Washington, D.C., who has written on the
origins of agriculture, says that evidence has been mounting that the Yangtze was
probably the site of the earliest rice cultivation.[23]

Zhao (1998) argues that collection of wild rice in the Late Pleistocene had, by 6400 BC,
led to the use of primarily domesticated rice.[24] Morphological studies of rice phytoliths
from the Diaotonghuan archaeological site clearly show the transition from the collection
of wild rice to the cultivation of domesticated rice. The large number of wild rice
phytoliths at the Diaotonghuan level dating from 12,000-11,000 BP indicates that wild
rice collection was part of the local means of subsistence. Changes in the morphology of
Diaotonghuan phytoliths dating from 10,000-8,000 BP show that rice had by this time
been domesticated.[25] Analysis of Chinese rice residues from Pengtoushan which were
C14(carbon dating) dated to 8200-7800 BCE show that rice had been domesticated by
this time.[26]

In 1998, Crawford & Shen reported that the earliest of 14 AMS or radiocarbon dates on
rice from at least nine Early to Middle Neolithic sites is no older than 7000 BC, that rice
from the Hemudu and Luojiajiao sites indicates that rice domestication likely began
before 5000 BC, but that most sites in China from which rice remains have been
recovered are younger than 5000 BC.[27]

[edit] South Asia

Paddy fields in the Indian state of Tamil Nadu

Wild Oryza rice appeared in the Belan and Ganges valley regions of northern India as
early as 4530 BC and 5440 BC respectively,[28] although many believe it may have
appeared earlier. The Encyclopedia Britannica—on the subject of the first certain
cultivated rice—holds that:[29]

Many cultures have evidence of early rice cultivation, including China, India, and the
civilizations of Southeast Asia. However, the earliest archaeological evidence comes from
central and eastern China and dates to 7000–5000 BC.
Denis J. Murphy (2007) further details the spread of cultivated rice from India into
South-east Asia:[30]

Several wild cereals, including rice, grew in the Vindhyan Hills, and rice cultivation, at sites
such as Chopani-Mando and Mahagara, may have been underway as early as 7000 BP. The
relative isolation of this area and the early development of rice farming imply that it was
developed indigenously.

Chopani-Mando and Mahagara are located on the upper reaches of the Ganges drainage
system and it is likely that migrants from this area spread rice farming down the Ganges valley
into the fertile plains of Bengal, and beyond into south-east Asia.

Rice was cultivated in the Indus Valley Civilization.[31] Agricultural activity during the
second millennium BC included rice cultivation in the Kashmir and Harappan regions.[28]
Mixed farming was the basis of Indus valley economy.[31]

Punjab is the largest producer and consumer of rice in India.

[edit] Korean peninsula and Japanese archipelago

Utagawa Hiroshige, Rice field in Oki province, view of O-Yama.

Mainstream archaeological evidence derived from palaeoethnobotanical investigations
indicates that dry-land rice was introduced to Korea and Japan some time between 3500
and 1200 BC. The cultivation of rice in Korea and Japan during that time occurred on a
small scale: fields were impermanent plots, and evidence shows that in some cases
domesticated and wild grains were planted together. The technological, subsistence, and
social impact of rice and grain cultivation is not evident in archaeological data until after
1500 BC. For example, intensive wet-paddy rice agriculture was introduced into Korea
shortly before or during the Middle Mumun Pottery Period (c. 850–550 BC) and reached
Japan by the Final Jōmon or Initial Yayoi circa 300 BC.[32][33]

In 2003, Korean archaeologists alleged that they discovered burnt grains of domesticated
rice in Soro-ri, Korea, which dated to 13,000 BC. These predate the oldest grains in
China, which were dated to 10,000 BC, and potentially challenge the mainstream
explanation that domesticated rice originated in China.[34] The findings were received by
academia with strong skepticism, and the results and their publicizing have been cited as
being driven by a combination of nationalist and regional interests.[35]

[edit] Southeast Asia

Using water buffalo to plough rice fields in Java; Indonesia is the world's third largest
paddy rice producer and its cultivation has transformed much of the country's landscape.

Rice is the staple for all classes in contemporary South East Asia, from Myanmar to
Indonesia. In Indonesia, evidence of wild Oryza rice on the island of Sulawesi dates from
3000 BCE. The evidence for the earliest cultivation, however, comes from eighth century
stone inscriptions from Java, which show kings levied taxes in rice. Divisions of labor
between men, women, and animals that are still in place in Indonesian rice cultivation,
can be seen carved into the ninth-century Prambanan temples in Central Java. In the
sixteenth century, Europeans visiting the Indonesian islands saw rice as a new prestige
food served to the aristocracy during ceremonies and feasts. Rice production in
Indonesian history is linked to the development of iron tools and the domestication of
water buffalo for cultivation of fields and manure for fertilizer. Once covered in dense
forest, much of the Indonesian landscape has been gradually cleared for permanent fields
and settlements as rice cultivation developed over the last fifteen hundred years.[36]

In the Philippines, the greatest evidence of rice cultivation since ancient times can be
found in the Cordillera Mountain Range of Luzon in the provinces of Apayao, Benguet,
Mountain Province and Ifugao. The Banaue Rice Terraces (Tagalog: Hagdan-hagdang
Palayan ng Banaue) are 2,000 to 3,000-year old terraces that were carved into the
mountains by ancestors of the Batad indigenous people. It is commonly thought that the
terraces were built with minimal equipment, largely by hand. The terraces are located
approximately 1,500 meters (5000 ft) above sea level and cover 10,360 square kilometers
(about 4,000 square miles) of mountainside. They are fed by an ancient irrigation system
from the rainforests above the terraces. It is said that if the steps were put end to end,
they would encircle half the globe. The Rice Terraces (a UNESCO World Heritage Site) are
commonly referred to by Filipinos as the "Eighth Wonder of the World".
Evidence of wet rice cultivation as early as 2200 BC has been discovered at both Ban
Chiang and Ban Prasat in Thailand.

By the 19th century, European colonial expansion in the area increased rice production
in much of Southeast Asia, including Thailand, then known as Siam. British Burma
became the world's largest exporter of rice from the turn of the 20th century until
the 1970s, when neighbouring Thailand exceeded Burma.

[edit] Africa

Rice crop in Madagascar

African rice has been cultivated for 3500 years. Between 1500 and 800 BC, O.
glaberrima propagated from its original centre, the Niger River delta, and extended to
Senegal. However, it never developed far from its original region. Its cultivation even
declined in favour of the Asian species, possibly brought to the African continent by
Arabs coming from the east coast between the 7th and 11th centuries CE.

In parts of Africa under Islam, rice was chiefly grown in southern Morocco. During the
tenth century rice was also brought to east Africa by Muslim traders. Although the
diffusion of rice in much of sub-Saharan Africa remains uncertain, Muslims brought it to the
region stretching from Lake Chad to the White Nile.[37]

The actual and hypothesized cultivation of rice (areas shown in green) in the Old World
(both Muslim and non-Muslim regions) during Islamic times (700-1500). Cultivation of
rice during pre-Islamic times is shown in orange.[37]

[edit] Middle East
According to Zohary and Hopf (2000, p. 91), O. sativa was introduced to the Middle East
in Hellenistic times, and was familiar to both Greek and Roman writers. They report that
a large sample of rice grains was recovered from a grave at Susa in Iran (dated to the first
century AD) at one end of the ancient world, while at the same time rice was grown in
the Po valley in Italy. However, Pliny the Elder writes that rice (oryza) is grown only in
"Egypt, Syria, Cilicia, Asia Minor and Greece" (N.H. 18.19).[citation needed]

After the rise of Islam, rice was grown anywhere there was enough water to irrigate it.
Thus, desert oases, river valleys, and swamp lands were all important sources of rice
during the Muslim Agricultural Revolution.[37]

In Iraq rice was grown in some areas of southern Iraq. With the rise of Islam it moved
north to Nisibin, the southern shores of the Caspian Sea and then beyond the Muslim
world into the valley of Volga. In Israel, rice came to be grown in the Jordan valley. Rice
is also grown in Yemen.[37]

[edit] Europe

The Muslims (later known as Moors) brought Asiatic rice to the Iberian Peninsula in the
tenth century. Records indicate it was grown in Valencia and Majorca. In Majorca, rice
cultivation seems to have stopped after the Christian conquest, although historians are not
certain.[37]

Muslims also brought rice to Sicily, where it was an important crop.[37]

After the middle of the 15th century, rice spread throughout Italy and then France, later
propagating to all the continents during the age of European exploration.

[edit] United States

South Carolina rice plantation (Mansfield Plantation, Georgetown.)

In 1694, rice arrived in South Carolina, probably originating from Madagascar.[citation needed]

In the United States, colonial South Carolina and Georgia grew and amassed great wealth
from the slave labor obtained from the Senegambia area of West Africa and from coastal
Sierra Leone. At the port of Charleston, through which 40% of all American slave
imports passed, slaves from this region of Africa brought the highest prices, in
recognition of their prior knowledge of rice culture, which was put to use on the many
rice plantations around Georgetown, Charleston, and Savannah. From the slaves,
plantation owners learned how to dyke the marshes and periodically flood the fields. At
first the rice was milled by hand with wooden paddles, then winnowed in sweetgrass
baskets (the making of which was another skill brought by the slaves). The invention of
the rice mill increased profitability of the crop, and the addition of water power for the
mills in 1787 by millwright Jonathan Lucas was another step forward. Rice culture in the
southeastern U.S. became less profitable with the loss of slave labor after the American
Civil War, and it finally died out just after the turn of the 20th century. Today, people can
visit the only remaining rice plantation in South Carolina that still has the original
winnowing barn and rice mill from the mid-1800s at the historic Mansfield Plantation in
Georgetown, SC. The predominant strain of rice in the Carolinas was from Africa and
was known as "Carolina Gold." The cultivar has been preserved and there are current
attempts to reintroduce it as a commercially grown crop.[38]

American long-grain rice

In the southern United States, rice has been grown in southern Arkansas, Louisiana, and
east Texas since the mid 1800s. Many Cajun farmers grew rice in wet marshes and low
lying prairies. In recent years rice production has risen in North America, especially in
the Mississippi River Delta areas in the states of Arkansas and Mississippi.

Rice cultivation began in California during the California Gold Rush, when an estimated
40,000 Chinese laborers immigrated to the state and grew small amounts of the grain for
their own consumption. However, commercial production began only in 1912 in the town
of Richvale in Butte County.[39] By 2006, California produced the second largest rice crop
in the United States,[40] after Arkansas, with production concentrated in six counties north
of Sacramento.[41] Unlike the Mississippi Delta region, California's production is
dominated by short- and medium-grain japonica varieties, including cultivars developed
for the local climate such as Calrose, which makes up as much as eighty-five percent
the state's crop.[42]

References to wild rice in the Americas are to the unrelated Zizania palustris

More than 100 varieties of rice are commercially produced primarily in six states
(Arkansas, Texas, Louisiana, Mississippi, Missouri, and California) in the U.S.[43]
According to estimates for the 2006 crop year, rice production in the U.S. is valued at
$1.88 billion, approximately half of which is expected to be exported. The U.S. provides
about 12% of world rice trade.[44] The majority of domestic utilization of U.S. rice is
direct food use (58%), while 16 percent each goes to processed foods and to beer. The
remaining 10 percent is found in pet food.[45]
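Reading the 16 percent figure as applying to processed foods and beer each (an assumption, since the source wording is ambiguous), the shares account for the whole crop. A quick illustrative check:

```python
# Sketch of the U.S. rice utilization breakdown quoted above.
# Assumption: the "16 percent" applies to processed foods and beer each.

utilization = {
    "direct food use": 58,
    "processed foods": 16,
    "beer": 16,
    "pet food": 10,
}

total = sum(utilization.values())
print(f"shares sum to {total}%")          # shares sum to 100%

crop_value_usd = 1.88e9                   # 2006 crop year estimate, from the text
exported = crop_value_usd / 2             # "approximately half" expected to be exported
print(f"exports ~ ${exported / 1e9:.2f} billion")   # ~ $0.94 billion
```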

[edit] Australia

Although attempts to grow rice in the well-watered north of Australia have been made for
many years, they have consistently failed because of inherent iron and manganese
toxicities in the soils and destruction by pests.

In the 1920s it was seen as a possible irrigation crop on soils within the Murray-Darling
Basin that were too heavy for the cultivation of fruit and too infertile for wheat.[46]

Because irrigation water, despite the extremely low runoff of temperate Australia, was
(and remains) very cheap, the growing of rice was taken up by agricultural groups over
the following decades. Californian varieties of rice were found suitable for the climate in
the Riverina, and the first mill opened at Leeton in 1951.

Even before this Australia's rice production greatly exceeded local needs,[47] and rice
exports to Japan have become a major source of foreign currency. Above-average rainfall
from the 1950s to the middle 1990s[48] encouraged the expansion of the Riverina rice
industry, but its prodigious water use in a practically waterless region began to attract the
attention of environmental scientists. These became severely concerned with declining
flow in the Snowy River and the lower Murray River.

Although rice growing in Australia is exceedingly efficient and highly profitable due to
the cheapness of land, several recent years of severe drought have led many to call for its
elimination because of its effects on extremely fragile aquatic ecosystems. Politicians,
however, have not made any plan to reduce rice growing in southern Australia.

[edit] World production and trade
[edit] Production and export
Paddy rice output in 2005.

World production of rice[49] has risen steadily from about 200 million tonnes of paddy
rice in 1960 to 600 million tonnes in 2004. Milled rice is about 68% of paddy rice by
weight. In the year 2004, the top three producers were China (26% of world production),
India (20%), and Indonesia (9%).
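The figures above combine directly into milled-rice and per-country tonnages. A small worked-arithmetic sketch, using only the numbers quoted in the text:

```python
# Worked numbers from the production figures above: 2004 world paddy
# output was ~600 million tonnes, and milled rice is about 68% of
# paddy rice by weight.

paddy_mt = 600          # million tonnes of paddy rice, 2004
milling_yield = 0.68    # milled rice as a fraction of paddy weight

milled_mt = paddy_mt * milling_yield
print(f"milled equivalent: ~{milled_mt:.0f} million tonnes")   # ~408

# Top producers' approximate shares of world paddy output in 2004:
shares = {"China": 0.26, "India": 0.20, "Indonesia": 0.09}
for country, share in shares.items():
    print(f"{country}: ~{paddy_mt * share:.0f} million tonnes")
```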

World trade figures are very different, as only about 5–6% of rice produced is traded
internationally. The largest three exporting countries are Thailand (26% of world
exports), Vietnam (15%), and the United States (11%), while the largest three importers
are Indonesia (14%), Bangladesh (4%), and Brazil (3%). Although China and India are
the top two producers of rice in the world, both countries consume most of the rice
they produce domestically, leaving little to be traded internationally.

[edit] Price

From March to May 2008, the price of rice rose greatly due to a rice shortage. In late April
2008, rice prices hit 24 cents a pound, twice the price of seven months earlier.[50]

On 30 April 2008, Thailand announced plans to create the Organisation of Rice
Exporting Countries (OREC), with the potential to develop into a price-fixing cartel
for rice.[51][52]

[edit] Worldwide consumption

Between 1961 and 2002, per capita consumption of rice increased by 40%. Rice
consumption is highest in Asia, where average per capita consumption is higher than
80 kg/person per year. In the subtropics such as South America, Africa, and the Middle
East, per capita consumption averages between 30 and 60 kg/person per year. People in
the developed West, including Europe and the United States, consume less than
10 kg/person per year.[54][55]

Rice is the most important crop in Asia. In Cambodia, for example, 90% of the total
agricultural area is used for rice production. See The Burning of the Rice by Don
Puckridge for the story of rice production in Cambodia [7].

Consumption of rice by country, 2003/2004 (million metric tons):[53]

    China          135
    India           85
    Egypt           39
    Indonesia       37
    Malaysia        37
    Bangladesh      26
    Vietnam         18
    Thailand        10
    Myanmar         10
    Philippines      9.7
    Japan            8.7
    Brazil           8.1
    South Korea      5.0
    United States    3.9

Source: United States Department of Agriculture[6]
U.S. rice consumption has risen sharply over the past 25 years, fueled in part by
commercial applications such as beer production.[56] Almost one in five adult Americans
now report eating at least half a serving of white or brown rice per day.[57]

[edit] Environmental impacts

In many countries where rice is the main cereal crop, rice cultivation is responsible for
most of the methane emissions.[58] Farmers in some arid regions try to cultivate rice
using groundwater drawn up by pumps, thus increasing the chances of famine in the
long run.[citation needed] Rice also requires much more water to produce than other grains.[59]

As sea levels rise, rice paddies will remain flooded for longer periods. Longer
submersion cuts the soil off from atmospheric oxygen and causes fermentation of organic
matter in the soil. Under these anaerobic conditions the soil cannot hold the carbon
during the wet season; microbes in the soil convert it into methane, which is then
released through the respiration of the rice plant or by diffusion through the water.
Agriculture's current contribution of methane is ~15% of anthropogenic greenhouse
gases, as estimated by the IPCC. A further rise in sea level of 10-85 centimeters would
therefore stimulate the release of more methane into the air by rice plants. Methane is
twenty times more effective as a greenhouse gas than carbon dioxide.[60]

[edit] Pests and diseases
Main article: List of rice diseases

Rice pests are any organisms or microbes with the potential to reduce the yield or value
of the rice crop (or of rice seeds)[61] (Jahn et al 2007). Rice pests include weeds,
pathogens, insects, rodents, and birds. A variety of factors can contribute to pest
outbreaks, including the overuse of pesticides and high rates of nitrogen fertilizer
application (e.g. Jahn et al. 2005) [8]. Weather conditions also contribute to pest
outbreaks. For example, rice gall midge and army worm outbreaks tend to follow high
rainfall early in the wet season, while thrips outbreaks are associated with drought
(Douangboupha et al. 2006).

One of the challenges facing crop protection specialists is to develop rice pest
management techniques which are sustainable. In other words, to manage crop pests in
such a manner that future crop production is not threatened (Jahn et al. 2001). Rice pests
are managed by cultural techniques, pest-resistant rice varieties, and pesticides (which
include insecticide). Increasingly, there is evidence that farmers' pesticide applications
are often unnecessary (Jahn et al. 1996, 2004a,b) [9] [10] [11]. By reducing the
populations of natural enemies of rice pests (Jahn 1992), misuse of insecticides can
actually lead to pest outbreaks (Cohen et al. 1994). Botanicals, so-called “natural
pesticides”, are used by some farmers in an attempt to control rice pests, but in general
the practice is not common. Upland rice is grown without standing water in the field.
Some upland rice farmers in Cambodia spread chopped leaves of the bitter bush
(Chromolaena odorata (L.)) over the surface of fields after planting. The practice
probably helps the soil retain moisture and thereby facilitates seed germination. Farmers
also claim the leaves are a natural fertilizer and help suppress weed and insect
infestations (Jahn et al. 1999).

Among rice cultivars there are differences in the responses to, and recovery from, pest
damage (Jahn et al. 2004c, Khiev et al. 2000). Therefore, particular cultivars are
recommended for areas prone to certain pest problems. The genetically based ability of a
rice variety to withstand pest attacks is called resistance. Three main types of plant
resistance to pests are recognized (Painter 1951, Smith 2005): nonpreference,
antibiosis, and tolerance. Nonpreference (or antixenosis) (Kogan and Ortman 1978)
describes host plants which insects prefer to avoid; antibiosis is where insect survival is
reduced after the ingestion of host tissue; and tolerance is the capacity of a plant to
produce high yield or retain high quality despite insect infestation. Over time, the use of
pest resistant rice varieties selects for pests that are able to overcome these mechanisms
of resistance. When a rice variety is no longer able to resist pest infestations, resistance is
said to have broken down. Rice varieties that can be widely grown for many years in the
presence of pests, and retain their ability to withstand the pests are said to have durable
resistance. Mutants of popular rice varieties are regularly screened by plant breeders to
discover new sources of durable resistance (e.g. Liu et al. 2005, Sangha et al. 2008).

Major rice pests include the brown planthopper[12] (Preap et al. 2006), armyworms[13],
the green leafhopper, the rice gall midge (Jahn and Khiev 2004), the rice bug (Jahn et al.
2004c), hispa (Murphy et al. 2006), the rice leaffolder, stemborer, rats (Leung et al 2002),
and the weed Echinochloa crus-galli (Pheng et al. 2001). Rice weevils[14] are also known
to be a threat to rice crops in the US, PR China and Taiwan.

Major rice diseases include Rice Ragged Stunt, Sheath Blight and Tungro. Rice blast,
caused by the fungus Magnaporthe grisea, is the most significant disease affecting rice
cultivation.

[edit] Cultivars
Main article: List of rice varieties

While most breeding of rice is carried out for crop quality and productivity, there are
varieties selected for other reasons. Cultivars exist that are adapted to deep flooding, and
these are generally called 'floating rice' [15].

The largest collection of rice cultivars is at the International Rice Research Institute
(IRRI), with over 100,000 rice accessions [16] held in the International Rice Genebank
[17]. Rice cultivars are often classified by their grain shapes and texture. For example,
Thai Jasmine rice is long-grain and relatively less sticky, as long-grain rice contains less
amylopectin than short-grain cultivars. Chinese restaurants usually serve long-grain as
plain unseasoned steamed rice. Japanese mochi rice and Chinese sticky rice are
short-grain. Chinese cooks use sticky rice, properly known as "glutinous rice" (note:
glutinous refers to the glue-like characteristic of the cooked rice, not to gluten), to
make zongzi. Japanese table rice is a sticky, short-grain rice; Japanese sake rice is
another kind as well.

Indian rice cultivars include long-grained and aromatic Basmati (grown in the North),
long and medium-grained Patna rice and short-grained Sona Masoori (also spelled Sona
Masuri). In South India the most prized cultivar is 'ponni' which is primarily grown in the
delta regions of Kaveri River. Kaveri is also referred to as ponni in the South and the
name reflects the geographic region where it is grown. In the Western Indian state of
Maharashtra, a short-grain variety called Ambemohar is very popular. This rice has a
characteristic fragrance of mango blossom.

Brown Rice

Polished Indian sona masuri rice.

Aromatic rices have definite aromas and flavours; the most noted cultivars are Thai
fragrant rice, Basmati, Patna rice, and a hybrid cultivar from America sold under the
trade name, Texmati. Both Basmati and Texmati have a mild popcorn-like aroma and
flavour. In Indonesia there are also red and black cultivars.

High-yield cultivars of rice suitable for cultivation in Africa and other dry ecosystems
called the new rice for Africa (NERICA) cultivars have been developed. It is hoped that
their cultivation will improve food security in West Africa.

Draft genomes for the two most common rice cultivars, indica and japonica, were
published in April 2002. Rice was chosen as a model organism for the biology of grasses
because of its relatively small genome (~430 megabase pairs). Rice was the first crop
with a complete genome sequence.[62]

On December 16, 2002, the UN General Assembly declared the year 2004 the
International Year of Rice. The declaration was sponsored by more than 40 countries.
[edit] Biotechnology
[edit] High-yielding varieties

Main article: High-yielding variety

The High Yielding Varieties are a group of crops created intentionally during the Green
Revolution to increase global food production. Rice, like corn and wheat, was genetically
manipulated to increase its yield. This project enabled labor markets in Asia to shift away
from agriculture, and into industrial sectors. The first ‘modern rice’, IR8 was produced in
1966 at the International Rice Research Institute which is based in the Philippines at the
University of the Philippines' Los Banos site. IR8 was created through a cross between an
Indonesian variety named “Peta” and a Chinese variety named “Dee Geo Woo Gen.”[63]

With advances in molecular genetics, the mutant genes responsible for reduced
height (rht), gibberellin insensitivity (gai1) and slender rice (slr1) in Arabidopsis and rice
were identified as cellular signaling components of gibberellic acid (a phytohormone
involved in regulating stem growth via its effect on cell division) and subsequently
cloned. Stem growth in the mutant background is significantly reduced leading to the
dwarf phenotype. Photosynthetic investment in the stem is reduced dramatically as the
shorter plants are inherently more stable mechanically. Assimilates become redirected to
grain production, amplifying in particular the effect of chemical fertilizers on commercial
yield. In the presence of nitrogen fertilizers, and intensive crop management, these
varieties increase their yield 2 to 3 times.

[edit] Potentials for the future

As the UN Millennium Development project seeks to spread global economic
development to Africa, the ‘Green Revolution’ is cited as the model for economic
development. With the intent of replicating the successful Asian boom in agronomic
productivity, groups like the Earth Institute are doing research on African agricultural
systems, hoping to increase productivity. An important way this can happen is the
production of ‘New Rices for Africa’ (NERICA). These rices, selected to tolerate the
low-input and harsh growing conditions of African agriculture, are produced by the African
Rice Center and billed as technology from Africa, for Africa. The NERICA have
appeared in The New York Times (October 10, 2007) and International Herald Tribune
(October 9, 2007), trumpeted as miracle crops that will dramatically increase rice yield in
Africa and enable an economic resurgence.

[edit] Golden rice

Main article: Golden rice

German and Swiss researchers have engineered rice to produce Beta-carotene, with the
intent that it might someday be used to treat vitamin A deficiency. Additional efforts are
being made to improve the quantity and quality of other nutrients in golden rice.[64] The
addition of the carotene turns the rice gold.

[edit] Expression of human proteins

Ventria Bioscience has genetically modified rice to express lactoferrin, lysozyme, and
human serum albumin which are proteins usually found in breast milk. These proteins
have antiviral, antibacterial, and antifungal effects.[65]

Rice containing these added proteins can be used as a component in oral rehydration
solutions which are used to treat diarrheal diseases, thereby shortening their duration and
reducing recurrence. Such supplements may also help reverse anemia.[66]

[edit] Others
In Korean and Japanese, the Chinese character for rice (米, kome) can be read as a
combination of the characters for eighty-eight (八十八, hachi-jū-hachi): two eights (八)
around a ten (十). A Japanese proverb holds that the farmer expends eighty-eight labours
on rice between planting and harvest; the saying teaches mottainai (a sense of regret
over waste) and gratitude toward both the farmer and the rice itself.[67]

[edit] See also
• Rice (cooked)
• Basmati rice
• Beaten rice
• Bhutanese red rice
• Black rice
• Brown rice syrup
• Fengyuan City
• Forbidden rice
• FreeRice
• Inari
• Indonesian rice table
• Jasmine rice
• List of rice dishes
• List of rice varieties
• New Rice for Africa
• Nutritious Rice for the World
• Protein per unit area
• Puffed rice
• Red rice
• Rice Belt
• Rice bran oil
• Rice ethanol
• Rice wine
• Straw
• System of rice intensification
• White rice
• Rice shortage

[edit] References
Brick
From Wikipedia, the free encyclopedia

Jump to: navigation, search
For other uses, see Brick (disambiguation).

An old brick wall in English bond laid with alternating courses of headers and stretchers.
A brick is a block of ceramic material used in masonry construction, laid using mortar.

Contents
[hide]

• 1 History
• 2 Methods of Manufacture
o 2.1 Mud bricks
 2.1.1 Rail kilns
 2.1.2 Bull's Trench Kilns
o 2.2 Dry pressed bricks
o 2.3 Extruded bricks
o 2.4 Calcium silicate bricks
o 2.5 Fly ash bricks
• 3 Influence on fired colour
• 4 Optimal dimensions, characteristics and strength
• 5 Use
• 6 See also
• 7 Gallery
• 8 Notes
• 9 References

• 10 External links

[edit] History

The Roman Constantine Basilica in Trier, Germany, built in the 4th century with fired
bricks as audience hall for Constantine I

The oldest shaped bricks found date back to 7,500 B.C.[citation needed] They have been found
in Çayönü, a place located in the upper Tigris area, and in south east Anatolia close to
Diyarbakir. Other more recent findings, dated between 7,000 and 6,395 B.C., come from
Jericho and Catal Hüyük. From archaeological evidence, the invention of the fired brick
(as opposed to the considerably earlier sun-dried mud brick) is believed to have arisen in
about the third millennium BC in the Middle East. Being much more resistant to cold and
moist weather conditions, brick enabled the construction of permanent buildings in
regions where the harsher climate precluded the use of mud bricks. Bricks have the added
warmth benefit of slowly storing heat energy from the sun during the day and continuing
to release heat for several hours after sunset.

The Ancient Egyptians and the Indus Valley Civilization also used mudbrick extensively,
as can be seen in the ruins of Buhen, Mohenjo-daro and Harappa, for example. In the
Indus Valley Civilization all bricks corresponded to sizes in a perfect ratio of 4:2:1.[citation
needed]

The ancient Jetavanaramaya stupa in Anuradhapura, Sri Lanka is one of the largest brick
structures in the world

The world's highest brick tower of St. Martin's Church, Landshut, completed in 1500

In Sumerian times offerings of food and drink were presented to "the Bone god," who
was "represented in the ritual by the first brick." More recently, mortar for the
foundations of the Hagia Sophia in Istanbul was mixed with "a broth of barley and bark
of elm" and sacred relics, accompanied by prayers, placed between every 12 bricks.
The Romans made use of fired bricks, and the Roman legions, which operated mobile
kilns, introduced bricks to many parts of the empire. Roman bricks are often stamped with
the mark of the legion that supervised their production. The use of bricks in Southern and
Western Germany, for example, can be traced back to traditions already described by the
Roman architect Vitruvius.

In pre-modern China, brick-making was the job of a lowly and unskilled artisan, but a
kilnmaster was respected as a step above the latter.[1] Early descriptions of the production
process and glazing techniques used for bricks can be found in the Song Dynasty
carpenter's manual Yingzao Fashi, published in 1103 by the government official Li Jie,
who was put in charge of overseeing public works for the central government's
construction agency. The historian Timothy Brook writes of the production process in
Ming Dynasty China (aided with visual illustrations from the Tiangong Kaiwu
encyclopedic text published in 1637):

The brickwork of Shebeli Tower in Iran displays 12th century craftsmanship

...the kilnmaster had to make sure that the temperature inside the kiln stayed at a level that caused
the clay to shimmer with the color of molten gold or silver. He also had to know when to quench
the kiln with water so as to produce the surface glaze. To anonymous laborers fell the less skilled
stages of brick production: mixing clay and water, driving oxen over the mixture to trample it into
a thick paste, scooping the paste into standardized wooden frames (to produce a brick roughly 42
centimeters long, 20 centimeters wide, and 10 centimeters thick), smoothing the surfaces with a
wire-strung bow, removing them from the frames, printing the fronts and backs with stamps that
indicated where the bricks came from and who made them, loading the kilns with fuel (likelier
wood than coal), stacking the bricks in the kiln, removing them to cool while the kilns were still
hot, and bundling them into pallets for transportation. It was hot, filthy work.[2]

The idea of signing one's name on one's work and signifying the place where the product
was made—in this case, bricks—was nothing new to the Ming era and had little or
nothing to do with vanity.[3] As far back as the Qin Dynasty (221 BC–206 BC), the
government required blacksmiths and weapon-makers to engrave their names onto
weapons in order to trace the weapon back to them, lest their weapons should prove to be
of a lower quality than the standard required by the government.[4]

In the 12th century, bricks from Northern Italy were re-introduced to Northern Germany,
where an independent tradition evolved. It culminated in the so-called brick Gothic, a
reduced style of Gothic architecture that flourished in Northern Europe, especially in the
regions around the Baltic Sea which are without natural rock resources. Brick Gothic
buildings, which are built almost exclusively of bricks, are to be found in Denmark,
Germany, Poland and Russia.

During the Renaissance and the Baroque, visible brick walls were unpopular and the
brickwork was often covered with plaster. It was only during the mid-18th century that
visible brick walls regained some degree of popularity, as illustrated by the Dutch
Quarter of Potsdam, for example.

Chile house in Hamburg, Germany

The transport in bulk of building materials such as bricks over long distances was rare
before the age of canals, railways, roads and heavy goods vehicles. Before this time
bricks were generally made as close as possible to their point of intended use. It has been
estimated that in England in the eighteenth century carrying bricks by horse and cart for
ten miles (16 km) over the poor roads then existing could more than double their price.

Bricks were often used, even in areas where stone was available, for reasons of speed and
economy. The buildings of the Industrial Revolution in Britain were largely constructed
of brick and timber to meet the unprecedented demand of the period. Again, during the building
boom of the nineteenth century in the eastern seaboard cities of Boston and New York,
for example, locally made bricks were often used in construction in preference to the
brownstones of New Jersey and Connecticut for these reasons.
The trend of building upwards for offices that emerged towards the end of the 19th
century displaced brick in favor of cast and wrought iron, and later steel and concrete.
Some early 'skyscrapers' were made in masonry and demonstrated the limitations of the
material. For example, the Monadnock Building in Chicago (opened in 1896) is masonry
and just seventeen stories high; its ground walls are almost 1.8 meters thick, and
building any higher would clearly have led to excessive loss of internal floor space on
the lower floors. Brick was revived for high structures in the 1950s following work by the Swiss
Federal Institute of Technology and the Building Research Establishment in Watford,
UK. This method produced eighteen story structures with bearing walls no thicker than a
single brick (150-225 mm). This potential has not been fully developed because of the
ease and speed of building with other materials; in the late 20th century, brick was
confined to low- or medium-rise structures, used as a thin decorative cladding over
concrete-and-steel buildings, or reserved for internal non-loadbearing walls.

[edit] Methods of Manufacture

Brick making at the beginning of the 20th century.

Bricks may be made from clay, shale, soft slate, calcium silicate, concrete, or shaped
from quarried stone.

Clay is the most common material, with modern clay bricks formed in one of three
processes - soft mud, dry press, or extruded.

In 2007 a new type of brick was invented, based on fly ash, a by-product of coal power
plants.

[edit] Mud bricks

The soft mud method is the most common, as it is the most economical. It starts with the
raw clay, preferably in a mix with 25-30% sand to reduce shrinkage. The clay is first
ground and mixed with water to the desired consistency. The clay is then pressed into
steel moulds with a hydraulic press. The shaped clay is then fired ("burned") at 900-1000
°C to achieve strength.
[edit] Rail kilns

Xhosa brickmaker at kiln near Ngcobo in the former Transkei in 2007.

In modern brickworks, this is usually done in a continuously fired tunnel kiln, in which
the bricks move slowly through the kiln on conveyors, rails, or kiln cars to achieve
consistency for all bricks. The bricks often have added lime, ash, and organic matter to
speed the burning.

[edit] Bull's Trench Kilns

In Pakistan and India, brick making is typically a manual process. The most common
type of brick kiln in use there is the Bull's Trench Kiln (BTK), based on a design
developed by British engineer W. Bull in the late nineteenth century.

An oval or circular trench, 6-9 meters wide, 2-2.5 meters deep, and 100-150 meters in
circumference, is dug in a suitable location. A tall exhaust chimney is constructed in the
center. Half or more of the trench is filled with "green" (unfired) bricks which are stacked
in an open lattice pattern to allow airflow. The lattice is capped with a roofing layer of
finished brick.

In operation, new green bricks, along with roofing bricks, are stacked at one end of the
brick pile; cooled finished bricks are removed from the other end for transport. In the
middle, the brickworkers create a firing zone by dropping fuel (coal, wood, oil, debris,
etc.) through access holes in the roof above the trench.
West face of Roskilde Cathedral in Roskilde, Denmark.

The advantage of the BTK design is a much greater energy efficiency compared with
clamp or scove kilns. Sheet metal or boards are used to route the airflow through the
brick lattice so that fresh air flows first through the recently burned bricks, heating the
air, then through the active burning zone. The air continues through the green brick zone
(pre-heating and drying them), and finally out the chimney where the rising gases create
suction which pulls air through the system. The reuse of heated air yields a considerable
savings in fuel cost.

As with the rail process above, the BTK process is continuous. A half dozen laborers
working around the clock can fire approximately 15,000-25,000 bricks a day. Unlike the
rail process, in the BTK process the bricks do not move. Instead, the locations at which
the bricks are loaded, fired, and unloaded gradually rotate through the trench.[5]

[edit] Dry pressed bricks

The dry press method is similar to mud brick but starts with a much thicker clay mix, so
it forms more accurate, sharper-edged bricks. The greater force in pressing and the longer
burn make this method more expensive.

[edit] Extruded bricks

For extruded bricks the clay is mixed with 10-15% water (stiff extrusion) or 20-25%
water (soft extrusion). This is forced through a die to create a long cable of material of
the proper width and depth. This is then cut into bricks of the desired length by a wall of
wires. Most structural bricks are made by this method, as hard dense bricks result, and
holes or other perforations can be produced by the die. The introduction of holes reduces
the needed volume of clay through the whole process, with the consequent reduction in
cost. The bricks are lighter and easier to handle, and have thermal properties different
from solid bricks. The cut bricks are hardened by drying for between 20 and 40 hours at
50-150 °C before being fired. The heat for drying is often waste heat from the kiln.

[edit] Calcium silicate bricks

The raw materials for calcium silicate bricks include lime mixed with quartz, crushed
flint or crushed siliceous rock together with mineral colorants. The materials are mixed
and left until the lime is completely hydrated, the mixture is then pressed into moulds and
cured in an autoclave for two or three hours to speed the chemical hardening. The
finished bricks are very accurate and uniform, although the sharp arrises need careful
handling to avoid damage to brick (and bricklayer). The bricks can be made in a variety
of colours; white is common, but a wide range of "pastel" shades can be achieved.

[edit] Fly ash bricks

In May 2007, Henry Liu, a retired civil engineer, announced that he had invented a new
brick composed of fly ash and water compressed at 4,000 psi (about 27,600 kPa) for two weeks.
Owing to the high concentration of calcium oxide in fly ash, the brick is considered "self-
cementing". The brick is toughened using an air entrainment agent, which traps
microscopic bubbles inside the brick so that it resists penetration by water, allowing it to
withstand up to 100 freeze-thaw cycles. Since the manufacturing method uses a waste by-
product rather than clay, and solidification takes place under pressure rather than heat, it
has several important environmental benefits. It saves energy, reduces mercury pollution,
alleviates the need for landfill disposal of fly ash, and costs 20% less than traditional clay
brick manufacture. Liu intends to license his technology to manufacturers in 2008.[6][7]

Brick sculpturing on Thornbury Castle, Thornbury, near Bristol, England.
The chimneys were erected in 1514.

[edit] Influence on fired colour
The fired colour of clay bricks is significantly influenced by the chemical and mineral
content of raw materials, the firing temperature and the atmosphere in the kiln. For
example, pink-coloured bricks are the result of a high iron content, while white or yellow
bricks have a higher lime content. Most bricks burn to various red hues; if the temperature
is increased, the colour moves through dark red and purple, then to brown or grey at around
1300 °C. Calcium silicate bricks have a wider range of shades and colours, depending on
the colorants used.
Bricks formed from concrete are usually termed blocks, and are typically pale grey in
colour. They are made from a dry, small aggregate concrete which is formed in steel
moulds by vibration and compaction in either an "egglayer" or static machine. The
finished blocks are cured rather than fired using low-pressure steam. Concrete blocks are
manufactured in a much wider range of shapes and sizes than clay bricks and are also
available with a wider range of face treatments - a number of which are to simulate the
appearance of clay bricks.

An impervious and ornamental surface may be laid on brick either by salt glazing, in
which salt is added during the burning process, or by the use of a "slip," which is a glaze
material into which the bricks are dipped. Subsequent reheating in the kiln fuses the slip
into a glazed surface integral with the brick base.

Natural stone bricks are of limited modern utility, due to their enormous comparative
mass, the consequent foundation needs, and the time-consuming and skilled labour
needed in their construction and laying. They are very durable and considered more
handsome than clay bricks by some. Only a few stones are suitable for bricks. Common
materials are granite, limestone and sandstone. Other stones may be used (e.g. marble,
slate, quartzite, etc.) but these tend to be limited to a particular locality.

[edit] Optimal dimensions, characteristics and strength

Loose bricks

For efficient handling and laying bricks must be small enough and light enough to be
picked up by the bricklayer using one hand (leaving the other hand free for the trowel).
Bricks are usually laid flat and as a result the effective limit on the width of a brick is set
by the distance which can conveniently be spanned between the thumb and fingers of one
hand, normally about four inches (about 100 mm). In most cases, the length of a brick is
about twice its width, about eight inches (about 200 mm) or slightly more. This allows
bricks to be laid bonded in a structure to increase its stability and strength (for an
example of this, see the illustration of bricks laid in English bond at the head of this
article). The wall is built using alternating courses of stretchers (bricks laid longways)
and headers (bricks laid crossways). The headers tie the wall together over its width.

The correct brick for a job can be picked from a choice of color, surface texture, density,
weight, absorption and pore structure, thermal characteristics, thermal and moisture
movement, and fire resistance.
Face brick ("house brick") sizes[8], from small to large:

 Standard         Imperial              Metric
 United States    8 × 4 × 2¼ inches     203 × 102 × 57 mm
 United Kingdom   8½ × 4 × 2½ inches    215 × 102.5 × 65 mm
 South Africa     8¾ × 4 × 3 inches     222 × 106 × 73 mm
 Australia        9 × 4⅓ × 3 inches     230 × 110 × 76 mm

In England, the length and the width of the common brick have remained fairly constant
over the centuries, but the depth has varied from about two inches (about 51 mm) or
smaller in earlier times to about two and a half inches (about 64 mm) more recently. In
the United States, modern bricks are usually about 8 × 4 × 2.25 inches (203 ×
102 × 57 mm). In the United Kingdom, the usual ("work") size of a modern brick is 215
× 102.5 × 65 mm (about 8.5 × 4 × 2.5 inches), which, with a nominal 10 mm mortar
joint, forms a "coordinating" or fitted size of 225 × 112.5 × 75 mm, for a ratio of 6:3:2.
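The coordinating-size arithmetic is simple: one nominal mortar joint is added to each work dimension. A minimal sketch in Python (the function name is illustrative, not any standard's):

```python
# Illustrative sketch: derive a brick's "coordinating" size from its
# work size by adding the nominal mortar joint to each dimension.
def coordinating_size(work_mm, joint_mm=10):
    """Add one nominal mortar joint to each work dimension (mm)."""
    return tuple(d + joint_mm for d in work_mm)

uk_work = (215, 102.5, 65)           # UK work size in mm
uk_coord = coordinating_size(uk_work)
print(uk_coord)                      # (225, 112.5, 75)

# The coordinating dimensions are in the ratio 6:3:2 (225 : 112.5 : 75).
length, width, depth = uk_coord
assert length / width == 2 and width / depth == 1.5
```

Applied to the UK work size, this reproduces the 225 × 112.5 × 75 mm coordinating size and its 6:3:2 ratio quoted above.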

Some brickmakers create innovative sizes and shapes for bricks used for plastering (and
therefore not visible) where their inherent mechanical properties are more important than
the visual ones.[9] These bricks are usually slightly larger, but not as large as blocks and
offer the following advantages:

• A slightly larger brick requires less mortar and handling (fewer bricks) which
reduces cost
• Ribbed exterior aids plastering
• More complex interior cavities allow improved insulation, while maintaining
strength.

Blocks have a much greater range of sizes. Standard coordinating sizes in length and
height (in mm) include 400×200, 450×150, 450×200, 450×225, 450×300, 600×150,
600×200, and 600×225; depths (work size, mm) include 60, 75, 90, 100, 115, 140, 150,
190, 200, 225, and 250. They are usable across this range as they are lighter than clay
bricks. The density of solid clay bricks is around 2,000 kg/m³: this is reduced by
frogging, hollow bricks, etc.; but aerated autoclaved concrete, even as a solid brick, can
have densities in the range of 450–850 kg/m³.

Bricks may also be classified as solid (less than 25% perforations by volume, although
the brick may be "frogged," having indentations on one of the longer faces), perforated
(containing a pattern of small holes through the brick removing no more than 25% of the
volume), cellular (containing a pattern of holes removing more than 20% of the volume,
but closed on one face), or hollow (containing a pattern of large holes removing more
than 25% of the brick's volume). Blocks may be solid, cellular or hollow.
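The thresholds above can be expressed as a small decision function; this is an illustrative sketch only (the function name and parameters are ours, not part of any masonry standard):

```python
def classify_brick(volume_removed_pct, through_holes=False,
                   closed_on_one_face=False):
    """Classify a clay brick by percentage of volume removed,
    following the thresholds in the text (illustrative only)."""
    if closed_on_one_face and volume_removed_pct > 20:
        return "cellular"       # holes closed on one face, >20% removed
    if through_holes and volume_removed_pct <= 25:
        return "perforated"     # small holes through the brick, <=25%
    if volume_removed_pct > 25:
        return "hollow"         # large holes, >25% of the volume
    return "solid"              # <25% removed; may still be frogged

print(classify_brick(5))                            # solid
print(classify_brick(20, through_holes=True))       # perforated
print(classify_brick(30, closed_on_one_face=True))  # cellular
print(classify_brick(40, through_holes=True))       # hollow
```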

The term "frog" for the indentation on one bed of the brick is a word that often excites
curiosity as to its origin. The most likely explanation is that brickmakers also call the
block that is placed in the mould to form the indentation a frog. Modern brickmakers
usually use plastic frogs but in the past they were made of wood. When these are wet and
have clay on them they resemble the amphibious kind of frog and this is where they got
their name. Over time this term also came to refer to the indentation left by them.
[Matthews 2006]

The compressive strength of bricks produced in the United States ranges from about 1,000
lbf/in² to 15,000 lbf/in² (7 to 105 MPa or N/mm²), varying according to the use to which
the bricks are to be put. In England, clay bricks can have strengths of up to 100 MPa,
although a common house brick is likely to show a range of 20–40 MPa.
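The two unit systems quoted above are related by 1 MPa = 1 N/mm² ≈ 145.04 lbf/in² (a constant that follows from the definitions of the pound-force and the inch). A quick check of the quoted figures:

```python
PSI_PER_MPA = 145.038  # 1 MPa expressed in lbf/in^2

def psi_to_mpa(psi):
    """Convert a pressure from lbf/in^2 (psi) to megapascals."""
    return psi / PSI_PER_MPA

# The US range quoted above, ~1,000 to 15,000 lbf/in^2:
print(round(psi_to_mpa(1_000)))   # 7  -> matches the 7 MPa figure
print(round(psi_to_mpa(15_000)))  # 103 -> roughly the 105 MPa figure
```

The upper bound works out to about 103 MPa, so the "105 MPa" in the text is a slight rounding of 15,000 lbf/in².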

[edit] Use

A brick section of the old Dixie Highway East Florida Connector on the west side of
Lake Lily in Maitland, Florida. It was built in 1915 or 1916, paved over at some point,
and restored in 1999.

Bricks are used for building and pavement. In the USA, brick pavement was found
incapable of withstanding heavy traffic, but it is coming back into use as a method of
traffic calming or as a decorative surface in pedestrian precincts. For example, in the
early 1900s, most of the streets in the city of Grand Rapids, Michigan were paved with
brick. Today, there are only about 20 blocks of brick paved streets remaining (totaling
less than 0.5 percent of all the streets in the city limits).[10]

Bricks are also used in the metallurgy and glass industries for lining furnaces. They have
various uses, especially refractory bricks such as silica, magnesia, chamotte and neutral
(chromomagnesite) refractory bricks. This type of brick must have good thermal shock
resistance, refractoriness under load, high melting point, and satisfactory porosity. There
is a large refractory brick industry, especially in the United Kingdom, Japan and the
United States.

In the United Kingdom, bricks have been used in construction for centuries. Until
recently, almost all houses were built almost entirely from red bricks. Although many
houses in the UK are now built using a mixture of concrete blocks and other materials,
many houses are skinned with a layer of bricks on the outside for aesthetic appeal.

[edit] See also
Wikimedia Commons has media related to: Bricks
Look up Brick in
Wiktionary, the free dictionary.

• Adobe
• Brick tinting
• Brickwork
• Ceramics
• Fire brick
• Masonry
• Mortar
• Millwall brick
• Mudbrick
• Roman brick
• Wienerberger

[edit] Gallery

• A brick kiln, Tamilnadu, India
• Brickwork, United States
• Brick sculpturing on Thornbury Castle, Thornbury, near Bristol, England. The chimneys were erected in 1514
• Frauenkirche (Munich), erected 1468-1488, looking up at the towers
• Mudéjar brick church tower in Teruel (14th c.)
• Brick cart, Mumbai, India
• Porotherm style clay block brick
• Ishtar Gate of Babylon

[edit] Notes

Mineral
From Wikipedia, the free encyclopedia

Jump to: navigation, search
For other uses, see Mineral (disambiguation).

A mineral is a naturally occurring solid formed through geological processes that has a
characteristic chemical composition, a highly ordered atomic structure, and specific
physical properties. A rock, by comparison, is an aggregate of minerals and need not
have a specific chemical composition. Minerals range in composition from pure elements
and simple salts to very complex silicates with thousands of known forms.[1] The study of
minerals is called mineralogy.

An assortment of minerals.
Contents
[hide]

• 1 Mineral definition and classification
o 1.1 Differences between minerals and rocks
 1.1.1 Mineral composition of rocks
o 1.2 Physical properties of minerals
o 1.3 Chemical properties of minerals
 1.3.1 Silicate class
 1.3.2 Carbonate class
 1.3.3 Sulfate class
 1.3.4 Halide class
 1.3.5 Oxide class
 1.3.6 Sulfide class
 1.3.7 Phosphate class
 1.3.8 Element class
 1.3.9 Organic class
• 2 See also
• 3 External links

• 4 References

[edit] Mineral definition and classification
To be classified as a true mineral, a substance must be a solid and have a crystalline
structure. It must also be a naturally occurring, homogeneous substance with a defined
chemical composition. Traditional definitions excluded organically derived material.
However, the International Mineralogical Association in 1995 adopted a new definition:

a mineral is an element or chemical compound that is normally crystalline and
that has been formed as a result of geological processes.[2]

The modern classifications include an organic class - in both the new Dana and the
Strunz classification schemes.[3][4]

The chemical composition may vary between end members of a mineral system. For
example, the plagioclase feldspars comprise a continuous series from sodium-rich albite
(NaAlSi3O8) to calcium-rich anorthite (CaAl2Si2O8), with four recognized intermediate
compositions between. Mineral-like substances that do not strictly meet the definition are
sometimes classified as mineraloids; other naturally occurring substances are nonminerals.
"Industrial minerals" is a market term referring to commercially valuable mined materials
(see also the Differences between minerals and rocks section below).

A crystal structure is the orderly geometric spatial arrangement of atoms in the internal
structure of a mineral. There are 14 basic crystal lattice arrangements of atoms in three
dimensions, and these are referred to as the 14 "Bravais lattices". Each of these lattices
can be classified into one of the six crystal systems, and all crystal structures currently
recognized fit in one Bravais lattice and one crystal system. This crystal structure is based
on regular internal atomic or ionic arrangement that is often expressed in the geometric
form that the crystal takes. Even when the mineral grains are too small to see or are
irregularly shaped, the underlying crystal structure is always periodic and can be
determined by X-ray diffraction. Chemistry and crystal structure together define a
mineral. In fact, two or more minerals may have the same chemical composition, but
differ in crystal structure (these are known as polymorphs). For example, pyrite and
marcasite are both iron sulfide, but their arrangement of atoms differs. Similarly, some
minerals have different chemical compositions, but the same crystal structure: for
example, halite (made from sodium and chlorine), galena (made from lead and sulfur)
and periclase (made from magnesium and oxygen) all share the same cubic crystal
structure.
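The counts given above can be tabulated. Grouping the 14 Bravais lattices under six crystal systems (with the rhombohedral lattice counted under hexagonal, matching the six-system convention used in the text; P, C, I, F and R are the standard centering symbols):

```python
# The 14 Bravais lattices grouped by crystal system (six-system convention,
# rhombohedral R counted with hexagonal).
BRAVAIS_LATTICES = {
    "triclinic":    ["P"],
    "monoclinic":   ["P", "C"],
    "orthorhombic": ["P", "C", "I", "F"],
    "tetragonal":   ["P", "I"],
    "hexagonal":    ["P", "R"],
    "cubic":        ["P", "I", "F"],
}

assert len(BRAVAIS_LATTICES) == 6                                # six systems
assert sum(len(v) for v in BRAVAIS_LATTICES.values()) == 14      # 14 lattices
```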

Crystal structure greatly influences a mineral's physical properties. For example, though
diamond and graphite have the same composition (both are pure carbon), graphite is very
soft, while diamond is the hardest of all known minerals. This happens because the
carbon atoms in graphite are arranged into sheets which can slide easily past each other,
while the carbon atoms in diamond form a strong, interlocking three-dimensional
network.

There are currently more than 4,000 known minerals, according to the International
Mineralogical Association, which is responsible for the approval of and naming of new
mineral species found in nature. Of these, perhaps 100 can be called "common," 50 are
"occasional," and the rest are "rare" to "extremely rare."

[edit] Differences between minerals and rocks

A mineral is a naturally occurring solid with a definite chemical composition and a
specific crystalline structure. A rock is an aggregate of one or more minerals. (A rock
may also include organic remains and mineraloids.) Some rocks are predominantly
composed of just one mineral. For example, limestone is a sedimentary rock composed
almost entirely of the mineral calcite. Other rocks contain many minerals, and the
specific minerals in a rock can vary widely. Some minerals, like quartz, mica or feldspar,
are common, while others have been found in only four or five locations worldwide. The
vast majority of the rocks of the Earth's crust consist of quartz, feldspar, mica, chlorite,
kaolin, calcite, epidote, olivine, augite, hornblende, magnetite, hematite, limonite and a
few other minerals.[5] Over half of the mineral species known are so rare that they have
only been found in a handful of samples, and many are known from only one or two
small grains.

Commercially valuable minerals and rocks are referred to as industrial minerals. Rocks
from which minerals are mined for economic purposes are referred to as ores (the rocks
and minerals that remain, after the desired mineral has been separated from the ore, are
referred to as tailings).

[edit] Mineral composition of rocks

A main determining factor in the formation of minerals in a rock mass is the chemical
composition of the mass, for a certain mineral can be formed only when the necessary
elements are present in the rock. Calcite is most common in limestones, as these consist
essentially of calcium carbonate; quartz is common in sandstones and in certain igneous
rocks which contain a high percentage of silica.

Other factors are of equal importance in determining the natural association or
paragenesis of rock-forming minerals, principally the mode of origin of the rock and the
stages through which it has passed in attaining its present condition. Two rock masses
may have very much the same bulk composition and yet consist of entirely different
assemblages of minerals. The tendency is always for those compounds to be formed
which are stable under the conditions under which the rock mass originated. A granite
arises by the consolidation of a molten magma at high temperatures and great pressures
and its component minerals are those stable under such conditions. Exposed to moisture,
carbonic acid and other subaerial agents at the ordinary temperatures of the Earth's
surface, some of these original minerals, such as quartz and white mica, are relatively
stable and remain unaffected; others weather or decay and are replaced by new
combinations. The feldspar passes into kaolinite, muscovite and quartz, and if any mafic
minerals such as pyroxenes, amphiboles or biotite have been present, they are often
altered to chlorite, epidote, rutile and other substances. These changes are accompanied
by disintegration, and the rock falls into a loose, incoherent, earthy mass which may be
regarded as a sand or soil. The materials thus formed may be washed away and deposited
as sandstone or siltstone. The structure of the original rock is now replaced by a new one;
the mineralogical constitution is profoundly altered; but the bulk chemical composition
may not be very different. The sedimentary rock may again undergo metamorphism. If
penetrated by igneous rocks it may be recrystallized or, if subjected to enormous
pressures with heat and movement during mountain building, it may be converted into a
gneiss not very different in mineralogical composition though radically different in
structure from the granite which was its original state.[5]

Physical properties of minerals

Classifying minerals can range from simple to very difficult. A mineral can be identified
by several physical properties, some of which are sufficient for full identification
without equivocation. In other cases, minerals can only be identified by more complex
chemical or X-ray diffraction analysis; these methods, however, can be costly and
time-consuming.

Physical properties commonly used are:[1]

• Crystal structure and habit: See the above discussion of crystal structure. A
mineral may show good crystal habit or form, or it may be massive, granular or
compact with only microscopically visible crystals.

Talc

Rough diamond.

• Hardness: the physical hardness of a mineral is usually measured on the
Mohs scale. This scale is relative and runs from 1 to 10. A mineral of a given
Mohs hardness can scratch the surface of any mineral that has a lower hardness.

o Mohs hardness scale:[6]

1. Talc Mg3Si4O10(OH)2
2. Gypsum CaSO4·2H2O
3. Calcite CaCO3
4. Fluorite CaF2
5. Apatite Ca5(PO4)3(OH,Cl,F)
6. Orthoclase KAlSi3O8
7. Quartz SiO2
8. Topaz Al2SiO4(OH,F)2
9. Corundum Al2O3
10. Diamond C (pure carbon)
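The scratch relationship implied by the scale can be sketched as a simple lookup; the names and function below are illustrative, not part of any standard library:

```python
# Mohs hardness values from the scale above (1 = softest, 10 = hardest).
MOHS = {
    "talc": 1, "gypsum": 2, "calcite": 3, "fluorite": 4, "apatite": 5,
    "orthoclase": 6, "quartz": 7, "topaz": 8, "corundum": 9, "diamond": 10,
}

def can_scratch(scratcher: str, target: str) -> bool:
    """A mineral scratches any mineral of strictly lower Mohs hardness."""
    return MOHS[scratcher] > MOHS[target]

print(can_scratch("quartz", "calcite"))  # quartz (7) scratches calcite (3): True
print(can_scratch("talc", "gypsum"))     # talc (1) cannot scratch gypsum (2): False
```

Because the scale is only ordinal, the numbers say nothing about how much harder one mineral is than another; diamond (10) is many times harder than corundum (9) in absolute terms.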

• Luster indicates the way a mineral's surface interacts with light and can range
from dull to glassy (vitreous).
o Metallic - high reflectivity like metal: galena and pyrite
o Sub-metallic - slightly less than metallic reflectivity: magnetite
o Non-metallic lusters:
 Adamantine - brilliant, the luster of diamond; also cerussite and
anglesite
 Vitreous - the luster of broken glass: quartz
 Pearly - iridescent and pearl-like: talc and apophyllite
 Resinous - the luster of resin: sphalerite and sulfur
 Silky - a soft sheen shown by fibrous materials: gypsum and
chrysotile
 Dull/earthy - shown by finely crystallized minerals: the kidney ore
variety of hematite

• Color indicates the appearance of the mineral in reflected light or transmitted light
for translucent minerals (i.e. what it looks like to the naked eye).
o Iridescence - the play of colors due to surface or internal interference.
Labradorite exhibits internal iridescence whereas hematite and sphalerite
often show the surface effect.
• Streak refers to the color of the powder a mineral leaves after rubbing it on an
unglazed porcelain streak plate. Note that this is not always the same color as the
original mineral.
• Cleavage describes the way a mineral may split apart along various planes. In thin
sections, cleavage is visible as thin parallel lines across a mineral.
• Fracture describes how a mineral breaks when broken contrary to its natural
cleavage planes.
o Conchoidal fracture is a smooth curved fracture with concentric ridges of
the type shown by glass.
o Hackly is a jagged fracture with sharp edges.
o Fibrous
o Irregular
• Specific gravity relates the mass of a mineral to the mass of an equal volume of water,
i.e. its density relative to that of water. While most minerals, including all the
common rock-forming minerals, have a specific gravity of 2.5 - 3.5, a few are
noticeably more or less dense; e.g. several sulfide minerals have high specific
gravity compared to the common rock-forming minerals.
• Other properties: fluorescence (response to ultraviolet light), magnetism,
radioactivity, tenacity (response to mechanically induced changes of shape or form),
piezoelectricity and reactivity to dilute acids.
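Since specific gravity is just the ratio of the mineral's density to that of water, the 2.5 - 3.5 rule of thumb above is easy to check numerically; the density values in this sketch are approximate and the function names are illustrative:

```python
def specific_gravity(mineral_density: float, water_density: float = 1.0) -> float:
    """Specific gravity = mass of mineral / mass of an equal volume of water,
    which reduces to the ratio of the two densities (in g/cm^3)."""
    return mineral_density / water_density

def in_common_range(sg: float) -> bool:
    """Most rock-forming minerals fall between about 2.5 and 3.5."""
    return 2.5 <= sg <= 3.5

# Quartz (~2.65 g/cm^3) sits in the common range; galena (~7.5 g/cm^3),
# a sulfide, is noticeably denser.
print(in_common_range(specific_gravity(2.65)))  # True
print(in_common_range(specific_gravity(7.5)))   # False
```

A distinctly high specific gravity, felt simply by hefting a hand sample, is therefore itself a useful identification clue for sulfides and native metals.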

Chemical properties of minerals

Minerals may be classified according to chemical composition. They are here categorized
by anion group. The list below is in approximate order of their abundance in the Earth's
crust. The list follows the Dana classification system[1][7] which closely parallels the
Strunz classification.
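Classification by anion group amounts to a lookup from the defining anion to a class; the mapping below is a rough illustration keyed to the sections that follow, not the full Dana or Strunz scheme:

```python
# Class keyed by defining anion or anion group (illustrative subset only).
ANION_CLASS = {
    "SiO4": "silicate",
    "CO3": "carbonate",
    "SO4": "sulfate",
    "Cl": "halide",
    "O": "oxide",
    "S": "sulfide",
    "PO4": "phosphate",
    None: "element",  # native elements have no defining anion
}

print(ANION_CLASS["CO3"])  # a mineral built on the carbonate anion -> "carbonate"
print(ANION_CLASS[None])   # native gold, silver, copper, etc. -> "element"
```

The full classification systems subdivide each of these classes much further, e.g. by structure type within the silicates.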

Silicate class

Quartz

The largest group of minerals by far are the silicates (most rocks are ≥95% silicates),
which are composed largely of silicon and oxygen, with the addition of ions such as
aluminium, magnesium, iron, and calcium. Some important rock-forming silicates
include the feldspars, quartz, olivines, pyroxenes, amphiboles, garnets, and micas.

Carbonate class

The carbonate minerals consist of those minerals containing the anion (CO3)²⁻ and
include calcite and aragonite (both calcium carbonate), dolomite (magnesium/calcium
carbonate) and siderite (iron carbonate). Carbonates are commonly deposited in marine
settings when the shells of dead planktonic life settle and accumulate on the sea floor.
Carbonates are also found in evaporitic settings (e.g. the Great Salt Lake, Utah) and also
in karst regions, where the dissolution and reprecipitation of carbonates leads to the
formation of caves, stalactites and stalagmites. The carbonate class also includes the
nitrate and borate minerals.

Sulfate class

Sulfates all contain the sulfate anion, SO4²⁻. Sulfates commonly form in evaporitic
settings where highly saline waters slowly evaporate, allowing the formation of both
sulfates and halides at the water-sediment interface. Sulfates also occur in hydrothermal
vein systems as gangue minerals along with sulfide ore minerals. Another occurrence is
as secondary oxidation products of original sulfide minerals. Common sulfates include
anhydrite (calcium sulfate), celestine (strontium sulfate), barite (barium sulfate), and
gypsum (hydrated calcium sulfate). The sulfate class also includes the chromate,
molybdate, selenate, sulfite, tellurate, and tungstate minerals.

Halide class

Halite

The halides are the group of minerals forming the natural salts and include fluorite
(calcium fluoride), halite (sodium chloride), sylvite (potassium chloride), and sal
ammoniac (ammonium chloride). Halides, like sulfates, are commonly found in
evaporitic settings such as playa lakes and landlocked seas such as the Dead Sea and
Great Salt Lake. The halide class includes the fluoride, chloride, bromide and iodide
minerals.

Oxide class

Oxides are extremely important in mining as they form many of the ores from which
valuable metals can be extracted. They also carry the best record of changes in the Earth's
magnetic field. They commonly occur as precipitates close to the Earth's surface,
oxidation products of other minerals in the near surface weathering zone, and as
accessory minerals in igneous rocks of the crust and mantle. Common oxides include
hematite (iron oxide), magnetite (iron oxide), chromite (iron chromium oxide), spinel
(magnesium aluminium oxide - a common component of the mantle), ilmenite (iron
titanium oxide), rutile (titanium dioxide), and ice (hydrogen oxide). The oxide class
includes the oxide and the hydroxide minerals.

Sulfide class

Many sulfide minerals are economically important as metal ores. Common sulfides
include pyrite (iron sulfide - commonly known as fool's gold), chalcopyrite (copper iron
sulfide), pentlandite (nickel iron sulfide), and galena (lead sulfide). The sulfide class also
includes the selenides, the tellurides, the arsenides, the antimonides, the bismuthinides,
and the sulfosalts (sulfur and a second anion such as arsenic).

Phosphate class

The phosphate mineral group includes any mineral with a tetrahedral unit AO4
where A can be phosphorus, antimony, arsenic or vanadium. By far the most common
phosphate is apatite, which is an important biological mineral found in the teeth and
bones of many animals. The phosphate class includes the phosphate, arsenate, vanadate,
and antimonate minerals.

Element class

The elemental group includes metals and intermetallic elements (gold, silver, copper),
semi-metals and non-metals (antimony, bismuth, graphite, sulfur). This group also
includes natural alloys, such as electrum (a natural alloy of gold and silver), phosphides,
silicides, nitrides and carbides (which are usually only found naturally in a few rare
meteorites).

Organic class

The organic mineral class includes biogenic substances in which geological processes
have been a part of the genesis or origin of the existing compound.[2] Minerals of the
organic class include various oxalates, mellitates, citrates, cyanates, acetates, formates,
hydrocarbons and other miscellaneous species.[3] Examples include whewellite,
moolooite, mellite, fichtelite, carpathite, evenkite and abelsonite.

See also
• A list of minerals with associated Wikipedia articles
• A comprehensive list of minerals
• Tucson Gem & Mineral Show
• Industrial minerals
• Mineral water
• Mineral processing
• Mineral wool
• Mining
• Norman L. Bowen
• Quarry
• Dietary mineral
• Rocks
• Strunz classification

External links

• Minerals.net
• mindat.org Mindat database
• Webmineral.com
• Mineral atlas with properties, photos

References