
The History of Disease

An electronic book co-authored by students at the Massachusetts Academy of Math and Science at WPI

Scientific and Technical Writing, 2008

Chapter 1 Anthrax

Chapter 2 Bubonic Plague

Chapter 3 Cholera

Chapter 4 Ergotism

Chapter 5 Hansen’s Disease (Leprosy)

Chapter 6 Hemophilia

Chapter 7 Influenza

Chapter 8 Malaria

Chapter 9 Polio

Chapter 10 Smallpox

Chapter 11 Syphilis

Chapter 12 Tuberculosis

Chapter 13 Typhus

Chapter 14 Typhoid Fever

Chapter 15 Yellow Fever

Chapter 1
Anthrax
Lisha Xu, Johan Skende, Alex Lovejoy

History

Anthrax has been an influential disease since Robert Koch identified its cause in the 1870s, when he uncovered the existence of a new bacterium, Bacillus anthracis, which produces black scabs and boils on the skin (Porter, 1996, pp. 200-201). Anthrax is a virulent, potentially fatal bacterial infection. It does not, however, originate from the current pool of constantly mutating bacteria. In fact, historical medical records show that descriptions of anthrax symptoms date back as far as 1491 BC in early Egyptian and Mesopotamian civilizations. Anthrax outbreaks were evidently so catastrophic for early civilizations that references to their aftermath appear in various literatures, notably prominent Hindu works (Koch, 1961, pp. 89-90). The fifth and sixth biblical plagues have been characterized as anthrax, and the "burning plague" in Homer's The Iliad is also thought to have been anthrax. Finally, Virgil, in his Georgics, wrote about an anthrax epidemic and even linked the origin of the disease to animals (Lord, 2001, p. 1).

Historical Impact

Countless people around the world have struggled with anthrax infection throughout history. Several outbreaks, such as the so-called Black Bane of Europe in 1613, which killed more than 60,000 people, afflicted an ever-growing number of individuals with Bacillus anthracis. Again, during the mid-1800s, outbreaks of occupational cutaneous and respiratory anthrax afflicted the industrial nations of the continent (Koch, 1961, pp. 89-90). Through careful analysis, present-day specialists speculate that the cutaneous infections were caused by the handling of wool, hair, and hides.

Respiratory anthrax, the more lethal form of the infection, is believed to have arisen from processes that created aerosols during the handling of wool, hair, and hides. As industries grew increasingly concerned about the loss of profits resulting directly from anthrax outbreaks, they instituted preventive measures to counteract the bacterium's dire effects. Beginning in the early 1900s, decreased use of imported, potentially contaminated animal products and improved industrial and animal husbandry practices gradually lowered the annual number of infected individuals (Koch, 1961, pp. 89-90). In the United States, the first anthrax epidemic struck the populace in the eighteenth century. Instead of merely arousing public fear, however, the event intensified researchers' determination to search for potential cures and preventive measures. These arduous efforts were not in vain; the investigations eventually led to the development of cell-free human anthrax vaccines, sterile filtrates of cultures from avirulent, noncapsulated strains that supply protective antigen and thereby attenuate the infection. Just eight years after the introduction of the first anthrax vaccine, the largest recorded anthrax outbreak in humans and animals occurred in Zimbabwe, lasting two years during the civil war. Approximately 10,000 human cases were reported and 151 deaths were documented (Kobuch, Davis, & Fleischer, 1990, pp. 34-38).
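As an illustrative calculation (ours, not from the cited source), these figures imply an overall case-fatality rate of roughly

\[
\frac{151 \text{ deaths}}{10{,}000 \text{ cases}} \approx 1.5\%,
\]

far below the historical mortality of untreated inhalational anthrax discussed later in this chapter, which is consistent with the outbreak consisting largely of the more survivable cutaneous form.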
Etiology

Bacillus anthracis, an endospore-forming bacillus, is a large, aerobic, gram-positive microorganism that survives only under specific moisture conditions. Because grasses serve as exploitable habitats for B. anthracis, grazing animals such as sheep, goats, and cattle are the predominant victims of the fatal septicemia caused by the bacterium. Other conditions favorable to sporulation include soils with a pH greater than 6.0, high nitrogen levels caused by decaying vegetation in the soil, alternating periods of rain and drought, and temperatures higher than 15 degrees Celsius. Under these conditions, high concentrations of amino acids, nucleosides, and glucose can stimulate B. anthracis spores to germinate spontaneously and grow into vegetative bacteria.
A vegetative B. anthracis cell is nonflagellated, measuring 1-8 micrometers in length and 1-1.5 micrometers in breadth. Once in contact with human tissue, the bacteria spread through the circulation to the lymphatics, where they can proliferate. These bacteria also produce potent exotoxins, which are lethal to humans. Full virulence usually requires the presence of a capsule and a three-component toxin: protective antigen, lethal factor, and edema factor. The toxins use protective antigen, a naturally produced non-enzymatic component, to deliver the two enzymatic components into the cells (Sellman, 2001, p. 695). Because protective antigen is essential for full virulence, the addition of mutant protective antigen to B. anthracis prevents the release of edema or lethal factors into the cells (Enserink, 2001, pp. 490-491). Of the two enzymatic components, edema factor (EF) is an adenylate cyclase that increases cyclic AMP, resulting in edema at the site of infection, while lethal factor (LF) is a protease that appears to alter the production of cytokines by macrophages and to induce macrophage lysis and the lethal effects of anthrax in animals (Erwin, 2001, p. 675).
Most anthrax bacteria inside the body are destroyed by anaerobic bacteria. The greater danger lies in the bodily fluids and blood that spill from the body into the soil, where the anthrax bacteria transform into their dormant, protective spore forms. Once the spores are formed, they are very difficult to eradicate and can survive for decades in a favorable environment (Enserink, 2001, pp. 490-491). In contrast, vegetative B. anthracis cells survive poorly outside an animal or human host and cannot form spores after they exhaust local nutrients.
To obtain a comprehensive understanding of the bacterium, present-day researchers have worked out its genomic sequence. Investigations revealed the sequences of the two virulence plasmids, pXO1 and pXO2 (Okinaka, 1999, p. 261), as well as the role these plasmids play in coding for the major virulence factors of B. anthracis (Mikesell, Ivins, & Ristroph, 1983, p. 371). Work on the anthrax genome itself is still underway, as microbiologists hope to uncover more information about the genetic mechanisms of the bacterium.
Affected Populations

Anthrax spreads easily through animal hides to nearby humans because of the zoonotic nature of the bacterium (Lord, 2001, p. 1). The hide of any animal infected with anthrax will likely contain endospores that can infect anyone who comes in contact with it. These hides can remain dangerous over long periods of time because of the resilience of the anthrax endospores. Both the vegetative forms and the endospores can infect large numbers of humans and animals.

The United States is very vulnerable to attacks that use anthrax spores, as was seen in the 2001 anthrax letter attacks, during which a mortality rate of 45% was observed. While this figure is significantly lower than the historical mortality rate of 89-96%, it is still extremely high and would be devastating if it held constant during a large-scale attack. The difference in mortality rates can be attributed to modern intensive care units and advancements in antibiotics; further progress in these fields would lower mortality rates even more, helping to ensure the survival of a large part of the population (Holty et al., 2006, p. 1).

Unvaccinated animals can also easily contract anthrax. A German plan during World War I involved sending spies to various countries to infect local animal populations and thereby disrupt the food supplies of opposing nations. Methods of infection ranged from direct injection into the bloodstream to feeding the animals a mixture of anthrax spores and sugar (Lord, 2001, p. 1). Animals are the largest population infected by anthrax, making such attacks quite effective.
Causative Agents
Awareness of anthrax outbreaks has been raised in numerous areas, including the Middle East, where inhabitants are especially susceptible to the disease because of frequent interaction with animals and high population density. Humans can contract anthrax from animals in various ways. Some routes are direct, such as contact with the excreta or saliva of deceased animals, while others are less obvious, such as bites from blood-sucking flies. In this way, underdeveloped countries serve as a perfect niche for B. anthracis. For many years, how the disease is contracted remained elusive; modern technologies, however, have enabled discoveries that give researchers an in-depth understanding of the routes by which the bacterium enters the human body (Porter, 1996, pp. 200-201).

Upon entering the body, B. anthracis endospores are taken up by macrophages but are not killed; they eventually proliferate and destroy the macrophages. B. anthracis has two exotoxins that serve as essential virulence factors. The two toxins share a third component, protective antigen, a cell-receptor-binding protein that attaches the toxins to target cells and permits their entry. Edema toxin initiates local edematous swelling and interferes with phagocytosis by macrophages. Lethal toxin, on the other hand, specifically targets and kills macrophages, disabling an essential defense of the host. The capsule of B. anthracis is composed not of polysaccharide but of amino acids, which do not stimulate a protective response by the host immune system. Therefore, once the anthrax bacteria enter the bloodstream, they proliferate without effective inhibition until their population reaches tens of millions per milliliter. Ultimately, the bacteria kill the host by precipitating a form of septic shock resulting from high levels of lethal toxin (Porter, 1996, pp. 200-201).

In humans, anthrax occurs in three forms: cutaneous, gastrointestinal, and inhalational. Cutaneous anthrax results from direct contact with endospores, which enter the host through lesions in the epidermal layer of the skin. At first, victims develop papules on the skin. These become vesicles, which rupture to form a depressed, ulcerated area covered by a black eschar. Generally, the pathogen in cutaneous anthrax does not enter the bloodstream directly; when it does, however, mortality can reach 20%.

Alternatively, gastrointestinal anthrax is transmitted to humans through ingestion of undercooked food that, as in cutaneous anthrax, contains anthrax endospores. Symptoms associated with gastrointestinal anthrax are primarily nausea, abdominal pain, bloody diarrhea, and ulcerative lesions in the gastrointestinal tract. In comparison to the two previous forms, inhalational (pulmonary) anthrax is perhaps the most lethal. In afflicted individuals, the endospores enter the body through the lungs by inhalation. Without immediate treatment, a patient who progresses to septic shock after two or three days of initial illness can die within 24 to 36 hours. In fact, the mortality rate among patients diagnosed with inhalational anthrax approaches one hundred percent (Porter, 1996, pp. 200-201).
Diagnosis

Today, many health institutions offer a variety of tests that correctly diagnose and differentiate cutaneous, gastrointestinal, and inhalational anthrax. Vesicular fluid is collected from patients and analyzed by Gram stain and culture for indications of cutaneous anthrax. Patients with inhalational anthrax typically show widened mediastinums on chest x-rays. Finally, gastrointestinal anthrax can be confirmed through biopsy, histopathology, culture, and rapid diagnostic tests (Inglesby, Henderson, & Bartlett, 1999, pp. 1735-1745).

Control and Prevention

During the 20th century, the concurrence of several important events spurred preventive measures against the bacterium. In the early 1900s, human inhalation anthrax occurred sporadically in the United States among textile and tanning workers, but the incidence of the illness later declined dramatically. In 1979, an outbreak of inhalation anthrax occurred in Sverdlovsk near a Soviet military microbiology facility; this epidemic remains the largest documented outbreak of human inhalation anthrax in history. Subsequently, in 2001, 22 cases of confirmed or suspected inhalation and cutaneous anthrax in the United States were attributed to the intentional release of the organism. These events not only attracted greater attention but urged immediate resolutions. Although Louis Pasteur's veterinary anthrax vaccine containing attenuated live organisms had been developed in the nineteenth century, the United States later produced a more effective vaccine: the first cell-free vaccine, derived from noncapsulated strains of B. anthracis. Doses were regularly allotted to workers handling potentially contaminated material in industrial settings. By 1998, the United States had ordered the vaccination of every member of the armed forces, including 1.4 million active-duty troops and 1 million reservists (Turnbull, 1991, pp. 533-539).

Treatment
Prompt administration of antibiotics after anthrax infection is essential. Treatment, however, usually takes up to 60 days because of the risk of delayed germination of spores (Turnbull, 1991, pp. 533-539). Although penicillin was long accepted as the primary antibiotic against B. anthracis, newer drugs, ciprofloxacin and doxycycline, have largely supplanted it. Unfortunately, no clinical studies of inhalational anthrax treatment in humans exist. Because many animals are the initial carriers of the bacteria, livestock vaccination is prevalent in endemic areas. Such vaccination stems from the experimental discoveries of Louis Pasteur, a French chemist with a particular interest in microorganisms. Initially, he explored and developed methods to eliminate microbes from milk through pasteurization. Then, on April 28, 1881, Pasteur injected 24 sheep with his new vaccine and administered a second dose three weeks later. He then exposed both these animals and an unvaccinated control group to virulent anthrax bacilli. Approximately two months after the first injection, all the vaccinated sheep had survived while the unvaccinated animals were dead. Pasteur's groundbreaking success galvanized scientists to develop more effective and widely applicable vaccines for anthrax (Turnbull, 1991, pp. 533-539).

Immunization

Several forms of anthrax immunization have been developed. While the vaccine designed for humans is not widely distributed to the general population, it has been distributed to army personnel in case a terrorist bioweapons attack is launched. Several variations of anthrax vaccines have been used throughout history for both humans and livestock.

One form of anthrax vaccine is a livestock vaccine using live B. anthracis. Its first iteration used attenuated strains of the bacterium; a newer revision uses an unencapsulated variant of B. anthracis. This vaccine is still the most widely used veterinary vaccine in the West. However, because it can cause the death of the injected animal, it is not considered viable for human use ("Use of anthrax vaccine," 2000, Vaccination section, para. 1).

A human anthrax vaccine known as Anthrax Vaccine Adsorbed (AVA) has also been developed. It uses neither live nor dead bacteria; instead, it is made from a filtrate of B. anthracis cultures. The vaccination schedule consists of doses at zero, two, and four weeks, followed by doses at six, twelve, and eighteen months; an annual booster shot is then administered to ensure continued immunity. AVA is the most effective anthrax vaccine and, in fact, the only one licensed in the United States for use in humans ("Use of anthrax vaccine," 2000, Vaccination section, para. 4).

Anthrax as a Biological Weapon

Deplorably, anthrax research diverged down another path as researchers became more attentive to the potential power this bacterium could wield. Unscrupulous political leaders can manipulate the bacterium as a form of biological weapon and direct its impact to their own advantage. Today, 17 nations are believed to have offensive biological weapons programs. In particular, between 1985 and 1991, Iraq continuously produced anthrax for this purpose. Such weapons have not proven fully effective, however; aerosols of anthrax bacteria and botulinum toxin dispersed in Tokyo failed to produce illness on eight occasions (Inglesby, Henderson, & Bartlett, 1999, pp. 1735-1745). While some biological weapons failed to fulfill their purpose, others ironically struck the producer nation itself. The 1979 incident in which an accidental aerosolized release of anthrax spores in the Soviet Union resulted in 79 cases of infection and 68 deaths (Inglesby, Henderson, & Bartlett, 1999, pp. 1735-1745) is the epitome of such a mishap. Of course, innovative technological advances and rapid developments soon led to more effective and lethal biological weapons.

In 1993, the US Congressional Office of Technology Assessment estimated that between 130,000 and 3 million deaths could result from the aerosolized release of 100 kg of anthrax spores upwind of the Washington, D.C. area. Anthrax research initially appeared to be a force binding nations together in the search for cures and prevention, yet at the same time such research ultimately supplied the means for crimes against humanity.

Anthrax is dangerous, resilient, and deadly. It is difficult to detect from its symptoms until the acute stage, at which point the patient will most likely die. While it is treatable and a vaccine has been developed, it cannot be effectively controlled in a widespread outbreak. As the historical record shows, anthrax can quickly and efficiently be used by hostile countries or terrorist groups as part of a biological weapon, with devastating effect. Anthrax is a threat to all humans and animals; it is therefore important to understand both the disease and its history.

Literature Cited

Cristy, G.A., & Chester, C.V. (1981). Emergency protection against aerosols. Oak Ridge, TN: Oak Ridge National Laboratory.

Enserink, M. (2001). This time it was real: Knowledge of anthrax put to the test. Science, 294, 490-491.

Erwin, J.L. (2001). Infection Immunology, 292, 675.

Green, B.D., Battisti, C., Koehler, T.M., et al. (1985). Infection and Immunity, 49, 291.

Hana, P.C., Acosta, D., & Collier, R.J. (1993). Proceedings of the National Academy of Sciences, 90, 101-198.

Holty, J.C., et al. (2006). Systematic review: A century of inhalational anthrax cases from 1900 to 2005. American College of Physicians.

Inglesby, T.V., Henderson, D.A., Bartlett, J.G., et al. (1999). Anthrax as a biological weapon: Medical and public health management. Journal of the American Medical Association, 281, 1735-1745.

Kobuch, W.E., Davis, J., Fleischer, K., et al. (1990). A clinical and epidemiological study of 621 patients with anthrax in western Zimbabwe. Proceedings of the International Workshop on Anthrax, 68, 34-38.

Koch, R. (1961). Milestones in Microbiology. Washington, D.C.: American Society for Microbiology, 89-90.

Lord, A.M. (2001). A brief history of anthrax. Retrieved March 6, 2008, from http://lhncbc.nlm.nih.gov/apdb/phsHistory/resources/anthrax/anthrax.html

Mikesell, P., Ivins, B.E., & Ristroph, J.D. (1983). Infection and Immunity, 39, 371.

Okinaka, R.T. (1999). Journal of Applied Microbiology, 87, 261.

Porter, R. (1996). Cambridge Illustrated History of Medicine. New York: Cambridge University Press.

Sellman, et al. (2001). Science, 292, 695.

Turnbull, P.C. (1991). Anthrax vaccines: Past, present and future. Vaccine, 9, 533-539.

Use of anthrax vaccine in the United States. (2000). Retrieved April 3, 2008, from http://www.cdc.gov/MMWR/preview/mmwrhtml/rr4915a1.htm

Chapter 2
Bubonic Plague

Jeffrey Elloian, Erin O’Halloran, Jithu Maliakal

Introduction
Ring around a rosy,
A pocket full of posies,
Ashes, ashes
We all fall down

This nursery rhyme chronicled the black death at a time when it was thought that scented flowers and herbs would purify the air of its bad humors. "Ring around a rosy" describes the pink circle that preceded the black spots characteristic of plague, while "Ashes, ashes" refers to the cremation of those who died of the plague. "We all fall down" relates to what most folk experienced if they suffered from the disease: an untimely death (Biddiss & Cartwright, 1972, p. 45, para. 1).
Diseases have plagued humanity throughout its existence, often changing the course of history. Of all the diseases mankind has encountered, the most infamous is often considered to be the black death. This disease is thought to have been caused by a bacterium called Yersinia pestis, and it could bring an extremely painful death over the course of a few days (David, 2000, The Truth section, para. 1). Although some things about the disease remain uncertain, the impact it had on Europe and the world was immense. In a time when little was known about anatomy and disease, the victims and everyone around them panicked. Social order in Western civilization nearly collapsed, and the disease altered the course of historic events. Although the black death devastated the population of Europe, it also changed the course of history to create the world we know today.

The Black Death

The black death is a general term for the plague that spread throughout Europe primarily from 540 to 1666 AD and, for several centuries afterward, in fewer and more sporadic epidemics. Several major epidemics occurred between 540 and 1666, with rare and isolated cases afterward. Among the more famous examples are the Plague of Justinian, the Black Death of 1346 to 1361, and the Great Plague of London of 1665 to 1666, immediately prior to the Great Fire (Cartwright & Biddiss, 1972, p. 31, para. 2). Little is known about what exactly occurred, but it is believed that the plague was brought to Europe by a ship returning to Sicily from Caffa, a Genoese trading post that traded with the East; most historians believe the bacteria originated in the East. Upon the ship's arrival, most of the crew were dead or dying of the disease. When the people of Sicily realized that such ships were the cause of the epidemic, many port cities denied them access, forcing the ships to make landfall on the European mainland and thus spreading the disease (David, 2000, The Truth section, para. 1).

Although capable of spreading quickly, the disease itself is relatively simple. Because no surviving records from the Middle Ages include observations by anyone who could have diagnosed the disease with modern techniques (the germ theory would not be hypothesized for centuries), there is no way of knowing the exact disease. Scientists widely accept the conjecture that the black death was caused by the bacterium Yersinia pestis because it shares characteristics and symptoms with those shown in records of the era. The bacterium was named after Alexandre Yersin, who discovered it in China. As described by most Europeans, this species of bacteria matches the common description of a disease coming from the East, and it is carried by rats, another frequent observation of the period. Although it was a simple disease, it had an enormous impact on history and on people's lives (Cohn & Weaver, 2006, Historical Evidence section, para. 1).

Other Possible Diseases

Whereas some scientists confidently assert that Yersinia pestis caused the black death, others remain skeptical. Few of these skeptics have offered alternative theories, but several doubt that the black death was caused by Yersinia pestis. For instance, Yersinia pestis is better adapted for survival in tropical and sub-tropical environments and would have difficulty in the more temperate climates found in most of Europe. The bacterium also has little in common with descriptions of the disease that became a threat in other parts of the world during the 1800s (Cohn & Weaver, 2006, Historical Evidence section, para. 1). Another reason Yersinia pestis may not be the cause of the black death is that the bacteria are extremely inefficient in their mode of transfer: if records of the disease are accurate, only 20% of fleas infected with the disease pass the bacteria on to a new host. In this form, the disease is not nearly as contagious as most records of the age indicate. More importantly, the disease could not have spread as rapidly as it did, because this bacterium would only be able to travel approximately 12 miles per year (Cohn & Weaver, 2006, Historical Evidence section, para. 2). This leads one to believe that this bacterium may not have been nearly as powerful as the true causative agent of the black death.

On the other hand, records of the black death describe a disease more deadly than Yersinia pestis. It is possible that fear and panic led people to believe the plague was worse than it really was and to make exaggerated accounts of it. There is, however, concrete evidence of its severity. The black death reportedly spread extremely rapidly and could eradicate entire populations, often killing up to 78% of the people living in a single settlement. Furthermore, the disease raced across Europe at a rate of nearly 12 miles per day rather than 12 miles per year (Cohn & Weaver, 2006, Historical Evidence section, para. 3). Without a chance to obtain living samples from the time period, modern scientists may never truly claim to know what the black death really was.
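To make the skeptics' argument concrete (an illustrative calculation, not taken from the cited sources): a spread rate of 12 miles per day corresponds to

\[
12 \times 365 \approx 4{,}380 \text{ miles per year},
\]

roughly 365 times the 12 miles per year expected from flea-borne transmission alone. A discrepancy of this magnitude is the heart of the case against Yersinia pestis as the sole agent.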

Biological Description, Morphology, and Symptoms

At the time of the black death, little was known about biology. In fact, the concept of contagion originated as people saw the disease spreading to anyone near those infected (Cohn & Weaver, 2006, Historical Evidence section, para. 3). People panicked and soon became paranoid of anything relating to the disease. Although there were medical practitioners at the time, they could do nothing to cure the victims or alleviate their suffering. In fact, several uneducated people began to call themselves doctors to fill the overwhelming need, but they too could do little to help those infected with the plague (The Black Death, 1348, 2001, The Signs of Impending Death section, para. 2). If doctors had possessed some basic knowledge of germ theory or biology, there is a high probability that more lives would have been saved or that the disease would at least have been contained.

There were three basic types of the disease: bubonic plague, pneumonic plague, and septicemic plague, which were believed to evolve in that order; as the plague evolved, the bacteria killed their hosts faster. All three versions of the plague existed simultaneously in the environment. The most famous was the bubonic plague, named for the egg-shaped swollen glands it produced. At the time, many referred to these growths as tumors (The Black Death, 1348, 2001, The Signs of Impending Death section, para. 2). Although the growths began as red swellings, they often turned into dark purple or black lumps of flesh, from which the black death received its name. They typically appeared at the site of the flea bite, which could be around the neck, armpit, or groin, and they were reportedly excruciatingly painful (David, 2000, The Symptoms section, para. 1). Although the buboes were the most noticeable trait of the disease, most victims also experienced other symptoms, including substantial blood loss through the nose and the appearance of purple or black spots across the body, primarily concentrated on the outer extremities (The Black Death, 1348, 2001, The Signs of Impending Death section, para. 2). A person with the plague would often die within two to six days, depending on the severity and on which form of the disease he or she contracted.

The other two forms of the plague are less famous than the first but even more deadly. Pneumonic plague resulted when the bacteria entered and infected a victim's lungs. It was associated with a chronic cough and often led the victim to vomit large quantities of blood. This is by far the most contagious of the three types, because one may contract it simply by breathing the same air as an infected person, and it could kill victims even faster than bubonic plague. Septicemic plague, on the other hand, is a disease of the blood and was the most deadly of the three main forms, often killing an infected person in less than a single day (David, 2000, The Symptoms section, para. 1). Septicemic plague may still cause buboes to form, but this is difficult to confirm because the victim usually dies from blood poisoning before the glands have a chance to swell to a notable degree (Cartwright & Biddiss, 1972, pp. 30-31, para. 1). All of these forms were usually fatal given the lack of medical technology of the time period, yet the bubonic plague remains the most famous.

The most common mode of transmission of the bubonic plague was the flea bite. Of all the means by which infected fleas could infiltrate human society, none is more famous or better associated with the plague than the common English black rat, Rattus rattus. This breed of rat commonly lived in urban environments and around humans. Another possible culprit was the brown rat, but this breed tended to avoid humans: because of the natural camouflage the brown rat acquired through natural selection, it stayed near rural farmland and sewers, significantly reducing the number of humans with whom it came into contact (Cartwright & Biddiss, 1972, p. 30, para. 2). The various fleas carried by these rats would not normally bite humans, preferring rats, but they could effectively choose any other warm-blooded mammal as a victim. They only began to attack humans when the plague had killed most of their other hosts, making the disease zoonotic (Cohn & Weaver, 2006, Historical Evidence section, para. 2). The only other method of transfer was breathing the same air as a victim of the pneumonic plague, as mentioned previously.

Geography and How Plague Entered Europe

The black death was truly one of the first worldwide epidemics. Nearly all records indicate that the plague originated somewhere in the East, but scientists and historians are still unsure exactly where it began. Many believe the disease started spreading somewhere on the plains of Russia, India, or China (Cohn & Weaver, 2006, Yersinia pestis in Europe section, para. 2). From there it slowly traveled west to Europe, reaching the coast of Italy by the spring of 1348 (The Black Death, 1348, 2001, para. 1). A ship from Genoa, a port city in northwestern Italy near southeastern France, left the trading city of Caffa, a Genoese trading post with the Eastern world, and arrived in the Sicilian port of Messina. Upon reaching shore, most of the ship's crew were either dead or dying of the plague. After realizing that these ships were carrying the plague, many cities, including the port where the first ship landed, closed their ports in an attempt to save themselves from the disease. Ships coming from the East still needed a place to make landfall, however, and were forced to land in other areas of Italy, thus spreading the plague throughout the continent (David, 2000, para. 1).

The black death quickly affected the entire European continent, but other places were equally affected. The disease spread to cover vast areas including Russia, Mongolia, Scandinavia, and China over the course of its existence (Cartwright & Biddiss, 1972, p. 32, para. 1). According to most historical texts, a disease that may have been the bubonic plague spread through the Middle East and North Africa in the 1300s and 1400s AD, approximately the same era in which it traveled through Europe (Cohn & Weaver, 2006, Yersinia pestis in Europe section, para. 2). This range alone is far greater than the distance most other diseases have traveled. Very rare cases have occurred across the world since, but most were isolated and did not become major epidemics; for example, individual cases appeared in the Southwestern United States in the early 1950s (Bubonic Plague in the U.S., 1956, para. 10). This disease has affected people across the globe.

Figure 1. World distribution of plague in 1998. This map displays the regions of the world most affected by Y. pestis from 1970 to 1998 (Centers for Disease Control and Prevention, 2007).

How the Black Death Changed History

It is unmistakable that history would be very different if the black death had never existed. For example, after Erik the Red began his explorations in the tenth century AD, it is believed that a very small percentage of the Viking settlers carried the bubonic plague, spreading it across entire villages as soon as they settled. The disease killed enough of the Vikings that their forces were weakened, allowing native Eskimo forces to destroy the remainder of the settlement. Had this not occurred, the Vikings might have settled America long before any other nation (Cartwright & Biddiss, 1972, p. 32, para. 1). Some of the most noticeable effects of the plague took place in England, where thousands died. When the Scottish saw that the English forces were weakened by the plague, they viewed it as God's revenge against the English and attempted to attack them in this weakened state. The course of history was completely changed when the Scottish army began to catch the plague and was forced to return to Scotland, inadvertently spreading the plague throughout their homeland (David, 2000, The Black Death Reaches England section, para. 8). If either of these events had not occurred, the world would be completely different from how it is today.

One of the greatest effects of the black death on Europe was the massive change in population: nearly half the population of Europe died (The Black Death, 1348, 2001, para. 2). The most notable difference after the plague was the immense decrease in available labor, which allowed the lower classes to move into a better bargaining position with the nobles. With demand for labor high and supply reduced, wages rose substantially, and nobles were required to give peasants added benefits such as food and clothing to convince them to work. In a desperate attempt to suppress the people of England, the government passed an act called the Ordinance of Laborers to hold wages down, but it failed completely because of the magnitude of the population decrease (David, 2000, Consequences section, para. 1). In addition, the previously all-powerful Church suffered greatly after losing nearly two-fifths of the clergy of England. The fallen priests were replaced with new ones who were not well trained in theology. This, combined with the Church's helplessness to save people from the black death, caused the people of the period to lose faith in the Church they had once believed in, which may have contributed to the English Reformation (David, 2000, Consequences section, para. 2). Beyond simply reducing the population of the time, the black death clearly reshaped society as well.

Effects on Society

The black death greatly affected society and attitudes. The overall reaction to the plague was panic and paranoia, as everyone avoided anyone with the disease (The Black Death, 1348, 2001, Varying Reactions to the Disaster section, para. 1). Not long after the plague began, law and order completely collapsed because no one was left alive or able to carry out civic duties (The Black Death, 1348, 2001, Varying Reactions to the Disaster section, para. 3). People looked for any possible scapegoat or explanation for the plague, turning to both religion and superstition. Some believed the whole event was caused by the alignment of stars and planets, whereas others believed it was the wrath of God upon sinners. As has happened throughout history, minorities were often blamed for such calamities: hundreds of Jews were killed by angry mobs who believed the Jews sought to annihilate Christians with the plague (David, 2000, para. 2). Death became a daily part of life; corpses were cleared from the streets in a public effort driven not by respect for the dead, but by a vain attempt at self-preservation and a desire to stop the rancid smell of decaying bodies (The Black Death, 1348, 2001, Mass Burials section, para. 1). The black death made disease and death a common, daily experience within society.

Black Death Today

The black death never truly disappeared, but it became far less frequent after the 1600s (David, 2000, The Black Death Reaches England section, para. 10). Afterward there were only occasional, isolated cases, and there have been no reports of major epidemics in the developed world. For example, a few isolated cases of the bubonic plague occurred in the Southwestern United States, including several individual cases in New Mexico in 1951 (Bubonic Plague in the U.S., 1956, para. 10). The year before, there was another case of the plague in California, where a man died even after hospital staff administered antibiotics (Bubonic Plague in the U.S., 1956, para. 7). Some experts have suggested that the CCR5-Δ32 allele, which confers resistance to HIV-1 infection, may be a mutation selected for by the black death, explaining why Europeans are far less susceptible to AIDS. Their reasoning is that the allele seems to have appeared very rapidly, at the exact times and places where the black death was most common (Cohn & Weaver, 2006, Introduction section, para. 2). The plague remains a part of modern events, as it has been in the past.

Conclusion

The black death has affected our lives and history ever since it evolved. Such a simple bacterial species nearly destroyed the human population of Europe and other continents. It is evidence of how a disease can evolve to better infect humans and of why people must remain wary even today. The bubonic plague completely changed the course of history and left a permanent scar on a terrified society, and even today researchers are finding ways in which it relates to other diseases such as HIV-1. Overall, this disease has both devastated and changed the world.

Literature Cited

Bubonic plague in U.S. (1956). The Science News-Letter, 70(1), 5. Retrieved March 3, 2008, from the JSTOR database at http://links.jstor.org/sici?sici=0096-4018%2819560707%2970%3A1%3C5%3ABPIUS%E2.0.CO%3B2-Z

Cartwright, F., & Biddiss, M. (1972). The Black Death. In Disease and History (pp. 29-32). New York: Dorset Press.

Cohn, S., & Weaver, L. (2006). The black death and AIDS: CCR5-Δ32 in genetics and history. QJM, 99(1), 497-502. Retrieved March 3, 2008, from doi:10.1093/qjmed/hzl076

David. (2000). The Black Death in England 1348-1350. Retrieved March 4, 2008, from http://www.britainexpress.com/History/medieval/black-death.htm

The Black Death, 1348. (2001). Retrieved March 3, 2008, from http://www.eyewitnesstohistory.com/plague.htm

Chapter 3
Cholera

Kelley Sielis, Anna Salate, Olivia Paquette

Introduction

Throughout history, cholera has caused major epidemics that have taken a devastating toll on human populations. It is most commonly associated with areas of poor sanitation and dense population. The disease is a swift killer if not treated properly, and its symptoms can appear quickly and cause major harm to the body in a short period of time. The main symptom of cholera is massive fluid loss from diarrhea, which can lead to many complications, including death. The fluid loss causes severe dehydration, which can then lead to other long-term problems. Hypoperfusion, the slowing of blood flow, is caused by dehydration and can cause permanent cell dysfunction or death. Tachycardia, a dangerously rapid heartbeat, is another side effect and can lead to death. An increase in body acid (metabolic acidosis), potassium depletion, and vascular collapse are all possible consequences of cholera. During the process of discharging fluids, a host can lose up to 10% of his or her body weight (Zubay, 2005, p. 252). Fortunately, cholera has been nearly eliminated from most parts of the world and now exists mainly in developing countries. Despite this, there is still an ongoing pandemic of cholera in the world today.

Causative Agents

Cholera is most commonly spread through the consumption of contaminated food or water. Because outbreaks occur in areas with poor sanitation, cholera can spread very quickly through water that has not been properly treated. The few cases of cholera in developed countries are usually caused by travelers importing food from abroad. Most cases in the United States result from people consuming raw or undercooked food from foreign countries or shellfish from infected water ("Cholera," 2008, para. 2).

Cholera bacteria can also be found in shallow wells, rivers, and streams, which means they spread easily (How Cholera, 1996, p. 1). Vibrio cholerae is very dangerous because it can survive in both clean and brackish water, making it possible even for people with filtered water to contract the disease. The place where V. cholerae truly thrives, however, is the intestine of a human host; the bacterium becomes less aggressive if it does not find a new host within 18 hours of exiting the body (Betrayed, 2002, p. 1). Nevertheless, V. cholerae spreads quickly and without warning.

It is common for an outbreak of cholera to appear near a river or a body of salt water. Shellfish found in salty rivers or along the coast can carry cholera and can be infectious if consumed raw or undercooked. Although cholera is not passed directly from human to human, it most commonly spreads quickly through dense populations, where the lack of cleanliness makes it easy for people to come in contact with the bacteria.

Populations Affected

Cholera has been responsible for more than a million deaths across the globe. It is prevalent in densely populated areas with poor water purification systems and unsanitary food storage and preparation. Poor hygiene is key to V. cholerae infecting a host. A woman alive during an outbreak in the 1830s described the infected: "The disease is now, more than before rioting in the haunts of infamy and pollution. A prostitute at 62 Mott Street, who was decking herself before the glass at 1 o'clock yesterday, was carried away in a hearse at half past three o'clock. The broken down constitutions of these miserable creatures, perish almost instantly on the attack… But the business part of our population, in general, appear to be in perfect health and security" (Rosenberg, 1832, p. 3). She was explaining how the wealthy, who lived in much cleaner conditions, were not becoming infected. The disease is nonetheless capable of infecting people of all ages, although children and the elderly seem more susceptible to contracting it in endemic areas.

Children lack the pre-existing immunity that adults gain from living in an endemic area for many years without contracting the disease, or from having survived it. The elderly are more vulnerable because their immune systems weaken over time and their gastric acid production slows: the immune system cannot fight off the bacteria, and the stomach acid is not as strong as a younger adult's, making it easy for the cells to pass through the gastric mucous barrier and into the intestine. Epidemics have struck many countries on all continents, but the disease has been most common in Asia, where four of the seven most populous countries are located (Zubay, 2005, p. 265).

Geographic Distribution

Throughout history, outbreaks of cholera have occurred on almost every continent. The pandemics of the 1800s occurred in Europe and later spread to the Americas. Most cases of cholera today originate in undeveloped parts of South America and Africa; the latest pandemic reached Africa in 1971 and South America in 1991. Most cases today do not end in death, though, because of better treatment and understanding of the disease. Of the 45,159 cases reported in Africa, 3,488 were fatal, and only 2,618 of the 251,533 cases reported in South America were deadly (Colwell, 1996, para. 7). The environment plays a major role in cholera outbreaks: of past outbreaks, 25 percent were caused by very wet weather and 13 percent by population displacement due to natural disasters (Griffith, Kelly-Hope, & Miller, 2006, para. 5). A rapid influx of population can cause an outbreak because sanitary conditions are poor and people are in much closer proximity to one another. Reported cases of cholera are lower in developed countries because of better sewage treatment and sanitation.
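Expressed as case-fatality rates (an illustrative calculation, not from the original source), these figures show how sharply outcomes differed between the two continents:

\[
\frac{3{,}488}{45{,}159} \approx 7.7\% \text{ (Africa)}, \qquad
\frac{2{,}618}{251{,}533} \approx 1.0\% \text{ (South America)}.
\]

In other words, a reported case in Africa was roughly seven to eight times as likely to be fatal, a gap consistent with the differences in treatment and sanitation described above.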

Etiology

The disease cholera is caused by a bacterium, Vibrio cholerae. Because the most common source of cholera is contaminated drinking water, it is easy for the bacterium to enter the body, where it becomes an infection of the small intestine. The bacterium is known to live and multiply in close proximity to algae and marine invertebrates; it can be found on zooplankton, copepods, and even fish, making the disease a zoonosis. Vibrio cholerae releases a toxin that increases the chloride ions and water exuded by the intestine (Lentnek, 2008, para. 3), leading to severe diarrhea. Any bodily fluid from the victim can become a breeding ground for the bacteria as well (Glausiusz, 1996, para. 4).

The strain that caused the most recent pandemic is O1, which produces a toxin that latches onto cells in the small intestine, causing them to pump out a surplus of chloride ions and water that can add up to about five gallons a day. Most other strains of V. cholerae are harmless to humans and live in water; O1 is the only strain known to develop into a deadly disease. According to Matthew Waldor of the New England Medical Center in Boston, it is possible that a virus caused the strain to evolve in this way (Glausiusz, 1996, para. 3). He made the discovery while examining the DNA of the bacterium and the CTX gene that contains the toxin. The passing of genetic material from a virus to the infected bacteria could have produced the difference that made the strain lethal. The evolution of a dangerous disease from the infiltration of a virus is not uncommon; diphtheria and botulism are likewise caused by otherwise harmless bacteria transformed by viral genes.

Treatment

When treated at the first signs, cholera is unlikely to be fatal. Symptoms include watery diarrhea, vomiting, rapid dehydration, tiredness, abdominal cramps, and nausea (Lentnek, 2008, para. 1). Other possible indicators to watch for are sunken eyes, dry mouth, and lethargy. At the first occurrence of these symptoms, the patient should be given a mixture of sugar and salts in water to replace the fluids and electrolytes that were lost. Antibiotics can also be used to decrease the duration of the disease. It is imperative that the patient be checked often for dehydration, because death can occur within hours if the patient is not hydrated adequately. If properly treated, a full recovery from the disease is probable.

20
Immunization and Prevention

Several vaccines that prevent the contraction of cholera are available, and new ones are being researched, although only one is offered in the United States. This vaccine, CVD 103-HgR, is produced from a genetically modified strain of V. cholerae and is effective within the first three months of being administered. The vaccines are not recommended for everyone because they have drawbacks: recipients are immune for only a short period of time, and they remain susceptible to any infection with atypical symptoms. The vaccine is primarily a preventative measure for travelers to disease-stricken countries.

Although certain vaccines are available, the most effective way to prevent the spread or contraction of cholera is proper sanitation. In a pandemic region, precautions should be taken to avoid contracting cholera. Large public areas or meeting places should be avoided, and only water that is known to be sanitary or has been treated should be consumed. If there is any doubt as to the condition of the water, it should be boiled to kill any bacteria. All bedding and cloth that has come in contact with someone who is infected should be handled with great caution. The most effective way to stay healthy in a region experiencing an outbreak of cholera is to take extra measures to keep living conditions clean.

Historical Impact

Cholera has had a great impact on populations and civilizations throughout history. The earliest records of a cholera-like disease date back to 500-400 B.C. in India and Greece. It was also a major concern during the war between Persia and Turkey and during World War I. The first recorded pandemic occurred in 1817 and lasted until 1823. The second pandemic lasted from 1829 to 1851 and most likely started in Russia. That outbreak spread to North America along the St. Lawrence River and reached New York City, then Philadelphia and New Orleans. Cholera thrived because of the poor sanitation and sewage systems in the cities.

Cholera also ravaged London, but the outbreaks there led to new insight into the spread of the disease. A physician named John Snow realized that the outbreak affecting the areas around Broad Street and Golden Square was caused by the communal water pump there: the drinking water had been mixed with sewage and was contributing to the spread of the infection. The handle of the pump was removed to prevent access to the water, and the epidemic began to subside in the area. Snow was the first to realize that cholera was more likely spread by water consumption than by contact with the infected (Colwell, 1996, para. 8).

Britain was greatly affected by cholera and other diseases during the 1800s because people were moving from rural areas into congested cities. It was the time of the industrial revolution, and cities in Europe were growing rapidly: from 1801 to 1851, the percentage of people living in towns of over 5,000 grew from 20 percent to 54 percent. Because the population was so dense, disease spread easily. If one person in a household were infected, cholera could spread to other members of the house through unwashed clothes or dirty hands. Although cholera could not be passed directly from human to human, it was contagious nonetheless through unhygienic conditions.

Although scientists at that time had not yet grasped the concept of bacteria, they were able to determine possible causes of bacterial diseases and means of preventing them. A two-year study was performed by Robert Spitta in the 1860s, and his findings were published in a pamphlet titled Brief Remarks on Cholera. In his study, Spitta (1866) advised that food be carefully prepared and linens changed often (p. 9); water was to be boiled, and houses were to be disinfected with chloride or lime. It was not until 1884, however, that Robert Koch discovered the cholera bacillus (McLean, 2005, p. 10). Researchers were still not sure at that point whether the bacteria could be passed from human to human.

Four more pandemics occurred on different continents until 1926, when it was thought that cholera would no longer be a threat because of waste treatment and cleaner drinking supplies. Another outbreak began in 1961, though, and still persists today in parts of South America and Africa. A new variant of Vibrio cholerae, the El Tor biotype, causes these newer cases. The variant tends to be less harmful than the classic type, although it can be very serious if not treated.

The bacterial disease cholera has thus been the cause of many deaths throughout history. The disease is caused by a particular strain of V. cholerae, a bacterium that is otherwise harmless. There have been seven documented pandemics throughout the world since the early 1800s. Thanks to improved sanitation in living conditions, cholera has become almost nonexistent in most countries.

Cholera has not been overlooked as a weapon of war. Its first known use was in World War I, when the Germans deployed it against Italy. In the 1930s, Japan used cholera as an ingredient in bombs dropped on China, mixing it with various other biological pathogens for greater effect. It could still potentially be used in biological warfare because of its detrimental effects, but V. cholerae has not recently been a major topic of concern (Zubay, 2005, p. 256).

One very surprising cholera outbreak did not take place on land. A flight from London to New Zealand and Australia experienced its own outbreak when contaminated food was served. About 374 passengers and 19 crewmembers were exposed to the contaminated food; only 40 cases were diagnosed in Australia and 3 in New Zealand, although one person traveling to New Zealand died (Zubay, 2005, p. 255). The outbreak was shocking, but it demonstrated the power of cholera and its ability to appear unexpectedly.

Literature Cited

Betrayed by the belly. (2002). Retrieved March 19, 2008, from Discover magazine: http://discovermagazine.com/2002/sep/breakbelly/article_print

Cholera. (2008). Retrieved February 27, 2008, from http://www.cdc.gov/ncidod/dbmd/diseaseinfo/cholera_g.htm

Colwell, R.R. (1996, December 4). Global climate and infectious disease: The cholera paradigm. Science, 274(5295).

Glausiusz, J. (1996, October 1). How cholera became a killer. Discover magazine.

Griffith, Kelly-Hope, & Miller. (2006). Review of reported cholera outbreaks worldwide, 1995-2005. Bethesda, MD: Fogarty International Center, National Institutes of Health.

How cholera became a killer. (1996). Retrieved March 19, 2008, from Discover magazine: http://discovermagazine.com/1996/pct/howcholerabecame900/article_print

Lentnek, A. (2008). Cholera health article. Retrieved March 3, 2008, from http://www.healthline.com/adamcontent/cholera

McLean, D. (2005). Public Health and Politics in the Age of Reform: Cholera, the State and the Royal Navy in Victorian Britain. London: I.B. Tauris and Company, Limited.

Rosenberg, C.E. (1832). The cholera years: The United States in 1832, 1849, and 1866. New-York Mercury. Retrieved April 13, 2008, from http://caho-test.cc.columbia.edu/dbq/11003.html

Spitta, R. (1866). Brief Remarks on Cholera. London: J. Churchill and Sons.

Zubay, G. (2005). Agents of Bioterrorism. New York: Columbia University Press.

Chapter 4
Ergotism

Erica Brosovsky, Dan Levy, Jake Wassenar

Morphology and Etiology

Ergotism is a disease or extended affliction acquired by the ingestion of the fungus
ergot (Claviceps purpurea). Typically, ergot can be found living on cereals such as wheat
(Triticum aestivum) and rye (Secale cereale). It has curved sclerotia that grow in place of
the grains of the host plant. Ergot usually thrives in moist, warm areas, but it grows very
unpredictably: it is known to appear and then vanish for years at a time, and it grows in
fluctuating quantities. In most cases, ergotism is acquired by the consumption of
contaminated cereals such as rye bread. Individual susceptibility to ergotism varies
somewhat, but the disease is known to affect pregnant women and children most easily.

Ergot is exceptionally dangerous: not only does it take approximately two
years for stored sclerotia to lose half of their potency, but its effects are also cumulative over time. There are two
main categories of ergotism. With gangrenous ergotism, a dry gangrene forms on the
extremities of the body, and eventually, the gangrenous flesh decays away or falls off.
Convulsive ergotism manifests itself in a more diverse manner. Common symptoms of
convulsive ergotism are unpleasant crawling sensations in the skin, tingling of the fingers,
vertigo, tinnitus aurium (ringing in the ears), intense headaches, interrupted sensory
perception, hallucinations, and painful muscle spasms that result in convulsions, vomiting,
and diarrhea. Alkaloids within the ergot sclerotia cause involuntary muscles to contract
and behave erratically. Additionally, ergot causes many psychological problems such as
mania, depression, psychosis, and delirium (Caporael, 1976, para. 26-27).

Both humans and plant-eating animals can suffer from ergotism. Many animals that
graze and eat wild plants are at risk of ergot poisoning. Symbiotic fungi can be found
living with many different types of plants, particularly in the stalk and leaves
of the plant or around its roots. One such grass, the tall fescue Festuca
arundinacea, harbors an endophytic fungus known as Sphacelia typhina. This
fungus produces many of the same toxins found in ergot and can lead to forms of
ergotism. Usually, cattle and other animals tend to avoid these fungus-infected plants and
grasses; if this fungus is consumed, however, animals suffer many negative symptoms.
Cattle are known to show signs of poisoning by behaving sluggishly, producing excessive
saliva, producing less milk, having a longer gestation period, and developing fevers and
gangrene (Ergotism, n.d., para. 5).

Geographic Distribution and Effects

Ergot tends to grow mostly on Secale cereale and Triticum aestivum. It is not,
however, limited to these species of plants; ergot has also been found on other members
of the Triticum genus and barley (Hordeum vulgare), as well as other types of grains.
Ergot thrives in almost any warm, moist environment where it has a host plant
to live on, so it is fairly widespread; the greatest concentration of ergotism
outbreaks, however, occurred in the Middle Ages. Ergotism spread throughout medieval Europe in many
individual epidemics. Throughout the continent, there were approximately 50 epidemics
of gangrenous and convulsive ergotism that took the lives of tens of thousands of people.

This is the time period when this burning gangrene merited its many common names
such as the so-called plague of fire (France), St. Anthony’s fire, and St. Vitus’s dance
(Ergotism, n.d., para. 4). St. Vitus’s dance earned its name from the great outbreaks of
psychotic and hysterical dancing due to the psychological effects of convulsive ergotism.
It was typical for individual communities to have a common source for grains and cereals,
so when a single crop became infected, many people would consume the infected grains
and suffer from ergotism simultaneously. Allegedly, there were outbreaks of convulsive
ergotism so great that crowds of people would lose most self-control and dance without
restraint in the streets (Cartwright & Biddiss, 1972, pp. 197-214).

The frequency of epidemics of ergotism in Europe brought about much interest in the
fungus ergot. The first scientific records of ergot were made by Denis Dodart less
than two decades before the Salem witch trials (1692), so knowledge of ergot at that time was
extremely limited. Dodart made the association between the ergotization of Secale
cereale and bread poisonings, and he shared his discoveries with the French Royal
Academie des Sciences in 1676.

The first reference to ergot in the United States was in a letter from Dr. John Stearns
to a medical colleague. This letter stated that ergot sclerotia were an effective means of
easing the pain of childbirth. Stearns’s extensive research on ergot brought about a
general misconception that he had actually been the first to discover ergot. In the early
years of its discovery and medical utilization, ergot was not known to be a potentially
dangerous parasitic fungus; it was merely thought to be kernels of grain that had been
overexposed to the sun (Caporael, 1976, para. 23).

The populations most affected by ergotism were those that were poor and those that
were in rural areas. Typically, a combination of both occurred. In the countryside,
farmers grew their own grain; due to the climate, it was usually rye which was often
infected with ergot. The farmers did not know the ergot to be dangerous, so they simply
ground it up along with the regular rye grain. Ergotism was not as prevalent among the
affluent because they dined on wheat instead of rye, but even so, the disease was very
prominent. Ergotism baffled doctors and scientists for centuries because the disease could
strike one family and not its neighbors, or even one member of a family and not the
rest of the family.

Treatment and Prevention

There is no specific cure for ergotism, but there are treatments for those
suffering from it. In the case of gangrenous ergotism, anticoagulants and vasodilators are
used to keep blood flow at a maximum. In cases of intense pain, nerve blockades are
carried out so as to reduce the agonizing burning. With convulsive ergotism, sedation is
often required for hallucinations and other psychiatric issues.

To prevent ergot-infected Secale cereale from being inadvertently consumed, there is
a method of separating the ergot sclerotia from the healthy grains of Secale cereale
by flotation. The
ergot-infected grains have less mass, so they float to the surface of the fluid. After the
flotation process, they can be skimmed away and destroyed. Additionally, fungicides can
be effective against ergot growth (Ergotism, n.d., “Treatment” and “Prevention”
sections).

Historical Impact

Ergotism has had a very large impact on the history of mankind. Prior to our current
knowledge of ergot, it was truly a silent killer, sweeping through entire populations and
creating countless physiological problems. Aside from the large-scale epidemics of the
Middle Ages, there have been theories pertaining to ergotism in more recent history.

One of the most famous ergotism-related time periods in history was that of the
Salem witchcraft trials of 1692. There have been many theories pertaining to these trials,
but none can fully dispel the mystery surrounding these occurrences. Essentially, the physical
symptoms experienced by the accusers were unknown in nature at the time and were
attributed to the wrong causes. Modern science suggests that these events were triggered by then-
inexplicable physiological conditions. Based on what judicial recordings have been
obtained, it is likely that these witchcraft-related disturbances were caused by cases of
convulsive ergotism, a direct effect of the consumption of ergot-infected grains
(Caporael, 1976, para. 1).

Due to the inexplicable afflictions of the so-called witches, no research was
conducted to further explore possible scientific causes. In hindsight, the social and
psychological aspects of the Salem community were too intricate and complicated to
allow the recognition of this widespread yet shrouded disorder. The phenomenon of the
Salem witch scare can be explained by the physiological condition of ergotism; such an
explanation would be consistent with the events of the time period (Caporael, 1976, para. 2).

One proposal is that all the strange behavioral symptoms were fraudulent and that
the afflicted girls had their own motivations for fakery. Some believe that the girls were
merely looking for attention or that they were hoping to avoid parental discipline in
response to their suspiciously-viewed interests in magic. One flaw in this theory is that
the apparent severity of the condition of each of the girls was far too great to be
purposefully faked. Another less common view is that there was actual witchcraft
involved and that the girls were "sophisticated necromancers" who had
contact with the realm of black magic. Yet another opinion is that nearly everyone in the
community of Salem had simultaneously fallen into a state of hysteria and psychological
illness (Caporael, 1976, para. 17-21).

One of the most credible theories is that there was some sort of physiological cause
for the symptoms and that the afflictions of the girls were legitimate. This theory has not
been fully considered because the Puritans of Salem were not able to ascertain any
plausible physical cause for the symptoms of the girls; hence, no physiological
explanation was put forward. Today, based on the descriptions of the events
surrounding the witch scare and the advancement of modern science, it can be inferred
that there may have been a physiological cause (Caporael, 1976, para. 22).

Another historical theory was that the legendary author and poet Robert Louis
Stevenson was heavily influenced by a form of ergotism. As a child, Stevenson was
extremely ill most of the time and was regularly given a medicinal derivative of ergot to
treat his symptoms. The theory suggests that this hallucinogenic drug affected him so greatly
that he was compelled and inspired to write his famous novel The Strange Case of Dr.
Jekyll and Mr. Hyde (Robert Louis Stevenson, n.d., para. 9-10). Regardless of the veracity
of this theory, the parallels between this story and the daunting negative effects of
ergot on the mind are substantial.

Clearly, the severity of its symptoms made ergotism an extremely powerful
factor in the development of early civilizations and in the daily lives of members of
society. It swept death across populations while remaining a mystery until recent
centuries, and its effects were incalculably numerous and life-changing to all it touched.

Literature Cited

Caporael, L. (1976). Ergotism: The Satan Loosed in Salem? Science, 192.

Cartwright, F. F. and Biddiss, M. D. (1972). Mass Suggestion. In Disease and History
(pp. 197-214). New York: Dorset Press.

Definition of Ergotism. (2000). Retrieved March 4, 2008, from
http://www.medterms.com/script/main/art.asp?articlekey=14928

Ergotism. (n.d.). Retrieved March 5, 2008, from
http://www.itg.be/itg/DistanceLearning/LectureNotesVandenEndenE/48_Mycotoxinsp2.htm

McMullen, M. and Stoltenow, C. (2002). Ergot. Retrieved March 5, 2008, from
http://www.ag.ndsu.edu/pubs/plantsci/crops/pp551w.htm

Robert Louis Stevenson. (n.d.). Retrieved March 29, 2008, from
http://www.pbs.org/wgbh/masterpiece/kidnapped/stevenson.html

Chapter 5
Hansen’s Disease (Leprosy)

Elizabeth Bachelder, Blake Reeves, Gabriela DeFosse

Historical Background

Leprosy, also known as Hansen’s disease, has been feared and misunderstood
throughout its history, thought by many to be a purely hereditary disease, a curse, or
a punishment from God. Leprosy sufferers were brutally stigmatized and shunned.
Many references to leprosy and leprosy victims exist in the Bible, the latter being
considered physically and spiritually unclean. During the Middle Ages in Europe,
leprosy victims had to wear special clothing, ring bells to warn others that they were
close, and walk on a particular side of the road, depending upon the direction in
which the wind was blowing.

Leprosy may have occurred as early as 1550 B.C. in Egypt. Many scholars
believe that leprosy appears in an Egyptian papyrus document written close to that
year. Indian writings portray a disease that resembles leprosy 950 years later, ca. 600
B.C. In Europe, leprosy first appeared in the records of ancient Greece following the
return of Alexander the Great from India. In ca. 62 B.C., leprosy appeared in Rome,
coinciding with the return of troops fighting in Asia Minor.

Even during more modern times, leprosy treatment occurred in separate
hospitals and live-in colonies called leprosariums. Leprosy has been so prevalent in
various areas at certain times throughout history that it has inspired artwork and
influenced other cultural practices (History of Leprosy, n.d., p. 1).

Leprosy in the Bible

The visual impact of leprosy is horrific, leaving a lasting impression on those
who see it in person because of the graphic disfigurement it often causes its
victims. This may be the reason why, thousands of years ago, leprosy was linked
very closely with sin and evil. Influential writers such as Augustine the Bishop
made multiple references relating leprosy to sin (Davies, 1890, pp. 151-152). He
expressed that a person in severe pain must be getting punished for some terrible
deed he or she had committed. Documentation describing leprosy as early as
2400 B.C. was found in Egypt.

One way Ramses II legitimized his mass killing of 90,000 Jews in Egypt circa
1250 B.C. was by accusing them of “harboring a disgraceful disease” which is
interpreted as leprosy (Porter, 1996, p. 162). Because lepers were viewed as dirty and
corrupted by sin, they were excluded from the community and forced into inhumane
holding camps. All rights belonging to a leper were stripped and families were torn
apart by law for fear of spreading the tragic disease to children and others. If that
were not terrible enough, lepers were also made to wear a bell or something similar to
warn others that a leper was approaching (Porter, 1996, p. 163).

Until Jesus arrived, no one respected the rights lepers had as human
beings; they were treated like rats. Once Jesus came, he would take care of the lepers
and sometimes cure them, transforming their terrible image into that of
"Christ's poor." Until that point, lepers were feared and thought to be the cause of
almost everything that went astray in society. The Black Death was even blamed on
lepers; people accused them of poisoning the wells and causing chaos (Porter, 1996,
p. 163). It is fairly easy to comprehend the fears people had of lepers, but today we
know that despite their unfortunate appearance, they really are not as dangerous as
they seem.

Morphology and Causative Agent

In 1873, the Norwegian doctor Gerhard Henrik Armauer Hansen, for whom the
disease is named, became the first person to identify the germ that causes leprosy. A
rod-shaped bacterium, Mycobacterium leprae, which multiplies very slowly and
mainly affects the skin, nerves, and mucous membranes, is the causative agent of
leprosy. The organism has never been grown in bacteriologic media or cell culture,
but it has been grown in mouse foot pads. Because this bacterium can only be seen
under a microscope using special acid-fast staining techniques, it is referred to as an
acid-fast bacillus (CDC, n.d., para. 3).

Etiology and Effects

Two reactions can occur from the entrance of Mycobacterium leprae into the
body – a milder reaction and a stronger reaction. Tuberculoid leprosy, or TT, is the
milder reaction. In the deeper layers of the skin, the immune cells of the body
attempt to seal off the infection from the rest of the body by surrounding the
Mycobacterium leprae. As a result of this immune system response, the hair follicles,
sweat glands, and nerves can be destroyed, therefore causing the skin to become dry,
discolored, and insensitive to touch. Because of the rarity of bacteria in this type of
leprosy, it is referred to as paucibacillary (PB) leprosy. Of all leprosy cases, seventy to
eighty percent are TT.

Lepromatous leprosy, or LL, is the second, stronger reaction. The immune
system of the body is unable to create a strong response to the invading organism.
Therefore, the organism multiplies freely in the skin. Because of the large numbers of
bacteria present in LL, the disease is often referred to as the multibacillary (MB)
leprosy. The characteristic feature of this disease is the appearance of lesions all over
the body and face. Occasionally, the mucous membranes of the eyes, nose, and
throat may be involved, which often produces a lion-like appearance. This type of
leprosy can cause blindness, major change in voice, or mutilation of the nose.

Rumors and reports about a connection between armadillos and leprosy in Louisiana
and Texas have sparked great interest among conspiracy theorists from all over the world.
If true, such a connection would show that leprosy can be a zoonotic disease
spread by armadillos. Unfortunately, or fortunately, no officially confirmed cases of
armadillo-caused leprosy in humans have been found, though some armadillos are
known to carry leprosy naturally. As more research is conducted, the true origin of
leprosy may one day finally be revealed.

Populations Affected and Geographic Distribution

Fortunately, leprosy has been eliminated from most countries. However,
most of the incidents of leprosy that still occur are located in the most poverty-
stricken regions of the globe. Therefore, one could argue that environmental factors
such as poor sanitation, overpopulation, and malnutrition contribute to the
contraction of the disease.

Relatively large groups of people infected with the disease still remain in some
areas of Angola, Brazil, Central African Republic, Democratic Republic of Congo,
India, Madagascar, Mozambique, Nepal, and the United Republic of Tanzania. Leprosy
in the United States is rare, but the disease is still a problem in sections of Texas and
Louisiana, possibly because of the presence of the nine-banded armadillo.

In 1997, there were approximately 1.2 million cases of leprosy worldwide, and
Africa and Asia reported the highest numbers. Approximately 600,000 new cases are
reported annually. Although everyone is capable of contracting leprosy, children seem
to be more susceptible than adults (World Health Organization, n.d., para. 5).

Mode of Transmission

The mode of transmission of leprosy is still uncertain. However, most scientists
believe that Mycobacterium leprae is usually transmitted from person to person in
respiratory or nasal droplets. Approximately fifty percent of patients diagnosed
with the disease have reported close contact with infected family members. The
milder form of leprosy, TT, could also be transmitted by insect carriers or by
contact with infected soil (Iranderma, n.d., para. 6).

Current research suggests that genetic factors may also be involved in susceptibility
to leprosy. In 2003, scientists studying a large Vietnamese family with many cases of
leprosy found that susceptibility to the disease was linked to region q25 on the long
arm of chromosome 6. Additional study indicated that the gene conferring leprosy
susceptibility is located within a region shared by two genes associated with
Parkinson's disease, leading scientists to suspect that the occurrence of leprosy in
particular individuals is related to the inheritance of these genes.

Leprosy Treatment and Immunization

Leprosy treatment generally consists of a mixture of drugs. In the past, the oil
of seeds from the chaulmoogra tree was used to relieve leprosy, as the natives of
southeastern Asia have long known of its curative properties (Hunting the
Chaulmoogra Tree, 1922, p. 22). As time progressed, dapsone became the most widely
used drug for leprosy. However, dapsone-resistant strains have introduced the need
for multi-drug treatment with a combination of dapsone, rifampicin, and clofazimine
(Britton, 2004, p. 1).

This approach is viewed as the most effective treatment for preventing nerve
damage, deformity, disability, and further transmission, and the majority of patients
with PB leprosy, or TT, are given this treatment. Usually, after three months of
treatment, the patient is no longer infectious. However, depending on the type of
leprosy, that time frame could realistically be two years or longer. Researchers are
working on developing a vaccine and ways to detect leprosy sooner in order to start
treatment earlier.

These drugs, although effective, do have side effects. Dapsone can cause
nausea, dizziness, palpitations, jaundice, and rash. Rifampicin may also cause muscle
cramps or nausea. Clofazimine may cause severe abdominal pain and diarrhea, as well
as discoloration of the skin. Red to brownish-black discoloration of the skin and bodily
fluids, including sweat, may persist for months to years after use. Because of these
side effects, thalidomide, the most infamous agent of birth defects in the twentieth
century, is now being used to treat complications of leprosy and similar diseases.
Thalidomide regulates the immune response by suppressing a protein, tumor necrosis
factor alpha (Mycobacterium leprae, n.d., para. 4).

Unfortunately, all of these drugs used for treatment can cause a lepra
reaction, which is a serious immune response. As antibiotics kill Mycobacterium
leprae, antigens (the proteins on the surface of the organism that initiate the immune
system response of the body) are released from the dying bacteria. When the antigens
combine with the antibodies to eliminate Mycobacterium leprae in the bloodstream, a
reaction called Erythema nodosum leprosum may occur, resulting in new lesions and
peripheral nerve damage. Medications utilizing cortisone are used increasingly to
minimize these effects.

There is no immunization for leprosy as of now. Until recently, the study of the
immunology of leprosy has been hindered by the lack of immunologically specific
Mycobacterium leprae antigens. The finding of specific antigens has invigorated
research efforts, and scientists are working diligently to find a cure (Hastings, n.d.,
para. 3).

Literature Cited

Britton, W. J. (2004, April 14). Leprosy. Retrieved April 3, 2008, from PubMed Web
site: http://www.ncbi.nlm.nih.gov/pubmed/15081655

Centers for Disease Control and Prevention. (n.d.). Hansen's disease (leprosy).
Retrieved March 4, 2007, from www.cdc.gov/ncidod/dbmd/diseaseinfo/

Davies, T. Wytton (1890, September). Bible Leprosy [Electronic version]. The Old and
New Testament Student, 11(3), 151-152. http://links.jstor.org/sici?sici=01905937%28189009%2911%3A3%3C142%3ABL%3E2.0.CO%3B2-5

Hastings, R. C. (n.d.). Leprosy. Retrieved April 3, 2008, from BioInfoBank Web site:
http://lib.bioinfo.pl/meid:60476

History of leprosy. (n.d.). Retrieved March 4, 2007, from
http://www.stanford.edu/class/humbio103/ParaSites2005/Leprosy/history.htm

Leprosy. (n.d.). Retrieved March 4, 2007, from http://www.iranderma.com/leprosy.htm

Mycobacterium leprae. (n.d.). Retrieved April 3, 2008, from
http://microbes.historique.net/leprae.html

Porter, R. (Ed.). (1996). The Cambridge Illustrated History of Medicine (pp. 161-164).
New York: Cambridge University Press.

Rock, J. F. (1922, March). Hunting the Chaulmoogra Tree. The National Geographic
Magazine, XLI(3).

World Health Organization. (n.d.). Leprosy today. Retrieved March 4, 2007, from
http://www.who.int/lep/en/

Chapter 6
Hemophilia
Christopher Bullock, Monica Kacprzyk, and Catherine Shea

In the Talmud, a collection of Rabbinical writings from the 2nd century AD,
scribes stated that male babies did not have to be circumcised if two brothers had
already died from the procedure. Most people of the ancient times did not know that
what seemed like a sign of physical weakness was actually the result of a blood
disorder known as hemophilia (Canadian Hemophilian Society, 2006, para.5).
Hemophilia is a rare, inherited bleeding disorder in which blood does not clot
normally. People who have hemophilia often have longer bleeding after an injury or
surgery.

Morphology

Hemophilia is a genetic disorder that reduces the level of clotting factors.
Females can be carriers but are often unaffected by the disease because women
have two X chromosomes, and one normal gene can compensate for a defective gene.
However, males have only one X chromosome, so if they have a defective copy of
the gene, then they will be afflicted by hemophilia. A mother carrying the gene for
hemophilia has a 50% chance of passing the gene on to each child, whereas a father
with hemophilia will always pass the defective gene on to his daughters, but not to his
sons. Hemophilia in a child often becomes evident following circumcision or when
the child begins to toddle. Children will bleed and bruise easily, and bleeding will take
longer than normal to stop.
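
As a concrete illustration of these inheritance odds, the short Python sketch below (our own added example, not part of the cited sources) enumerates the four equally likely outcomes of a cross between a carrier mother and an unaffected father, reproducing the 50% chance that each child inherits the defective gene.

    # A minimal sketch of X-linked recessive inheritance (hemophilia).
    # "Xh" marks an X chromosome carrying the defective clotting-factor gene.
    from itertools import product

    carrier_mother = ("X", "Xh")      # one normal X, one defective X
    unaffected_father = ("X", "Y")    # contributes either an X or a Y

    # Each child gets one X from the mother and one sex chromosome from
    # the father; the four pairings are equally likely.
    children = [tuple(sorted(pair)) for pair in product(carrier_mother, unaffected_father)]

    affected_sons = children.count(("Xh", "Y"))
    carrier_daughters = children.count(("X", "Xh"))

    print(children)  # [('X', 'X'), ('X', 'Y'), ('X', 'Xh'), ('Xh', 'Y')]
    print("P(child inherits the gene) =",
          (affected_sons + carrier_daughters) / len(children))  # 0.5

The same enumeration shows why an affected father passes the gene to all of his daughters but none of his sons: every daughter receives his single X chromosome, while every son receives his Y.
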
There are two forms of sex-linked hemophilia: A and B. Hemophilia A (also known as
classic hemophilia) is caused by a deficiency in clotting factor VIII (F8); it affects
about 1 in 10,000 people and accounts for roughly 90% of the hemophilic community.
Hemophilia B (also known as Christmas disease) is caused by missing or low levels of
clotting factor IX (F9) and affects approximately 1 person in 50,000. The disease can
be further classified as mild (5-30% of normal factor levels), moderate (1-5% of
normal levels), or severe (less than 1% of normal levels); the level of the coagulation
factors affects the risk of serious complications and the need for treatment (Canadian
Hemophilia Society, 2006, p. 17; Genetics Home Reference, 2007, para. 4). Close to
70% of patients who have hemophilia A have the severe form of the disorder, with a
factor VIII activity of less than 1% (Hemophilia, 2008, para. 4).
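
The severity bands quoted above lend themselves to a simple worked example. The sketch below (again our own illustration, with hypothetical sample values, not drawn from the cited sources) classifies a measured clotting-factor activity level against the 1%, 5%, and 30% cut-offs given in the text.

    def classify_severity(factor_activity_percent: float) -> str:
        """Classify hemophilia severity from clotting-factor activity,
        using the bands cited above: severe <1%, moderate 1-5%, mild 5-30%."""
        if factor_activity_percent < 1:
            return "severe"
        if factor_activity_percent < 5:
            return "moderate"
        if factor_activity_percent <= 30:
            return "mild"
        return "above the hemophilia range"

    for level in (0.5, 3.0, 20.0, 60.0):   # hypothetical sample values
        print(f"{level}% factor activity -> {classify_severity(level)}")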

Affected Populations

Around the 1970s, the geographic distribution of hemophiliacs tended to
resemble the distribution of all U.S. males, except for a higher concentration in the
Mid-Atlantic region, with 26% of the hemophilia population but only 18% of the male
population. The Pacific, New England, and West North Central regions also had a
slightly higher proportion of hemophiliacs. One of the most famous aspects of the
spread of hemophilia is the infection of the royal families of Europe through Queen
Victoria’s descendants (Resnik, 1999, para 2).

Mode of Transmission

Hemophilia is inherited in an X-linked recessive pattern. A condition is
considered X-linked when the gene mutation that causes it is located on the X
chromosome, one of the two sex chromosomes. In males (who have only one X
chromosome), one altered copy of the gene in each cell is sufficient to cause the
condition. In females (who have two X chromosomes), a mutation must be present in
both copies of the gene to cause the disorder. Males are affected by X-linked
recessive disorders much more frequently than females. A striking characteristic of X-
linked inheritance is that fathers cannot pass X-linked traits to their sons (National
Human Genome Research Institute, 2008, para. 9).

In X-linked recessive inheritance, a female with one altered copy of the gene in
each cell is called a carrier. She can pass on the altered gene to her children, but
usually does not experience signs and symptoms of the disorder. In only about 10% of
cases do females who carry one altered copy of the F8 or F9 gene experience mild
problems with bleeding (National Human Genome Research Institute, 2008, para. 10).

Etiology

Mutations in the F8 and F9 genes cause hemophilia. These mutations lead to the
production of an abnormal version of coagulation factor VIII or coagulation factor IX.
The altered protein cannot participate effectively in the blood clotting process and,
in some cases, does not work at all. A shortage of either protein prevents
clots from forming properly in response to injury. Some mutations almost completely
eliminate the activity of coagulation factor VIII or factor IX, resulting in severe
hemophilia. Other mutations reduce but do not eliminate the activity of one of these
proteins, which usually causes mild or moderate hemophilia (History of Hemophilia,
2004).

A rare form of the condition, acquired hemophilia, results when the body
makes specialized proteins called auto-antibodies that attack and disable coagulation
F8. The production of auto-antibodies is sometimes associated with pregnancy,
immune system disorders, cancer, or allergic reactions to certain drugs. In about half
of cases, the cause of acquired hemophilia is unknown (History of Hemophilia, 2004,
para. 2).

Treatment

There is currently no cure for hemophilia, and treatment depends on the
severity of the disease. The main form of treatment is called replacement therapy,
which involves giving or replacing the clotting factor that is too low or missing.
Concentrates of clotting F8 (for hemophilia A) or clotting F9 (for hemophilia B) are
slowly dripped in or injected into a vein. Clotting factor concentrates can be made
from human blood that has been treated to prevent the spread of diseases, such as
hepatitis, and with the more refined methods of screening and treating donated
blood, the risk of developing an infectious disease from clotting factors is very
minimal. To further reduce that risk, some patients take concentrates that do not use
human blood, which are called recombinant clotting factors.

Some hemophilic patients may have replacement therapy on a regular basis to
prevent bleeding, also known as preventive or prophylactic therapy. Others may only
need replacement therapy to stop bleeding when it occurs. The use of the treatment
on an as-needed basis is called demand therapy. Therapy that is given as needed is
less intensive and less expensive than preventive therapy, but there is a risk that
bleeding will cause damage before the as-needed treatment is given (Genetics Home
Reference, 2007, para. 11).

Another form of treatment uses desmopressin (DDAVP), a man-made
hormone used to treat people with mild to moderate hemophilia A. DDAVP stimulates
the discharge of stored factor VIII and von Willebrand factor and increases the level of
these proteins in blood. Von Willebrand factor carries and binds factor VIII, which
then can stay in the bloodstream longer. The effect of DDAVP wears off when it is used
often, so it is usually reserved for certain circumstances; it is given by injection or in
a nasal spray (Genetics Home Reference, 2007, para. 12).

Antifibrinolytic medicines (including tranexamic acid and aminocaproic acid)
may be used with replacement therapy. Antifibrinolytic agents prevent the
breakdown of blood clots by neutralizing chemicals in the mucous membranes of the
mouth, nose, and urinary tract that break down clots (Genetics Home Reference,
2007, para. 12).

Influence and Impact on History

The first modern description of hemophilia is attributed to Dr. John Conrad
Otto, a Philadelphia physician who, in 1803, wrote an account of "an hemorrhagic
disposition existing in certain families." He recognized that the condition was
hereditary and affected males. Otto traced the disease back three generations
to a woman who had settled near Plymouth, New Hampshire, in about 1720. However,
the word "hemophilia" first appears in a description of the condition written by Hopff,
a pupil of Schönlein at the University of Zurich, in 1828. It was not until 1952 that
hemophilia B was distinguished from the more common type; it is referred to as
"Christmas disease" after the surname of the first child reported with this condition
(History of Hemophilia, 2004, para. 2).

Hemophilia is often known as the Royal Disease because many royal families
were afflicted with it in the early 20th century. Queen Victoria was a carrier as the
result of a spontaneous mutation in her gene. She passed the gene on to two of her
daughters, Alice and Beatrice, and to her son Leopold. He lived a sheltered life and
was described as “delicate” and later died after a minor fall at the age of 31.
Beatrice married Prince Henry of Battenburg; two of their sons had hemophilia and
died of the disorder, and one daughter (Eugenie) was a carrier. She married Alfonso
XIII of Spain. Their two sons died of hemophilia; however, their daughters were not
carriers. Alice married Prince Louis of Hesse; they had six children, none had the
disease, but two were carriers: Irene and Alix (Alexandra). Irene married Prince
Henry of Prussia and brought hemophilia to the German line. They had three sons,
two of whom died of hemophilia. Alix married Nicholas II, tsar of Russia. Their first
four children were girls whose carrier status is unknown; their last child was a boy, Alexis,
heir to the throne and a severe hemophiliac (Vaillincourt, 2007, pp. 6-11).

On the day that Alexis was born, cannons rang out to spread the news that the
child was the boy who would eventually rule all of Russia. However, it soon became
clear that there was something wrong with him: he bruised and bled easily. Before he
was a year old, he was diagnosed with hemophilia. The family decided to keep the
young tsarevich's ailment a secret for fear that the people would view it as a
weakness. The boy suffered agonizing bleeding episodes that left him bedridden for
periods of time.

It was Alexis's condition that first brought Gregori Rasputin into the palace.
Rasputin was a self-proclaimed holy man with perverse and drunken tendencies.
Yet the monk was able to ease the boy's suffering and stop his bleeding episodes, so
he became a common visitor at the palace. The Tsar thought of him as the voice of
the people, when in reality the people loathed him for his behavior and began to
mistrust the royals, not understanding why the ruling family would want to associate
with someone like Rasputin. He most likely put Alexis into a hypnotic trance; once the
boy relaxed, the bleeding would stop, and Rasputin is credited with saving his life on
several occasions.

Rasputin's ability to manipulate the boy led to an ability to manipulate the
entire Romanov family: Alexandra loved Alexis, and Rasputin was the only one
who could help him, so he had influence over her, and she, in turn, over Nicholas.
When the tsar went to support and lead the troops in WWI, Rasputin's power grew, as
did the people's mistrust of the royal family. Eventually the people staged a revolution
that ended in the murder of the Romanovs. It is likely that if Alexis had not been a
hemophiliac, Rasputin never would have become involved in the royals' affairs,
and the tsar would have been better able to rule the country and avoid the
revolution. Had this happened, Russia might not have become a communist country,
and history might have taken a drastically different path (Caron, 2002, pp. 1-9).

A major advance in hemophilia treatment was the discovery by Dr. Judith Pool
in 1965 that slow thawing of plasma to around 4˚ C led to the appearance of a brown
sediment rich in factor VIII. This breakthrough was considerably helpful to many
hemophiliacs, who could store these products in a domestic refrigerator at 4˚ C,
facilitating home treatment and further freeing them from the physical and psychological
constraints of hemophilia (History of Hemophilia, 2004, para. 7).

However, researchers now recognize that this introduced the potential for the
transmission of viruses. Large numbers of patients around the world were infected
with HIV in the period 1979-1985. Many hemophiliacs were diagnosed with hepatitis
C virus (HCV), first identified in 1989, and exposure to this virus led to chronic
liver disease. The introduction of physical treatments, such as exposure to heat or the
addition of a solvent-detergent step, has effectively eliminated the risk of the
on the history of politics and medicine through the disintegration of the Russian royal
throne, the creation of innovative biotechnology, and the spread of disease (History
of Hemophilia, 2004, para. 7).

Literature Cited

Canadian Hemophilia Society. (2006). Canadian Hemophilia Society. Retrieved
March 3, 2008, from http://www.hemophilia.ca/en/index.html

Caron, D. (2002, April 30). Alexis' Hemophilia: The Triangle Affair of Nicholas
II, Alexandra, and Rasputin. Retrieved March 29, 2007, from
http://it.stlawu.edu/~rkreuzer/pcaron/alexisillness.html

Genetics Home Reference. (2007). Retrieved March 5, 2008, from
http://ghr.nlm.nih.gov/condition=hemophilia#treatment

Giangrande, P. L. F. (2004). History of Hemophilia. Retrieved March 5, 2008, from
http://www.wfh.org/2/1/1_1_3_HistoryHemophilia.htm

Hartmann, J. R. and Bolduc, R. A. (1956, February). Hemophilia. The American Journal
of Nursing, 56(2), 169-174.

Hemophilia. (2008). Retrieved March 5, 2008, from
http://www.nhlbi.nih.gov/health/dci/Diseases/hemophilia/hemophilia_what.html

National Human Genome Research Institute. (2008). Retrieved March 5, 2008, from
http://www.genome.gov/20019697#3

Resnik, S. (1999). Blood Saga. San Francisco: University of California Press.

Chapter 7
Influenza

Natalie Copeland and Shelley Wang

History

During World War I, 23,700 Americans died in the final week of October 1918.
Of those fatalities, 21,000 died on American soil from the flu; only 2,700
were war casualties. This virus ran rampant around the whole world, from the
Eskimos of Alaska to the battle trenches in Germany. At first glance, it seemed that
the infection began in Spain, hence its nickname, the "Spanish flu," but that was not
entirely true (Persico, 1976, para. 1-2). Although the origin of this virus is still not
known for sure, scientists now believe that the virus mutated in Cedar Rapids, Iowa in
the United States of America. At the time, a Dr. J. S. Koen noticed a similarity
between the ailment afflicting animals at the National Swine Breeders’ show and the
illness contracted by residents in that area. Swine began contracting this sickness,
diagnosed by the symptoms of sweating and whimpering, during late September 1918.
His conclusion, confirmed about a decade later by scientists, was that there was a
connection between the influenza strain afflicting humans and the “hog flu,” as he
called it, which was affecting the hogs in the area (Persico, 1976, para. 23).

Four hundred fifty miles from Cedar Rapids, at Fort Riley, Kansas, a dust storm
so thick that it clouded out the sun blew in over the army base in March; only two
days later, army personnel were in the infirmary with influenza symptoms, and forty-six
of the thousand or so who fell sick died. In May the troops departed for France, where
the French troops were soon diagnosed with influenza. British soldiers returning to
England from France soon infected their fellow countrymen, and soon after, Asia was
also contaminated. By August, the newest deadly strain of influenza was brought
back to America on a Norwegian passenger ship docked in New York (Persico, 1976,
para. 3-6).

Political and Military Events

Although the influenza epidemic of 1918 is the most widely known (and the
most deadly) of influenza outbreaks in the past millennium, the virus did exist before
then. In fact, the earliest recorded influenza pandemic was described in 412 B.C. by
Hippocrates. Similar to the outbreak in 1918, the 412 B.C. pandemic struck the
Athenian army during one of its campaigns. Even during the American Revolutionary
War, a multitude of cases were reported among the American troops camped at
Valley Forge, Pennsylvania, during the winter of 1779 (Persico, 1976, para. 8).

By early October 1918, Woodrow Wilson, President of the United States and
Commander in Chief of the United States Army, seriously considered stalling the
shipments of reinforcements to the front lines in Europe due to the unsanitary
conditions on the ships and in the camps, which bred influenza so effectively that one
out of every three soldiers died before even reaching the battlefield. However, American
General Peyton March strongly insisted that the troops be sent, and he convinced
President Wilson by asserting that whether a soldier died of disease or of
mortal wounds, he had served his country in the best way that he could. To stop the
flow of troops would be to ensure failure, and all because of some microscopic germ.
Strengthening the argument further, not only were the Americans currently winning
their battles, but the German Imperial Chancellor, Prince Max, had also contacted
Wilson a few days earlier requesting an armistice.

Not to send troops would be to give a weaker appearance, and the option of
peace at last might disappear as quickly as it had come. In the end, Wilson agreed to
keep sending replacement soldiers and ended his debate with this rhyme, later
recited by American children all over the States (Persico, 1976, para. 34-41):

"There was a little bird,
its name was Enza.
I opened the window
and in-flu-enza."

Etiology

Influenza is a zoonotic disease: the origin of the human virus was the
infective agent in animals, specifically birds. Avian influenza occurs naturally in
wild birds, carried in the intestines of the animals. It is very contagious and spreads
to domesticated fowl, such as chickens. Birds are not the only animals
affected, however, and the infection soon spread to pigs and horses. Avian strains of
influenza combined in pigs to form strains that affected humans (Keen, 1995, para.
9). The disease spread to humans through the contact of farmers with their farm
animals (Porter, 1996, p. 21).

The virus occurs in three types: A, B, and C. Type C causes only mild symptoms
and is not usually discussed when attempting to control influenza (Davis, 2007,
Causes, para. 1). Type B is the common form of the flu, which circulates the globe
each year. Although usually just a nuisance in the winter months, it can be potentially
fatal when paired with complications such as pneumonia. Each year there are a
multitude of new strains of type B influenza virus because the influenza virus morphs
and adapts constantly. Type B is found in humans and in animals, specifically swine
and fowl. An avian influenza virus ordinarily does not affect humans because the two
are not compatible, and vice versa.

Pigs, however, can be infected by both human and bird strains of flu. This provides a
perfect opportunity for the two to mix and become a hybrid influenza (a process
called reassortment), which is more dangerous than a purely human form because it
introduces viral components that human immune systems are not yet accustomed to.
In fact, in 1957 and 1968 there were smaller influenza pandemics caused by hybrid
viruses that scientists believe were reassorted in pigs and then jumped back to humans
(Appenzeller, 2005, para. 25). When bird flu makes the jump to humans, it is a
type A virus, the most deadly kind. Making the jump from an avian influenza virus to a
human influenza virus is difficult and does not happen often. The last widespread
case of this was in 1918 with the Spanish influenza epidemic, which killed millions
around the world. The most recent case was in 1997 in Hong Kong, where a
three-year-old boy contracted the virus from local chickens. Despite all the
medicines and special medical treatment he received, he died within a week
(Subbarao, 1998, para. 1).

Since the emergence of H5N1, the current avian flu, there has been much preparation to
avoid another pandemic of proportions like that of 1918. Each year more cases have
been appearing, primarily in Hong Kong, China, or Vietnam. Vietnam, where the
avian flu has hit hardest in recent years, does not have monetary resources as
plentiful as Hong Kong's to spend annihilating the chicken and duck populations in a
preemptive strike against the flu. As of yet, H5N1 has only been able to move from
bird to human. There have been a few unverified, isolated cases in which some
scientists believe the patient caught the virus from a dying victim, but no leaps other
than that have been recorded (Appenzeller, 2005, para. 41-58).

Viral Morphology

Influenza virus is deadly to humans because of the structure of the virion,
which allows it to replicate quickly once in the body and to change strains over the
years. The virion is rounded in shape but can also be long and thread-like. The
outside, a lipoprotein bilayer called the envelope, is very sensitive to changes in the
environment of the virion, being easily affected by conditions such as heat, drying, or
detergents. The inside of the envelope is covered in a matrix protein, MP 1. The
outside is comprised of two types of proteins that are capable of stimulating immune
responses. Neuraminidase (NA) is a box-shaped protein with enzyme-like properties.
Haemagglutinin (HA) has a receptor site that helps the virion attach and fuse to
specific particles such as the cell membranes of red blood cells (Keen, 1995, para. 7).
Inside the virion are 8 segments of ribonucleoprotein (RNP), which contain the
genome of the virus. The RNP is an important part of the replication process once the
virus enters a cell through endocytosis. In a reaction triggered by the low pH inside the
cell, the envelope fuses with the membrane of the enclosing vesicle and releases RNP
into the cell. The RNP travels through the cell cytoplasm into the nucleus, where viral
RNA begins to be produced instead of the normal RNA of the cell. New virions are then
released into the environment of the cell in a process called gemmation, or budding.

Influenza A is the most common viral type, classified by varying changes in the
HA. Influenza B is recognized by a slower change in the HA, and it only affects
humans. Influenza C also only affects people, and it is very uncommon. Other
classification criteria for an influenza virus are the town where it first appeared, the
number of isolated strains, the year of isolation, and the type of HA and NA.

Because the genes of the virus are carried on separate segments of RNP, the genes of
one type of virus can swap with those of other viruses present in the same cell. The
viruses formed after replication will be different from the ones that entered the cell.
NA is found in 9 major types, and HA in 13 major types, so many different influenza
subtypes can be formed out of combinations of the two proteins (as the sketch below
illustrates). Every 10-15 years, a new strain with major changes appears within the
human population; this is called an antigenic shift. Every 2-3 years, minor changes
appear in strains; these are antigenic drifts (Keen, 1995, para. 12).
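
Taking the counts above at face value (13 major HA types and 9 major NA types; later references list more), the short Python sketch below, added purely for illustration, enumerates the possible HxNy subtype labels that such combinations allow.

    # Enumerate influenza A subtype labels from the HA and NA counts above.
    from itertools import product

    ha_types = range(1, 14)   # H1 .. H13, per the text
    na_types = range(1, 10)   # N1 .. N9, per the text

    subtypes = [f"H{h}N{n}" for h, n in product(ha_types, na_types)]
    print(len(subtypes))   # 117 possible combinations of the two proteins
    print(subtypes[:5])    # ['H1N1', 'H1N2', 'H1N3', 'H1N4', 'H1N5']

In these terms, antigenic drift corresponds to minor variation accumulating within one such subtype, while an antigenic shift produces a virus whose surface-protein combination the human population has not recently encountered.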

Populations Affected

Human infection is believed to have originated in Asian countries and then
travelled from the East to other parts of the world. Even places where influenza did
not naturally occur were affected by the virus, because people who moved
frequently from one location to the next, such as merchants and marauders, spread
the disease. Other groups responsible for the spread of viruses were armies, large
masses of people that moved quickly from place to place (Porter, 1996, pp. 24-25).
As the world was explored and previously untraveled parts of the earth became
known, no part of the world was safe from viruses. The natives of the New World
were introduced, by Christopher Columbus and other travelers, to influenza and
other diseases they had previously avoided (Porter, 1996, p. 32).

The source, or reservoir, of influenza is the infection in human beings. It is
spread from person to person when airborne droplets and fomites carrying the
infective agents reach the respiratory tract (Keen, 1995, para. 3). The incubation
period is only 1-3 days, and then the virus spreads rapidly.

Influenza now affects human populations worldwide. The influenza virus
strains seen in the United States come from Asia in the east, specifically China
and Vietnam, and the type of influenza virus determines which demographic of
people it will spread among and infect. Type A influenza, also known as the avian
flu, is the deadliest and can strike even the young adult population, who can usually
overcome a common viral or bacterial infection. In the past century, research in
virology has shown that the so-called Spanish influenza was in fact a type A swine
influenza virus (Appenzeller, 2005, para. 21-25). Type B influenza is the most
common, and it circulates the globe each year. The flu vaccines developed each year
are first provided to those people considered at risk for infection and later to the
rest of the population. To be considered at risk, a person would either be young,
such as a toddler or infant, elderly, or afflicted with a chronic respiratory disease
such as chronic bronchitis or asthma (World Health Organization, 2008, para. 10).

Mode of Transmission

The influenza virus can only infect someone by entering their body through the
nasal passage. This was shown in 1918 by Dr. Joseph Goldberger, a public-health
doctor in Boston, in his experiment in search of a vaccine for the influenza of 1918
(Persico, 1976, para. 30). Transmission between species occurs most often between
birds, pigs, and humans due to reassortment in pigs. This process occurs when half of
an avian flu virus combines with half of a human virus to form a hybrid influenza virus
(Appenzeller, 2005, para. 23). The avian influenza also has the ability to mutate into
a human virus; this has already happened a few dozen times since the newest strain
of avian influenza, H5N1, appeared in humans in 1997 (Appenzeller, 2005, para. 32).

Treatment

The common winter type B influenza virus can be cured with rest and some
medicines. The best way to treat the virus is to act preemptively by becoming
vaccinated before flu season begins (Davis, 2007, Treatment, para. 1). The type A
H5N1 strain does not yet have a cure; however, oseltamivir, brand name Tamiflu, is
an antiviral prescription drug created to fight the avian flu. Although it is specially
designed with that purpose in mind, it does not always work, and therefore the avian
flu has a high mortality rate (Appenzeller, 2005, para. 29).

Immunization

Each year, the influenza virus mutates into multiple strains, and scientists do
not have a sure way of knowing which strains will become the most contagious and
infectious in a population. A better understanding of how viruses evolve would be
very helpful for developing and predicting movements of the virus (Holmes,
Taubenberger, & Grenfell, 2005, p. 989). Scientists perform research and make
educated guesses as to which strains will affect the greatest portion of the population
and design their vaccines accordingly. Vaccines produced in modern times take a few
months to be created because they are grown in poultry eggs, which means scientists
have to create the vaccine months before the flu even reaches the United States.
Despite common belief, the flu vaccine cannot give a person the flu. Mild
symptoms such as a low fever or headache may occur in those who have not been
previously exposed to the virus or those with sensitive immune systems. For the
elderly, receiving the vaccine is highly recommended to avoid complications,
which may involve hospitalization and are occasionally, although not usually, fatal.

Influenza Past and Future

Epidemics and pandemics of influenza have occurred since ancient times;
an epidemic is a seasonal outbreak of a type of influenza, whereas a pandemic is a
deadlier spread of a new type of virus throughout a large population of the world. The
first recorded influenza epidemics occurred during the "age of plagues" in Japan
(Porter, 1996, p. 27). Over a hundred epidemics occurred in the period from 700 to
1050, and influenza was among their causes.

The first American epidemic, swine influenza, took place on the island of
Hispaniola in 1493 (Porter, 1996, p. 32). Other influenza epidemics throughout
history include three severe epidemics in Europe during the 16th century. However,
epidemics were not the events that affected the world the most: pandemics occurred
on larger scales and impacted a greater number of people.

Modeling the conditions of future pandemics suggests that the effects could be
very substantial. The results of a medium-sized influenza A pandemic could include
90-200 thousand deaths, over 300 thousand hospitalizations, and 20-40 million people
infected (Fact Sheet: Influenza Pandemics, 2005, para. 16). The cost could be around
$71.3-166.5 billion. The possible consequences of an influenza pandemic should not
be underestimated.
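
As a rough worked example using only the figures quoted above, the following back-of-the-envelope Python sketch (our own arithmetic, not taken from the cited fact sheet) bounds the implied cost per infected person.

    # Bounds implied by the quoted pandemic estimates.
    cost_low, cost_high = 71.3e9, 166.5e9      # dollars
    infected_low, infected_high = 20e6, 40e6   # people

    # Cheapest case: low total cost spread over many infections;
    # most expensive case: high total cost over few infections.
    per_person_low = cost_low / infected_high
    per_person_high = cost_high / infected_low

    print(f"${per_person_low:,.0f} to ${per_person_high:,.0f} per infected person")
    # roughly $1,783 to $8,325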

Unfortunately, scientists have not yet discovered a completely reliable cure for
the influenza virus. Antibiotics may cure the bacterial infections that sometimes
accompany influenza, but not the virus itself. Amantadine is another antiviral
medicine, which may prevent acquiring the infection at the time of a pandemic
(Keen, 1995, para. 15). Vaccines are produced to prevent infection, but they offer at
most about 70% protection. Because new strains continue to develop, research is
being done to find synthetic, universal vaccines effective against all types of
influenza. Other types of vaccines include killed whole virus, live virus, and virus
subunit. The first vaccine was produced in the US in 1944, and in the 1950s and
1960s, scientists discovered more about influenza, allowing them to produce
accurate vaccines more quickly.

The US Centers for Disease Control and Prevention (CDC) and the World Health
Organization (WHO) keep watch for future influenza pandemics. There is also
a US National Pandemic Influenza Response and Preparedness Plan for the future.

Literature Cited

Appenzeller, T. (2005). Tracking the next killer flu [Electronic version]. National
Geographic. Retrieved March 19, 2008, from http://science.national
geographic.com/science/health-and-human-body/human-diseases/next-killer-flu.html

Davis, C. (2007). Flu (influenza). MedicineNet. Retrieved April 2, 2008, from
http://www.medicinenet.com/influenza/article.htm

Holmes, E. C., Taubenberger, J. K., and Grenfell, B. T. (2005). Heading off an
influenza pandemic. Science, 309, 989.

Keen, A. (1995). Influenza. Retrieved March 3, 2008, from
http://web.uct.ac.za/depts/mmi/jmooide/influen2.html

Persico, J. E. (1976). The great swine flu epidemic of 1918 [Electronic version].
American Heritage. Retrieved March 19, 2008, from http://www.american
heritage.com/articles/magazine/ah/1976/4/1976_4_28.shtml

Porter, R. (1996). The Cambridge Illustrated History of Medicine. Cambridge:
Cambridge University Press.

Subbarao, K. et al. (1998). Characterization of an avian influenza A (H5N1) virus
isolated from a child with a fatal respiratory illness [Electronic version]. Science,
279, 393-396. Retrieved April 1, 2008, from http://www.sciencemag.org/cgi/content/
full/279/5349/393?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=influenz
a&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT

World Health Organization (WHO). (2008). Global influenza surveillance. Retrieved
April 2, 2008, from
http://www.who.int/csr/disease/influenza/influenzanetwork/en/index.html

Chapter 8
Malaria

Amanda Olender, Tim Raymond, and Ma’ayan Dagan

Introduction

Malaria has been plaguing humankind since before the start of recorded
history. Annually, there are between 215 and 500 million cases reported, and this
leads to millions of deaths every year. In Africa alone, malaria kills more than one
million people annually, including hundreds of thousands of children. The outbreak of
malaria is most severe in Africa and India, where poor living conditions as well as poor
prevention methods have led to malaria becoming the number one cause
of child mortality as well as the number four cause of death overall (Malaria Facts,
2008, para. 2).
In many developed nations, including the United States and much of Europe,
malaria has been all but eradicated thanks to sustained anti-malaria measures taken
by these countries. For example, there were only 1,337 reported cases of malaria in
the U.S. in 2002, and only eight of these proved fatal. All but five of these cases were
contracted in foreign countries and brought back to the U.S. Even more striking, since
1957 there have been only 63 cases of malaria contracted within the United States
itself. This is due to multiple factors, including both a cooler, less hospitable climate
for the vector and numerous prevention measures. With sustained effort, malaria
cases could likely be drastically reduced in most countries.

Causative Agent and Etiology

Malaria is a vector-borne disease spread from human to human by one of the
30-50 species (out of roughly 430) of the mosquito genus Anopheles that transmit it
(“CDC Malaria: Biology,” 2004, para. 1). The female mosquito infects a human host
by taking a blood meal, in doing so introducing one of the four types of
human-affecting Plasmodium parasite into the body: P. vivax, P. ovale,
P. falciparum, and P. malariae. Of these four types, P. falciparum is the most
prevalent in endemic regions, causing 50% of malarial infections and 90% of deaths.
However, infection with one type does not confer immunity to the others; strains of
multiple, different plasmodia can infect a single individual, especially in endemic
regions (McGrew & McGrew, 1985, p. 165). A mosquito becomes a vector when it
feeds on a human infected with malaria. Following this initial ingestion, the parasite
develops for ten to 21 days, after which any human bitten by the mosquito is
susceptible to being inoculated with the parasite (“Anopheles Mosquitoes,” 2004,
para. 1). The parasite is thus passed from human to mosquito and from mosquito
to human.

The infective agent breeds in the mosquito stomach, human liver tissue, and
the human bloodstream. The parasite undergoes a life cycle that can be tracked
through five basic stages: sporozoitic, merozoitic, erythrocytic, gametocytic,
and zygotic (Chamberlain, 2007, para. 3). In the sporozoitic phase, after the human
host is inoculated, the plasmodia occupy hepatocytes (liver cells) and develop
into schizonts. Merozoites are released from ruptured schizonts, which
completes the exo-erythrocytic cycle. The cycle that follows comprises asexual
reproduction (erythrocytic schizogony) in the bloodstream, in which merozoites
mature into schizonts that rupture to release more merozoites.

The outward evidence of malaria begins when the parasite starts to thrive in
the bloodstream. The parasite is returned to the Anopheles mosquito when male and
female gametocytes are ingested. The parasites then multiply in a stage named the
sporogonic cycle (“Life cycle of malaria,” 2004, para. 1-2). Microgametes fertilize
macrogametes, forming oocysts on the interior of the mosquito gut. Finally, the
oocyst ruptures, sporozoites are released, and the parasites move to the salivary
gland of the mosquito, at which point the life cycle repeats (“Life cycle of
Plasmodium falciparum,” n.d., para. 1). The entire process is completed in
approximately 20 days (“WHO: Life cycle of Plasmodium falciparum,” n.d.).
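
To make the five-stage progression easier to follow, the sketch below (our own illustration, not part of the cited sources) encodes the stages as an ordered sequence that can be stepped through and wrapped around, mirroring the repeating cycle described above.

```python
from enum import Enum

class Stage(Enum):
    SPOROZOITE = 1    # injected by the mosquito; invades hepatocytes
    MEROZOITE = 2     # released from ruptured liver schizonts
    ERYTHROCYTIC = 3  # asexual reproduction in red blood cells
    GAMETOCYTE = 4    # sexual forms ingested by a feeding mosquito
    ZYGOTE = 5        # fertilization in the mosquito gut; oocysts form

def next_stage(stage: Stage) -> Stage:
    """Advance to the next stage, wrapping back to SPOROZOITE when new
    sporozoites reach the mosquito's salivary glands."""
    members = list(Stage)
    return members[(members.index(stage) + 1) % len(members)]

# Walk one full cycle, starting from inoculation of the human host.
stage = Stage.SPOROZOITE
for _ in range(len(Stage)):
    print(stage.name)
    stage = next_stage(stage)
```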

Epidemiology

After an incubation period of seven to 30 days, symptoms begin to appear in
humans. Prophylactic drugs administered to travelers during this period may delay the
progression of the disease by months, but in endemic regions, signs are likely to be
seen just a few days after malarial infection. In an uncomplicated case, a sufferer
endures a six- to ten-hour attack involving fever, chills, sweats, headaches, nausea
and vomiting, body aches, and malaise. Physical signs include high temperature,
weakness, and an enlarged spleen. Without treatment, these symptoms may cycle
every two to three days (“CDC Malaria: Disease,” 2006, para. 2-6).

In a severe case, four stages mark the advancement of malaria. In the first
stage, lasting 45 minutes, a person experiences chills, a sensation of cold, and a
heightened heart rate. The second stage counters the first, raising body temperature.
In the third stage, lasting two to four hours, the person sweats, the fever breaks, and
the person sleeps. Further progression occurs in the fourth stage, in which a physician
can note a swollen spleen, jaundice, and damage to the liver. This stage lasts six to
ten weeks. These symptoms can recur after infection with a different strain of the
disease, especially if immunity has not been developed (McGrew & McGrew, 1985,
p. 165).

Geographic Distribution

Malaria is most prevalent in warm regions close to the equator. These areas,
such as southern Africa, Central America, South America, and South Asia, are best
suited to the survival of the Anopheles mosquito; their climates are tropical and
subtropical. Although the disease can be deadly, the mortality rate is not high in
these areas because of the regularity of outbreaks; the populations have gained
immunity. These regions are known as endemic, or stable, regions. Unstable, epidemic
regions are those in which malaria is not common. Often they are areas where
the disease once thrived but has since been eradicated, such as North America and
Europe (Wiser, 2006, para. 1-4). Countries on these continents, however, became
more hygienic during their urbanization, while developing countries have not.
For this reason, the number of cases of malaria has steadily increased with the
urbanization of developing countries (Martens & Hall, 2000, para. 11-26).

Several populations are at risk of contracting malaria. First, of course, are
the populations living in endemic or epidemic regions. Nomadic refugees from
political catastrophes are also among those prone to becoming infected. Similarly,
people passing through airports may traverse dangerous areas; in 1984, a search at
Gatwick Airport revealed mosquitoes living in twelve of 67 planes arriving from
tropical regions (Martens & Hall, 2000, para. 11-26). Warm climates at the
destinations help the vectors, and the malaria they carry, survive the journey. Most
cases do occur in tropical and subtropical regions, but a real threat to human health
still exists in areas of mass human movement.

The Buxton Line is a notable boundary for malarial infection. This border
comprises three sections: south of the equator, the Buxton Line is east of 170°E and
south of 20°S; north of the equator but south of the Tropic of Cancer, it is east of
130°E; north of the equator and north of the Tropic of Cancer, the line is east of
135°E. While locations west of the Buxton Line, including New Guinea, Melanesia,
Indonesia, the Philippines, and northern Australia, are highly malarious, locations
east of the line are free of malaria vectors (Carter & Mendis, 2002, pp. 564-594).
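
As a rough illustration only (our own construction, not from Carter & Mendis), the sketch below encodes the three segments as a single longitude per latitude band, taking the Tropic of Cancer as 23.44°N and ignoring the 20°S qualifier of the southern segment; a coordinate west of the line falls on the side the chapter describes as highly malarious.

```python
TROPIC_OF_CANCER = 23.44  # degrees north (assumed value)

def buxton_line_longitude(lat: float) -> float:
    """Longitude (degrees east) of the Buxton Line at a given latitude,
    following the three segments described in the text."""
    if lat < 0:
        return 170.0  # south of the equator
    if lat < TROPIC_OF_CANCER:
        return 130.0  # between the equator and the Tropic of Cancer
    return 135.0      # north of the Tropic of Cancer

def west_of_buxton_line(lat: float, lon: float) -> bool:
    """True if (lat, lon) lies west of the line, i.e. in the
    historically malarious zone (longitudes in degrees east)."""
    return lon < buxton_line_longitude(lat)

# New Guinea (~6 S, 145 E) falls west of the line, as the text states.
print(west_of_buxton_line(-6.0, 145.0))  # True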

Immunization and Prevention

Unlike for some diseases, prevention of and immunization against malaria are
possible. There are three types of clinical immunity, two types of acquired immunity,
and one genetic predisposition for malarial immunity. Clinical immunity moderates
the disease once a person has already contracted it. The first type of clinical
immunity reduces the risk of death. The second reduces the severity of symptoms.
Antiparasitic immunity, the third type, reduces the number of harmful parasites in
the infected human. Frequent inoculations are necessary to support clinical immunity
to non-fatal symptoms. Acquired immunity affects the parasites directly once they
are in humans.

The first type of acquired immunity prevents sporozoites from reaching full
maturity, while the second protects against the stage in the Plasmodium life cycle in
which the parasite can be transmitted to another human should an Anopheles
mosquito bite. All of these types of immunity involve complex processes if continual
protection is to be granted. Each type of immunization is specific to one species of
Plasmodium, and immunizations must be reapplied relatively frequently to counter
the evolving parasites. The only natural immunization against malaria is carrying the
gene for sickle cell anemia (see the Malaria and Sickle Cell Disease section). Young
children generally cannot develop strong antimalarial immunity (Carter & Mendis,
2002, pp. 564-594). Instead, several prevention methods must be utilized.

As with many other diseases, the only real cure for malaria is to prevent it
before it begins. The main prevention method is avoiding contact with mosquitoes:
wearing long sleeves and pants when venturing outside, especially during summer
evenings, using insect repellent, and erecting mosquito nets. In addition, infected
persons should receive aggressive treatment immediately. This reduces the risk that
malarial Plasmodium will be transmitted should an Anopheles mosquito bite an
infected human. Taking antimalarial drugs before traveling to a location where
malaria is a great concern has also been shown to be beneficial (Parmet, 2007,
p. 2310). If these measures are taken, there is a great likelihood that malarial
infection can be prevented.

Treatment

Current drug treatments for malaria include chloroquine, sulfadoxine-pyrimethamine,
mefloquine, atovaquone-proguanil, quinine, doxycycline, and artemisinin derivatives.
In 2007, chloroquine was recommended by the National Vector Borne Disease Control
Programme as the first treatment response to combat over two million cases of
malaria in India, most of the P. falciparum strain. Despite its effectiveness,
chloroquine treatment is carefully monitored for any resistance shown by the
parasite, so that mutations can be kept in check (Sharma, 2007, para. 1). The second
drug of choice is sulfadoxine-pyrimethamine, which can be combined with a drug
known as artesunate for a more thorough extermination of the parasite and a halved
recovery time. However, these two anti-malarial drugs are more often given
independently of artesunate because they are cheaper: whereas chloroquine and
sulfadoxine-pyrimethamine can cost from 15 to 25 cents, artesunate, derived from
the semi-rare sweet wormwood, costs one dollar (Seppa, 2004, para. 1-6).
Other drugs are not as widely used. Mefloquine is used as a treatment for
infections by P. vivax. Quinine, derived from the bark of the cinchona tree, was one
of the more popular medications for many years. Its remedial properties were
first recorded in 1633, and it continues to be gathered naturally today as a cure for
many ailments (Kakkilaya, 2008, para. 1).

Malaria and Sickle Cell Disease

Sickle cell anemia is commonly associated with malaria because the sickle cell
allele in the heterozygous state provides natural protection against Plasmodium
infection. The recessive allele is ninety percent effective in protecting against
mortality over the course of a malarial infection. The sickle cell mutation
substitutes a valine for the glutamate at position six of a beta chain of hemoglobin.
Hemoglobin S in the homozygous form results in sickle cell anemia. The gene in the
heterozygous state is present in over thirty percent of the population in several
regions of Africa. In West Africa, children who are homozygous for the normal gene
have a fatality risk nine times that of children who are heterozygous for the sickle
cell gene. Heterozygotes are protected from malaria because they have both one
normal gene and one sickle cell gene.

The normal gene is dominant, so it allows the person to function normally.
Because the sickle cell gene is still present, it provides protection against malarial
infection. In addition to this protection, heterozygotes have normal levels of oxygen
in their bloodstream. Historically, few people with sickle cell anemia would have
reached puberty; today, however, people usually live to middle adulthood (Carter &
Mendis, 2002, pp. 564-594). Sickle cell anemia is not embraced as a cure or
treatment for malaria; sickle cell sufferers have their own set of difficulties.
However, a person born with the heterozygous trait has a remarkably low likelihood
of dying from malaria.
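
The protective effect of the heterozygous state follows simple Mendelian arithmetic. The sketch below (an illustration of standard genetics, not a calculation from Carter & Mendis) enumerates a Punnett square for two carrier parents, where "A" stands for the normal beta-globin allele and "S" for the sickle allele.

```python
from collections import Counter
from itertools import product

def offspring_genotypes(parent1: str, parent2: str) -> dict:
    """Enumerate all allele pairings and return genotype frequencies."""
    counts = Counter(
        "".join(sorted(a + b))  # normalize so "AS" and "SA" collapse
        for a, b in product(parent1, parent2)
    )
    total = sum(counts.values())
    return {genotype: n / total for genotype, n in counts.items()}

print(offspring_genotypes("AS", "AS"))
# {'AA': 0.25, 'AS': 0.5, 'SS': 0.25}: half of the offspring are the
# protected heterozygotes; a quarter are homozygous for sickle cell.
```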

History

Although all types of malaria are known throughout the world today, each strain
emerged individually, in a different location from the others. P. malariae was
probably the only strain of malaria to exist in prehistoric times. P. vivax most likely
escaped the borders of Africa 10,000 to 30,000 years ago. P. ovale expanded out of
Africa into New Guinea around the same time P. falciparum entered Africa:
approximately 4,000 years ago (Carter & Mendis, 2002, pp. 564-594). Clearly, there is
a great range in the times at which each strain became apparent.

There are several references to malaria in historical literature; this is one way
archaeologists and scientists know the approximate dates and locations of malarial
cases. The oldest text is the Nei Ching (Canon of Medicine) from China, dating to
about 4,700 years ago. The species of Plasmodium most likely involved was P.
vivax. Egyptian and Sumerian texts (4,000-3,500 years ago) mentioned what was
probably P. falciparum. P. malariae and P. vivax were noted in Vedic and Brahmanic
writings from 3,500 and 2,800 years ago, respectively, and P. falciparum spread to the
Indus Valley by 1000 BCE. Hippocrates (460-377 BCE) wrote about P. vivax, P.
malariae, and P. falciparum, although he did not call these strains by the names they
carry today.

Alexander the Great supposedly died from malaria in 323 BCE. By 100 BCE,
malaria had become known in the region of present-day Italy, but because of the
affluence of the Roman Empire between 50 BCE and 400 CE, the disease was
uncommon. The advent of the Dark Ages coincided with the reappearance of malaria,
namely P. vivax and P. falciparum. Throughout the first millennium of the Common
Era, malaria was prevalent in the Mediterranean basin, southern Europe, the Arabian
Peninsula, and throughout Asia, including China, Manchuria, Korea, and Japan. During
the Dark and Middle Ages, the infection spread to northern Europe (Carter & Mendis,
2002, pp. 564-594).

Malaria was not present in the Americas until the post-Columbian era (after
1492) (Carter & Mendis, 2002, pp. 564-594). When Europeans arrived in the Americas,
a virgin-soil epidemic occurred: malaria was introduced into a population that had no
previous exposure to it, leaving its members “immunologically defenseless” against
the Plasmodium parasites. Enormous numbers of Native Americans were killed as a
result (Crosby, 1993, pp. 11, 97). The importation of slaves from Africa in the
mid-1700s greatly increased natural inoculation rates. By 1850, malaria had been
introduced to all mainland regions of both Americas (Carter & Mendis, 2002).
Yanomamö populations in Brazil and Venezuela were still decreasing significantly in
the 1990s because of a combination of malaria, flu, measles, and chicken pox
(Crosby, 1993, p. 60).

Prior to the 1940s, malaria was a serious issue for Americans. In a 1916 journal
report, Dr. John Trask discussed the disease. Much of the land where malaria was
endemic lay in the southeastern United States, along with a small area of New
England. Fifty years earlier, the disease had occupied a much greater area, and
quinine was at that time a common home remedy. Malaria affected the quality of
work being produced because of the numbers of workers who became ill. Trask
concluded his paper by stating that malaria was a large and important problem with
regard to both national public health and the economy (1916, pp. 1290-1297).

An extremely pertinent organization grew out of the response to malaria in the
United States: the Centers for Disease Control and Prevention (CDC). Initially called
the Communicable Disease Center, it was a descendant of the Office of Malaria
Control in War Areas, established in 1942 during World War II. The Center was based
in Atlanta, close to the southeastern region of the United States where malaria was
endemic. The CDC headed the national eradication program, and by 1951 it had
declared malaria eradicated from the United States (“CDC’s origins and
malaria,” 2004, para. 1-2).

The final great breakthrough in the prevention of malaria was the pesticide
DDT. Its insecticidal properties were discovered in 1939 by Paul Müller, and it was
first used against malaria in 1944 in Italy (Lambert, 2008, para. 10). The emergence
of this pesticide fueled hopes of worldwide malaria eradication, but those hopes were
not realized, owing to many factors, including environmental influences and social
issues. The eradication project was abandoned in 1969 because of growing mosquito
resistance and environmental risks.
Ever since, the annual number of malaria cases has been on the rise. However,
renewed efforts in recent years by the National Malaria Control Programme have led
to a drastic reduction in the number of malaria cases in developing countries each
year, using simple methods such as netting and larval control. Malaria remains one of
the most prevalent diseases in the world, but the discovery of a vaccine seems
imminent.

Literature Cited

Anopheles mosquito. (2004). Retrieved April 4, 2008, from
http://www.cdc.gov/malaria/biology/mosquito/index.htm.

Carter, R. and Mendis, K.N. (2002). Evolutionary and Historical Aspects of the Burden
of Malaria [Electronic version]. Clinical Microbiology Reviews 15 (4), 564-594.

Cartwright, F. F., & Biddiss, M. D. (1972). Disease and history. New York: Dorset
Press.

CDC Malaria: Biology. (2004). Retrieved April 4, 2008, from
http://www.cdc.gov/malaria/biology/index.htm#parasite.

CDC Malaria: Disease. (2006). Retrieved April 4, 2008, from
http://www.cdc.gov/malaria/disease.htm.

CDC’s origins and malaria. (2004). Retrieved April 4, 2008, from
http://www.cdc.gov/malaria/history/history_cdc.htm.

Chamberlain, N. R. (2007). Malaria. Retrieved April 4, 2008, from
http://www.kcom.edu/faculty/chamberlain/Website/lectures/lecture/malaria.htm.

Crosby, A.W. (1993). Germs, seeds, and animals: studies in ecological history.
Armonk: M.E. Sharpe.

Kakkilaya, B. S. (2008). Quinine. Retrieved April 4, 2008, from
http://www.malariasite.com/MALARIA/quinine.htm

Lambert, Paul H. (2008). Malaria: past and present. Retrieved March 4, 2008, from
NobelPrize.org Web site:
http://nobelprize.org/educational_games/medicine/malaria/readmore/history.html

Life cycle of Plasmodium falciparum. (n.d.). Retrieved April 4, 2008, from
http://www.tigr.org/tdb/edb/pfdb/lifecycle.html.

Malaria facts. (2007, April 11). Retrieved March 4, 2008, from CDC Malaria Web
site: http://www.cdc.gov/malaria/facts.htm

Martens, P., and Hall, L. (2000). Malaria on the move: Human population movement
and malaria transmission. Retrieved April 4, 2008, from
http://www.cdc.gov/ncidod/eid/vol6no2/martens.htm.

McGrew, R. E., and McGrew, M. P. (1985). Encyclopedia of medical history (p. 165).
NY: McGraw-Hill Book Company.

Schema of the life cycle of malaria. (2004). Retrieved April 4, 2008, from
http://www.cdc.gov/malaria/biology/mosquito/index.htm.

Seppa, N. (2004). Malaria drug boosts recovery rates. Retrieved April 4, 2008,
from http://www.sciencenews.org/articles/20040207/note16.asp.

Sharma, V. P. (2007). Battling the malaria iceberg with chloroquine in India.
Malaria Journal, 6(105). Retrieved April 4, 2008, from
http://www.malariajournal.com/content/6/1/105.

Trask, J. W. (1916). Malaria as a public health and economic problem in the United
States. American Journal of Public Health, 6(12), 1290-1297. Retrieved April 4,
2008, from PubMed.

WHO: Life cycle of Plasmodium falciparum. (n.d.). Retrieved April 4, 2008, from
http://www.searo.who.int/en/Section10/Section21/Section340_4269.htm.

Wiser, M. F. (2006). Malaria. Retrieved April 4, 2008, from
http://www.tulane.edu/~wiser/protozoology/notes/malaria.html#epi.

Chapter 9
Polio

Connor Hughes, Jon Toohill, Jacob Kopczynski

History of Polio
At one time, polio epidemics hit all areas of America every July, August, and
September. People did not understand the symptoms of the disease or how it spread.
The first US polio outbreak occurred in Vermont in 1894, and the year 1916 saw the
first large polio epidemic, in New York City, with 9,000 infected and 2,343 dead.
Although another name for polio is infantile paralysis, 35% of those affected were
adults. The last naturally occurring US outbreak was in 1979 (Whatever happened,
n.d., Communities section, p. 1).

Polio patients faced hardships and discrimination, and, as a result, they were
very outspoken about disability rights, believing that disabilities went beyond medical
problems. The Americans with Disabilities Act (ADA), a major piece of legislation that
created comprehensive legal protection for disabled individuals, was passed in 1990.
Polio survivors also helped bring about the 1973 Rehabilitation Act, which moved the
focus of rehabilitation from military patients alone to all disabled persons. Polio also
caused public health officials to rethink vaccination, resulting in the 1962 Vaccination
Assistance Act, which gave states 36 million dollars toward free vaccines and was
later transformed by the CDC into an infant immunization week that has occurred
each year since 1977 (Whatever happened, n.d., How polio changed us section, p. 1-3).

Many charitable organizations had their roots in the polio epidemic. Franklin
Roosevelt, after contracting polio at age 39 in 1921, coordinated Birthday Balls each
January that raised money for polio care; these led to the founding of the National
Foundation for Infantile Paralysis, which became the March of Dimes in 1938. It was a
grassroots organization that collected small donations from millions of people for
polio care, prevention, and treatment, and it supported Jonas Salk, Albert Sabin, and
other scientists working on the polio vaccine. Universal Design, or Design for All, grew
out of a 1960s and 1970s political movement geared toward equitable use of public
space; it encouraged people to follow federal handicap guidelines in building design
(Whatever happened, n.d., How polio changed us section, p. 5-6).

Causative Agent

Polio is a contagious disease caused by a group of viruses called polioviruses
(Polio, 2007, Introduction section, p. 1), which reside only in humans and spread
primarily through the oral-fecal route in unsanitary locations. The virus may also be
physically transported to food by flies (Polio, 2007, Causes section, p. 1). Polioviruses
remain active for long periods in food and water, and are often contracted after
these are contaminated with virus-carrying feces (Tortora, Funke, and Case, 2004,
p. 625). Receiving a live vaccine can also, in rare cases, cause polio. People living
with someone infected with polio are also likely to contract the disease, especially
during the two to three weeks in which the majority of symptoms occur (Polio, 2007,
Causes section, p. 1).

Polio replicates in the throat and intestines, causing early symptoms of
sore throat and nausea, and enters the blood and lymph, causing viremia. Usually the
viremia is only transient, and the disease ends in the lymphatic stage before it
reaches a clinical level. If persistent viremia occurs, poliovirus can travel to the
nervous system. There, it infects motor neurons in the spinal cord and brain stem.
As the virus multiplies in the nerve cell cytoplasm, the cells die, causing paralysis in
the host (Tortora et al., 2004, p. 625).

Poliovirus was discovered in 1908 by Karl Landsteiner and Erwin Popper. James
Hogle obtained the first 3D image of poliovirus through X-ray crystallography. In
1981, both Vincent Racaniello, working with David Baltimore, and Eckard Wimmer’s
team determined the genome of the poliovirus. A DNA version of the poliovirus
was later synthesized to help with creating new vaccines and therapy systems
(Whatever happened, n.d., The Polio Genome section, p. 1).

Symptoms

Most cases of polio do not exhibit severe symptoms, allowing those infected to
transmit the disease to others without knowing it. The symptoms of this type of polio,
nonparalytic polio (abortive poliomyelitis), are similar to those of other viral diseases
and include flu-like complaints such as sore throat, nausea, vomiting, and diarrhea
(Polio, 2007, Symptoms section, p. 1). Nonparalytic polio is common in the very
young, especially infants in areas with poor sanitation who are still under the
protection of their mother’s antibodies (Tortora et al., 2004, p. 625).

In others, the poliovirus causes nonparalytic aseptic meningitis, whose symptoms
include fever, headaches, vomiting, diarrhea, drowsiness, pain and stiffness in the
back, neck, arms, and legs, and muscle spasms (Polio, 2007, Symptoms section, p. 1).

The most severe type of polio is paralytic polio. The first symptom of
paralytic polio is a fever; about one week later, other symptoms such as headache,
neck and back stiffness, constipation, and increased contact sensitivity occur. Acute
flaccid paralysis, the severe weakening of muscle groups due to motor neuron
damage, occurs rapidly at this point and usually affects one side of the body more
severely (Polio, 2007, Symptoms section, p. 1).

One of the three subtypes of paralytic polio is spinal polio, in which the virus
affects motor neurons in the spinal cord. This leads to paralysis of the limbs and the
muscles that regulate breathing. Victims retain feeling in these areas, but the damage
is permanent if the neurons are completely destroyed, although some motor control
may return. Bulbar paralytic polio affects the same type of neurons in the brain
stem, impairing the ability to see, hear, smell, taste, and swallow. The effect of
this type of polio on heart and lung muscles can lead to death unless the patient
receives respiratory aid. Bulbospinal paralytic polio is a combination of the two and
can affect limb, heart, lung, and swallowing muscles (Polio, 2007, Symptoms section,
p. 1).

Severe cases of paralytic polio often leave permanent disabilities caused by the
damage to motor neurons. Other complications include pulmonary edema, in which
the lung air sacs fill with fluid and cannot collect oxygen; aspiration pneumonia, in
which inhaled stomach contents inflame the lungs; urinary tract infections; kidney
stones; myocarditis, an inflammation of heart muscle; and cor pulmonale, in which
the heart cannot supply enough pressure to counteract high blood pressure in the
lungs (Polio, 2007, Complications section, p. 1). Death from polio occurs from
respiratory or cardiovascular paralysis (Tortora et al., 2004, p. 625).

Diagnosis

Diagnosis of polio often occurs through recognition of early symptoms. The
diagnosis is often confirmed by testing a sample of throat secretions, stool, or
cerebrospinal fluid for the presence of poliovirus (Polio, 2007, Screening and Diagnosis
section, p. 1). Cerebrospinal fluid is obtained by spinal tap, also known as lumbar
puncture, a procedure originally created to relieve hydrocephalus. Tests have also
been developed for muscle strength, mobility of limbs and joints, and ease of
breathing (Whatever happened, n.d., The Medical Community section, p. 2). Once
poliovirus is isolated during diagnosis, it is possible to inoculate cell cultures and
observe cytopathic effects on the cells (Tortora et al., 2004, p. 625).

Treatment

Although no cure for polio exists, supplementary recovery treatment includes
antibiotics for secondary infections, analgesics, breathing-assistance devices,
exercise, and diet (Polio, 2007, Treatment section, p. 1). A new type of treatment
was developed by Elizabeth Kenny, who emigrated from Australia to the US in 1940
and brought the hot-pack, stretching, and muscle massage treatments that are part
of standard polio care today (Whatever happened, n.d., The Medical World section,
p. 2). The tank respirator, or iron lung, was used on patients who had paralyzed
muscle groups in the chest and had trouble breathing. Such patients often did not
survive until Philip Drinker and Louis Agassiz Shaw developed this device, which used
an electric motor to power two vacuum cleaners that pulled air in and out of the
lungs for up to two weeks until the patient regained motor control. The machine was
redesigned at half the cost by John Emerson and was first mass distributed in 1939
(Whatever happened, n.d., The Iron Lung and Other Equipment section, p. 1-2).

Risk Factors and Prevention

The most widespread preventive measure for polio is the polio vaccine. Polio
vaccination should occur first at the age of two months, again at four months, a third
time between six and 18 months, and with a booster shot between ages four and six
(Polio, 2007, Prevention section, p. 1). A simple encoding of this schedule appears
below.
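
The sketch below (the data structure and names are our own; the age windows follow the text, encoded in months) represents each dose as a window and lists the doses whose window has opened by a given age.

```python
# Dose windows in months of age, per the schedule described above.
POLIO_SCHEDULE = [
    ("dose 1", 2, 2),     # at two months
    ("dose 2", 4, 4),     # at four months
    ("dose 3", 6, 18),    # between six and 18 months
    ("booster", 48, 72),  # between ages four and six, in months
]

def doses_due_by(age_months: float) -> list[str]:
    """Doses whose earliest administration age has been reached."""
    return [name for name, earliest, _latest in POLIO_SCHEDULE
            if age_months >= earliest]

print(doses_due_by(12))  # ['dose 1', 'dose 2', 'dose 3']
```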

Aside from lack of immunization, risk factors include traveling to the site of a
recent or current outbreak, living with an infected or recently vaccinated person,
handling live poliovirus specimens, a weakened immune system, oral or throat
surgery, and high levels of stress or physical activity after exposure. The risk factors
for contracting the paralytic form are a weakened immune system, pregnancy, oral or
throat surgery, and injury or physical strain after contraction (Polio, 2007, Risk
Factors section, p. 1).

In the past, polio outbreaks often led to quarantines on victims’ homes and
restrictions on travel and commerce between cities to prevent further spread.
Sometimes those infected were quarantined in a hospital. Children who contracted
polio experienced mandatory separation from their families for up to two weeks and
could not return until months after contraction. Strict discipline and orderly patient
regimens prevailed in most hospitals treating polio, and, despite precautions, many
nurses and doctors treating polio patients contracted the virus themselves (Whatever
happened, n.d., American Epidemics section, p. 1-3).

Immunization

Two types of immunization for polio have been developed. Inactivated polio
vaccines (IPVs), like the 1954 Salk vaccine, use formalin-treated inactive viruses.
These require a series of injections as well as booster shots every few years and are
90% effective against paralytic polio (Tortora et al., 2004, p. 625). The subjects of
the first clinical trials of polio vaccines were heralded as “Polio Pioneers” (Whatever
happened, n.d., Clinical Trials section, p. 1). The study involved giving some children
the actual vaccine and others a placebo. Enhanced inactivated polio vaccine (E-IPV)
is produced from human diploid cell cultures (Tortora et al., 2004, p. 625).

Oral polio vaccine (OPV), introduced as the Sabin vaccine in 1963, contains
three live, attenuated strains of the virus. It is cheaper to administer, more
comfortable than injections, and confers immunity similar to that of natural
infection. The problem, however, is that secondary transfer of the live type 3 virus
may occur for 10-16 years after vaccination, and the virus may revert to virulence
and cause polio (Tortora et al., 2004, p. 626). The Sabin vaccine was the world’s
leading vaccine between 1963 and 1999, after which the US returned to the Salk
vaccine. Sabin was able to test his vaccine in Russia despite the Cold War because
fighting polio took precedence (Whatever happened, n.d., Two Vaccines section,
p. 2).
The CDC has decided that the OPV is too dangerous for routine immunizations
and recommends its use only for outbreaks and special cases in which people are
traveling to infected areas or did not receive shots on schedule (Tortora, Funke, and
Case, 2004, p. 626).

Producing polio vaccine requires large amounts of the virus, which was initially
grown in embryonic brain tissue. John Enders, Thomas Weller, and Frederick Robbins
solved the production problem by proving that poliovirus could also grow in other
tissues (Whatever happened, n.d., History of Vaccines section, p. 3).

Post-Polio Syndrome

Post-polio syndrome (PPS) occurs in survivors of acute poliomyelitis between 10
and 40 years after the original symptoms. The major effect of PPS is muscle weakness
in areas both affected and unaffected by the original infection. It may also cause
fatigue, muscle atrophy, degeneration of joints, aches, loss of control over breathing
and swallowing, sleep apnea and related disorders, heightened sensitivity to cold,
and musculoskeletal deformities such as scoliosis (Polio, 2007, Symptoms section,
p. 1). Dangerous complications of PPS include under-ventilation due to lung muscle
weakness and aspiration pneumonia from weakness in the swallowing muscles. PPS
severity often correlates with the severity of the original disease. PPS is not
contagious (Post-polio syndrome, 2007, What is post-polio syndrome? section,
para. 1-5).

When polio damages motor neurons, neighboring neurons can repair the
connection by developing new axons, but the strain this places on the neurons may
cause them to deteriorate, possibly leading to the symptoms of post-polio syndrome.
The risk factors for post-polio syndrome may include severe polio infection, infection
as an adult, physical activity, and a high degree of recovery. There is no known
method for preventing PPS (Post-polio syndrome, 2007, What causes PPS? section,
para. 1-3).

PPS is diagnosed by the following criteria: previous paralytic polio with signs of
residual damage, recovery from polio for at least 15 years, slow development of new
muscle weakness and other symptoms persisting for at least one year, and no
evidence of other medical problems that could cause similar symptoms. Medical
diagnosis methods include magnetic resonance imaging (MRI), computed tomography
(CT), neuroimaging, electrophysiological tests, muscle biopsy, and spinal fluid tests
(Post-polio syndrome, 2007, How is PPS diagnosed? section, para. 1-6). The criteria
can be read as a simple checklist, sketched below.
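
The sketch below (illustrative only, not a clinical tool; the parameter names are our own) encodes the four criteria as boolean checks that must all hold.

```python
def meets_pps_criteria(prior_paralytic_polio: bool,
                       years_since_recovery: float,
                       months_of_new_weakness: float,
                       other_cause_found: bool) -> bool:
    """Apply the four diagnostic criteria listed in the text."""
    return (prior_paralytic_polio            # residual damage from polio
            and years_since_recovery >= 15   # at least 15 years of recovery
            and months_of_new_weakness >= 12 # new symptoms persist a year
            and not other_cause_found)       # no alternative explanation

# Example: 30 years post-recovery, 18 months of new weakness, and no
# alternative diagnosis satisfies all four criteria.
print(meets_pps_criteria(True, 30, 18, False))  # True
```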

Treatment for PPS includes non-fatiguing exercise, but there is no real cure.
Prescribed exercise allows the patient to focus on specific muscle groups with
targeted exercises (Post-polio syndrome, 2007, How is PPS treated? section, p. 1).

The Future of Polio

Although polio has been eliminated in most of North America, Europe, and Asia
through the widespread use of vaccines, it is still found in developing countries in
Africa and the Middle East. Current methods for combating the poliovirus focus on
eliminating the wild-type virus in these regions through active acute flaccid paralysis
(AFP) surveillance networks. In such a network, implemented for example in Malta,
members of the population notify medical authorities, who can then make a diagnosis
and check records to ensure that those in contact with the individual have been
vaccinated. In Malta, this approach succeeded in notifying the authorities about AFP
cases in time to make diagnoses and distribute appropriate vaccine coverage (Muscat
et al., 2001, p. 1057-1060).

Literature Cited

Muscat, M., Fiore, L., Busuttil, R., & Gilles, H. M. (2001). Surveillance of wild
polioviruses in patients with acute flaccid paralysis in Malta during 1998 and 1999.
European Journal of Epidemiology, 16, 1057-1060.

Polio. (2007, March 5). Retrieved March 4, 2008, from MayoClinic Web site:
http://www.mayoclinic.com/health/polio/DS00572.

Post-polio syndrome. (2007, December 13). Retrieved April 1, 2008, from National
Institute of Neurological Disorders and Stroke Web site:
http://www.ninds.nih.gov/disorders/post_polio/detail_post_polio.htm

Tortora, G., Funke, B., & Case, C. (2004). Microbiology: An introduction. San
Francisco, CA: Pearson Education, Inc.

Whatever happened to polio? (n.d.) Retrieved April 3, 2008, from American History
Web site: http://americanhistory.si.edu/polio/americanepi/index.htm

Chapter 10
Smallpox

Maija Mednieks, Nikhil Thorat, and Neil Cheney

Introduction

Smallpox is caused by the variola virus, a member of the genus Orthopoxvirus,
which also includes monkeypox, cowpox, and related poxviruses such as those causing
orf and molluscum contagiosum. With a fatality rate of approximately 30%, smallpox
is the deadliest of these viruses. All of them are characterized by an outbreak of
spots. Owing to the deadliness of the disease, as well as the permanent scarring
caused by the rash, mankind has worked to eradicate it (“Smallpox Overview,” 2004,
para. 2). This effort has proved successful, and the last naturally occurring case of
smallpox was documented in Somalia in 1977 (Koplow, 2003, p. 17).

Immunization

Smallpox no longer occurs in nature, not because of a treatment, but because
of a system of vaccination. After World War II, the World Health Organization
resolved to eradicate the disease once and for all, and vaccinations were
administered worldwide (Henderson, 1977, p. 83-84). Unfortunately, this system was
highly inefficient, and many people objected to mandatory vaccinations. After
deliberation, the vaccine was given only to those who had come into contact with an
infected person. This method was called ring vaccination because it immunized the
ring of people around each case who were in danger of contracting the disease,
while being far less objectionable than compulsory injection. The effectiveness of
this system is proven by the successful eradication of smallpox. Today, the only
remaining viruses are in two laboratory stockpiles (Koplow, 2003, p. 38).

There is some fear that smallpox from the laboratory stockpiles or an unknown
source will be used as a biological weapon (“Smallpox Overview,” 2004, para. 4). Any
occurrence of smallpox would be considered unnatural and an international cause for
concern. However, the fear of a large-scale outbreak of smallpox in the United
States is unfounded because the Centers for Disease Control and Prevention possess
more than enough vaccine to immunize the entire population of the United States
(“Vaccine Overview,” 2004, para. 2). The arguments for destroying the stockpiles,
based on the potential for biological terrorism, are likewise weak: smallpox could
still be recovered from infected cadavers buried in permafrost, and monkeypox, a
related disease, could be altered to be identical to smallpox in both etiology and
method of transmission (Joklik, 1993, p. 1225).

The smallpox vaccine is unlike other vaccines in that it is given with a
bifurcated needle and contains a live pox virus. A bifurcated needle is a two-pronged
needle that is dipped into the vaccine solution and then inserted into the arm
repeatedly over a small area. After the vaccination, the site of the injection turns
red and forms a scab. Contact with this area followed by contact with other areas of
the body causes pox to spread to those areas. For that reason, it is recommended
that children not be vaccinated in a non-emergency situation. Others who should
avoid the vaccine are pregnant women, infants, and people with skin or heart
conditions. Since smallpox has been eradicated, routine vaccinations have been
stopped; however, scientists working with the disease continue to be vaccinated
(“Vaccine Overview,” 2004, para. 4).

Treatment

Currently there is no treatment for smallpox, nor is developing one a high
priority. Before the eradication of smallpox, the only treatment was to vaccinate a
person who was in the early stages of infection; vaccination decreases the severity of
the infection and the likelihood of death. In the foreseeable future, a smallpox
treatment will not be developed because smallpox is no longer present in the world
(“Smallpox Overview,” 2004, para. 3).

Mode of Transmission

Transmission of smallpox requires extended contact with an infected object or
individual. Even though smallpox has a history of being used as a biological weapon,
the most efficient method of spreading the disease is via blankets or other linens
infected with the disease (Knollenburg, 1954, p. 498). The other means by which
smallpox spreads is prolonged proximity to an infected person. The victim is most
contagious after the rash appears, so the smallpox infection would be clearly
evident. Furthermore, no animals carry smallpox, which not only eliminates animals
as potential carriers but also enabled the eradication of the virus (Henderson, 1977,
p. 86).

Morphology

Smallpox has two strains: Variola major and Variola minor. V. major is the
deadlier form of smallpox and is more common. There are four types of V. major:
flat, hemorrhagic, ordinary, and modified. The flat and hemorrhagic varieties are the
most severe and least common forms; ordinary is the most common type; and the
modified version occurs in people who have already been vaccinated. V. minor is
less severe and less common. The fatality rate for V. major is approximately thirty
percent, while that of V. minor is approximately one percent (“Smallpox Overview,”
2004, para. 2).

Etiology

Smallpox infection has several distinct phases, each with different
characteristics and levels of contagiousness. During the incubation period of the
disease, seven to seventeen days, a person is not infectious. Then comes the onset
of the initial symptoms, lasting two to four days, during which a person is slightly
infectious and has symptoms such as high fever, malaise, vomiting, and aches. At the
initial outbreak of the rash, and for approximately four days afterward, smallpox is
most infectious. The rash spreads to the remainder of the body, and the infected
person may begin to feel better. The spots become raised bumps filled with pus.

This becomes a pustular rash, which lasts for approximately four days. The
pustules become hard, as if a BB or other small object were embedded in the skin.
At this point the infected person is still contagious, but not as much as when the rash
began. Scabs begin to form over the pustules, which takes approximately five days;
the scabs then start to fall off. When the last scab falls off, the disease is gone and
the infected person is no longer contagious (“Smallpox Overview,” 2004). It is
impossible to catch smallpox twice, so the formerly infected person is now immune
(Koplow, 2003, p. 29).
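
The phase durations and contagiousness levels above can be laid out as a small table. The sketch below (our own arrangement of the figures quoted from “Smallpox Overview,” 2004; the contagiousness labels paraphrase the text) prints one line per phase.

```python
# (phase, approximate duration in days, level of contagiousness)
SMALLPOX_PHASES = [
    ("incubation",       "7-17",   "not contagious"),
    ("initial symptoms", "2-4",    "slightly contagious"),
    ("early rash",       "~4",     "most contagious"),
    ("pustular rash",    "~4",     "contagious"),
    ("scab formation",   "~5",     "contagious"),
    ("scabs resolving",  "varies", "contagious until the last scab falls"),
]

for phase, days, level in SMALLPOX_PHASES:
    print(f"{phase:<17} {days:>6} days   {level}")
```
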
Population Most Likely Affected

Currently no population is affected by smallpox. The only people considered
at higher risk of contracting smallpox are the scientists who work with the stockpiles
of the virus (“Vaccine Overview,” 2004, para. 4). In the unlikely event of a
large-scale smallpox outbreak, it would be the people in less developed countries
who would be at the greatest risk of contracting and dying from smallpox.

Impact on History

Smallpox is ancient, with records dating back thousands of years. The first
evidence of the virus consists of lesions found on the mummy of Pharaoh Ramses V,
who lived around 1150 BC. Smallpox first spread to Europe somewhere between the
5th and 7th centuries and became a major epidemic during the era of the returning
crusaders (eMedicineHealth, 2005, p. 1). The disease quickly spread through Asia and
Africa, and by 1518 it had spread to the Americas. Infected crew members from a
Spanish ship that landed on the present-day Dominican Republic and Haiti seeded the
disease in the new land, where it spread and killed most of the native population.

The most infamous historical use of smallpox is associated with Lord Jeffery
Amherst. During the French and Indian War, it was suggested that the colonists
spread smallpox to thin the native population. However, there is little evidence that
Amherst himself actually enacted this plan. The reason the scheme is most often
attributed to him is that he mentioned the idea in correspondence with Henry
Bouquet. At the time the suggestion was made, Fort Pitt was under siege, and
spreading the disease was a possible strategy for relieving it. Unknown to Amherst
and Bouquet, the men at Fort Pitt had already attempted this tactic. It is unknown
how effective the attempt was, but it has become an infamous event in North
American history. Smallpox was the first agent of germ warfare (Knollenburg, 1954,
p. 489-494).

Smallpox has a repetitive history in both Europe and the Americas. In any
sufficiently large population, such as that of a city, the smallpox virus would infect
the majority of the population; some would die, and the rest would be immune.
After a new generation was born, the disease would resurface. Smallpox primarily
affected children, but only because of the lifelong immunity granted by surviving the
disease. Those who had already survived an outbreak of smallpox, recognizable by
their scars, were often desirable to employ as caretakers for children, the reasoning
being that people who had already had smallpox could not be carriers of the disease
(Koplow, 2003, p. 29).

In 1796, Edward Jenner proposed a method to prevent smallpox that entailed
injecting a small amount of the cowpox virus into a subject. Since cowpox is a
relative of smallpox that causes only mild disease in humans, the human body builds
up immunity to the family of viruses. Jenner tested his theory on a boy and, after
obtaining successful results, developed the smallpox vaccine, whose name derives
from vacca, the Latin word for cow. Smallpox cases were eventually eliminated, and
the last natural occurrence of the virus was on October 26, 1977. Since then, there
have been only two cases of laboratory-mediated contraction of the disease.
Currently, the United States of America and Russia are the only countries with
laboratories containing smallpox.

In 1967, the World Health Organization issued a statement declaring its intent
to rid the world of smallpox within ten years. At the end of 1976, this goal appeared
within reach, as there were only two known cases of smallpox in the world. Still,
vigilance was necessary to ensure that no new cases of smallpox were occurring.
Anyone with a rash or fever was checked by the medical staff working for the WHO.
Although the initiative was global, the clinics operated at a more local level to keep
watch over the population itself (Henderson, 1977, p. 84).

Literature Cited

American Medical Association. (1999, June 9). Smallpox as a biological weapon:
Consensus statement, pp. 1-11.

eMedicineHealth. (2005, October 31). Smallpox. Retrieved March 28, 2008, from
eMedicineHealth: http://www.emedicinehealth.com/smallpox/article_em.htm

MSN Encarta. (2008, April 1). Smallpox through history. Retrieved April 5, 2008, from
MSN Encarta: http://encarta.msn.com/media_701508643/smallpox_through_history.html

National Institute of Allergy and Infectious Diseases. (2007, December 17). Smallpox.
Retrieved March 30, 2008, from Medline Plus:
http://www.nlm.nih.gov/medlineplus/smallpox.html

Utah Department of Health. (2008, December). Smallpox (Variola). Retrieved
April 1, 2008, from Office of Epidemiology:
http://health.utah.gov/epi/fact_sheets/smallpox.html

Chapter 11
Syphilis
Israel-Marc Kositsky, Sarah Erickson, and Shawn Wise

Introduction

“The first fruit the Spaniards brought to the New World was syphilis.” – Voltaire

It has been known as “the French disease” to the Italians, “the Italian disease”
to the French, “the Spanish disease” to the Dutch, and “French pox” to the British.
Today, the medical community has agreed to call it syphilis, a term first coined in
1530 by Girolamo Fracastoro, an Italian physician, poet, and astronomer.

Syphilis is a contagious venereal disease that is transmitted either through
direct contact or from a pregnant mother to her child. Although the origins of this
bacterial infection are highly disputed, most theories concur that it was introduced
to Europe in the late fifteenth century.

History

The major outbreak of syphilis in Europe occurred in 1494, almost a year after
Columbus had returned from the New World. Because Columbus brought back several
natives from the New World, the transfer of syphilis to Europe through his voyage is
a plausible theory of its origin. This theory provides a reasonable explanation of why
there were no major European outbreaks of syphilis before the late 1400s.
Furthermore, it explains why evergreen trees native to the West Indies and South
America, such as Guaiacum officinale and Guaiacum sanctum, have been used to
treat the ailment. Critics of this theory, however, point out that Columbus’s return
voyage was unmarked by any major ailments. Moreover, the natives were shown
naked to the Spanish court, where the infamous syphilitic skin rash would certainly
have been noticed. As for the trees, they were simply shipped to Europe under the
pretense that they cured the disease; they did not.

A second theory maintains that syphilis has its roots in Africa. This is a logical
conclusion because yaws, a disease bacterially indistinguishable from syphilis, has
plagued the African continent for centuries. Like syphilis, yaws causes a horrible skin
rash and has frequently been found among children who play together naked.
However, unlike syphilis, yaws usually appears in hot climates, such as Africa’s, but
not in Europe. Some experts believe that yaws and syphilis are different
manifestations of the same disease, but this conclusion has yet to be verified. It is
also believed that syphilis migrated to Europe directly from Africa during the Spanish
and Portuguese slave trade. Although this theory is conceivable, a third theory
combines the two former ones to produce an interesting explanation of the origin of
syphilis (Cartwright, 1972, pp.58-61).

This third theory suggests that syphilis did, in fact, originate in Africa, possibly in the
form of yaws. The disease then made its way to the New World during the slave
trade, where the local population contracted it. Syphilis then set sail for Europe on
the third leg of the slave trade triangle, arriving at European harbors just in time for
the 1494 outbreak. This theory combines the most sensible elements of the previous
two.

Syphilis has been given such a multitude of names because no one can confirm
the precise location of its origin. However, there is a possible explanation for this
abundance of names. At the time of the syphilis epidemic in Europe, Charles VIII,
the French king, was leading a military campaign against Naples. Soldiers from many
nations fought in the Italian Wars, including men from France, Spain, Portugal, and
several Germanic states. Some of those soldiers brought the disease from their
native lands and spread it amongst each other. After the campaign, which France
lost, these men returned home with syphilis, spreading the disease across Europe
and starting an epidemic. Hence, the Italians call syphilis “the French disease,”
while the French call it “the Italian disease.” It also cannot be excluded that
syphilis contributed to the French defeat at Naples (Cartwright, 1972, pp.62-64).

During the sixteenth century, numerous accounts of the effects of syphilis were
recorded. These records all describe the first phase of syphilis similarly: quick
moving, highly contagious, and causing dementia and malformed limbs. During the
1700s, the depictions of syphilis became less frightening. This caused the disease to
be confused more frequently with gonorrhea, so fewer cases were recorded (McGrew,
1985, p. 332).
During World War I, syphilis prevention and education programs were set up to
inform the general public about the disease, and a vigilant watch was kept for those
who had the disease and needed quick treatment. The decline in cases of syphilis
continued until the 1950s, when the trend reversed. Syphilis has been on the rise
ever since, a rise attributed to the increasing cost of health insurance and a lack of
safe sex practices. If this trend holds, the incidence of syphilis will only continue to
increase (McGrew, 1985, p. 332).

Causative Agent

Syphilis is a bacterial infection that is usually transmitted through sexual
contact. The bacteria, Treponema pallidum, enter the body through mucous
membranes, sometimes breaking down the membranes of the skin. The narrow
spirochete (corkscrew-shaped) bacteria cannot be seen by regular light microscopy;
techniques such as electron microscopy, phase-contrast or dark-field microscopy,
and various staining procedures must be employed to view them. The complete
genome of the syphilis bacterium reveals that it lacks major metabolic functions and
is thus highly dependent on its host for nutrition. Moreover, T. pallidum is highly
sensitive to physical and environmental agents.

The causative agent belongs to a family of closely related bacteria, but each
member behaves differently and causes a different ailment. T. pallidum pallidum is
the subspecies of T. pallidum that causes syphilis. Its cousin, T. pallidum pertenue,
causes yaws. Other subspecies of T. pallidum include T. pallidum carateum and T.
pallidum endemicum, which cause the diseases known as pinta and bejel,
respectively. The four organisms are practically indistinguishable from one another,
yet T. pallidum pallidum is the only one transferred through sexual contact (Sci-Tech
Ency., 2008, para. 1-4).

Etiology

Syphilis is diagnosed via a blood test for the presence of antibodies produced
in response to the infection. The disease itself is characterized by four main stages:
primary, secondary, latent, and tertiary, with the last two often occurring
simultaneously. Entry into the body is not always noticed by the patient, and the
disease has an incubation period ranging from ten to 90 days. Following this period,
a chancre (a small ulcer) appears at the site of contact, usually in the vicinity of the
genital area, although in some cases the blister appears on other body parts, such as
the mouth or breasts. Because it is not painful, the chancre is often overlooked, and
the disease can progress to the secondary stage without the patient’s knowledge of
having it. After a period of three to six weeks, the ulcer disappears altogether, and
the disease progresses to the secondary stage if proper treatment is not
administered. It should be noted that some patients may develop swollen lymph
nodes in close proximity to the chancre (Ency. of Alt. Medicine, 2008, para. 9).

The secondary stage may begin from six weeks to six months following the
infection. It is characterized by a severe yet painless skin rash, originating on the
palms of the hands and the soles of the feet and then spreading throughout the rest
of the body. Additional chancres may also appear. Syphilis is often known as “the
great mimic” because its symptoms are often confused with those of other ailments;
the ensuing rash resembles skin disorders caused by ringworm, mononucleosis,
pityriasis rosea, and drug reactions. This rash may last up to one year and is highly
contagious. As in the primary stage, some patients may develop swollen lymph
nodes, while others may develop inflammation of the eyes, kidneys, spleen, liver,
joints, and bones.

Other symptoms of secondary-stage syphilis include a loss of appetite,
headaches, runny nose, sore throat, aching joints, and a constant overall ill feeling
(Ency. of Alt. Medicine, 2008, para. 10).

Following the secondary stage, a patient enters the latent stage of syphilis, which
can last years without the patient’s knowledge. There are very few external signs or
symptoms of the disease, but it is, nevertheless, active and contagious. Pregnant
women in the latent stage risk transferring the disease to their children, and patients
who engage in sexual activity risk transferring the disease to their partners. Skin
rashes and ulcers may reappear during the latent stage.

The latent stage is divided into early latency and late latency. Early latency
comprises the first two years following infection, and late latency is anything
thereafter. It is generally agreed that the risk of transmitting the disease is
significantly lower in late latency than in early latency. The two latent stages also
differ in their treatments; the early stage requires small dosages of penicillin,
whereas the later stage requires much greater amounts (Ency. of Alt. Medicine,
2008, para. 11).

If the disease remains untreated, it may progress to the tertiary stage, which
occurs in approximately 35% of syphilis patients. The tertiary stage is the most
dangerous stage of the ailment; the only consolation is that the disease can no longer
be transmitted to others. Some patients are lucky and develop benign late syphilis,
in which tumor-like growths form on the skin and other areas of the body, such as
the long bones, eyes, throat, liver, and stomach lining. These growths, known as
gummas, can be easily treated with penicillin and have become a rarity among
syphilis patients.

Syphilis in the tertiary stage affects many vital body functions and may lead to
lasting damage or death. Approximately 10% of tertiary syphilis patients develop
cardiovascular syphilis, which often occurs in conjunction with neurosyphilis.
Cardiovascular syphilis causes an inflammation of the arteries that may lead to a
heart attack, congestive heart failure, scarring of the aortic valves, or the formation
of aortic aneurysm. It usually happens between 10 and 25 years after infection.
Approximately 8% of patients with tertiary syphilis develop neurosyphilis, a fatal
condition in which patients often experience not only physical symptoms but also
psychiatric ones.

Neurosyphilis is more common among men and Caucasians, and usually occurs
anytime from five to 35 years after infection. This subset of syphilis is classified into
four types: asymptomatic, meningovascular, tabes dorsalis, and general paresis. In
asymptomatic neurosyphilis there are abnormal test results for the spinal fluid, but
there are no symptoms that affect the central nervous system. Meningovascular
neurosyphilis is characterized by a change in size of the blood vessels in the brain,
and the meninges (vital tissues surrounding the brain and spinal cord) may become
irritated. Patients with meningovascular neurosyphilis may develop headaches,
irritability, visual problems, and weakness of the upper arm muscles.

Tabes dorsalis, known also as locomotor ataxia, is a progressive degeneration
of the spinal cord and nerve roots. A person with locomotor ataxia is likely to
experience a loss of perception of body position and orientation. Patients sometimes
experience pains in their legs and abdomen, as well as throat, bladder, or rectum.
General paresis, also called dementia paralytica, is the set of effects of neurosyphilis
on the brain cortex. These effects are highly unpleasant and may include progressive
memory loss, a decreased ability to concentrate, less interest in self-care,
depression, delusions of grandeur, or complete psychosis (Ency. of Alt. Medicine,
2008, para.12).

Affected Populations

Today, syphilis does not pose such an extreme threat as it did in 1494. In fact,
75% of countries did not report a single syphilis case in 2006. The majority of people
with syphilis are sexually active individuals between 15 and 30 years of age, and
about 60% of syphilis patients are men (Gallagher, 2007, para.2). Homosexual men
have a high risk of contracting the disease. In 2006, the rate of syphilis among
African-Americans was almost six times that of whites (STD stats, 2008,
para.15-16).
Treatments and Immunization

The standard treatment for all stages of syphilis is penicillin G, but dosage may
vary among individuals. It is interesting to note that the effects of penicillin on
syphilis patients who also have human immunodeficiency virus (HIV) are no different
than the effects on patients with just syphilis (Rolfs et al., 1997, p.1). If a patient is
allergic to penicillin, then other antibiotics such as erythromycin and tetracycline are
used. Pregnant women are treated with parenteral penicillin G. To date, a vaccine
for syphilis has not been created, although the genome of the causative agent has been
known for a decade. Moreover, syphilis patients do not acquire a lifelong immunity to
the disease (Ency. of Alt. Medicine, 2008, para.38-41).

Literature Cited

Cartwright, F. F. (1972). Disease and History. New York: Dorset Press.

Gallagher, K. (2007, October 2). Who is affected by syphilis. Retrieved April 2, 2008, from
Yahoo! Health: http://health.yahoo.com/sexualhealth-resources/who-is-affected-by-
syphilis/healthwise--hw195583.html

McGrew, R. E. (1985). Encyclopedia of Medical History. New York: Taylor Publishing.

Rolfs, R., Joesoef, M., Hendershot, E., Rompalo, A., Augenbraun, M., et al. (1997). A
Randomized Trial of Enhanced Therapy for Early Syphilis in Patients with and without
Human Immunodeficiency Virus Infection. New England Journal of Medicine, 337, 307-314.

Syphilis - CDC Fact Sheet. (n.d.). Retrieved March 25, 2007, from CDC:
http://www.cdc.gov/STD/Syphilis/STDFact-Syphilis.htm

Syphilis. (n.d.). Encyclopedia of Alternative Medicine. Retrieved April 03, 2008, from:
http://www.answers.com/topic/syphilis

Syphilis. (n.d.). McGraw-Hill Encyclopedia of Science and Technology. Retrieved April 03,
2008, from: http://www.answers.com/topic/syphilis

STD statistics for the USA. (2008, March 14). Retrieved April 3, 2008, from AVERT:
http://www.avert.org/stdstatisticusa.htm

Chapter 12
Tuberculosis

Nick Gillum, Sean Ryan, and Maia Selkow

Introduction

Tuberculosis has plagued mankind for centuries. For instance, the effects of
the disease were especially fatal to the inhabitants of the Hawaiian Islands in the late
18th century, when explorers brought more than their sailing ships and religion to the
islands. After a few years of exposure to tuberculosis, the population of Hawaii had
fallen by 50%.

Now the World Health Organization (WHO) estimates that someone
contracts tuberculosis every second (2007, para. 3). While only 5% to 10% of
these new cases will ever develop symptoms, it is easy to see why tuberculosis
was declared a global emergency in 1993 (NFID, 1999, para. 1).
Tuberculosis is believed to kill more people annually than any other single organism
on the face of the planet, and tuberculosis-related illnesses are the fourth most deadly
type of infectious disease known to man (“Newsletter,” 2001, para. 4).
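
To put these estimates in perspective, the short calculation below (a rough sketch in Python using only the figures quoted above) converts the per-second infection rate into annual totals:

    # Rough arithmetic based on the WHO estimate quoted above: one new
    # infection per second, of which 5% to 10% ever develop symptoms.
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365            # 31,536,000

    new_infections_per_year = SECONDS_PER_YEAR       # one per second
    active_low = int(new_infections_per_year * 0.05)
    active_high = int(new_infections_per_year * 0.10)

    print(f"New infections per year: {new_infections_per_year:,}")
    print(f"Expected eventual active cases: {active_low:,} to {active_high:,}")

At roughly 31.5 million new infections per year, even the low 5% progression rate implies well over a million eventual active cases annually.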
Civilizations as far back as the ancient Greeks have described its symptoms,
and it has left an indelible mark on the course of human history. Tuberculosis
epidemics in the 17th, 18th, and 19th centuries dramatically reshaped the populations
of entire cities, and modern outbreaks are proving to be nearly as devastating,
causing economic troubles in the developing countries where the disease is most
prevalent. As this disease becomes increasingly common, it is important to
understand what makes it so destructive.

History of Tuberculosis

Tuberculosis has existed for centuries and continues to be a deadly disease.
The earliest known case of tuberculosis was found in a Homo erectus from western
Turkey, apparently a young male with small lesions in the cranium that are
characteristic of leptomeningitis tuberculosa, a type of tuberculosis that attacks the
meninges of the brain (Most ancient case of tuberculosis found, 2007, para. 2-4).
Before this discovery, the earliest written mention of tuberculosis was in Ancient Greek writing,
where the famous physician Hippocrates mentioned in one of his books that
consumption (tuberculosis) was one of the most influential diseases of the time.
Evidence of tuberculosis exists in the preserved spinal columns of Egyptian
mummies dating from 2400 BC, which contain signs of tubercular decay. About 2000
years later, Hippocrates identified the disease as being widely prevalent in 460 BC
(“Brief History”, n.d., Treatment section, paras. 1-2). During the 17th century, the
so-called white plague spread rapidly across Europe; it infected nearly the entire
population and was responsible for thousands of deaths (“Tuberculosis”, 2008,
Mycobacterium tuberculosis section, para. 4).

The great white plague affected not only the genetic composition of Europe,
but also the artwork of the time period. The artists of the Romantic movement had a
morbid fascination with tuberculosis, which they referred to as the consumption. The
pale and atrophied bodies of tuberculosis patients were held up as the ideal form for
a human body. In particular, the image of the ideal woman was changed to reflect the
pale thinness of someone wasting away from TB. Portraits were commissioned that
displayed their subjects as pale and drawn, and white makeup became a popular
fashion accessory. The disease was viewed in such an idealistic light that many
Romantics viewed it as a fashionable death (Dormandy, 1999, pp. 85-100).

The disease also had an impact on European culture. During the 1800s, many
popular works of literature featured characters whose beauty was attributed to
tuberculosis (for example: Scènes de la Vie de Bohème, written by Henri Murger). The
authors and the readers believed that the disease ate away the victim’s body, leaving
behind their pure spirit; this line of reasoning was enough to create a bizarre
staple of literature (Porter, 1996, pp. 107-108).

Remote locations that were free of tuberculosis suffered greatly when explorers
introduced the disease. In 1778, Captain Cook sailed to Hawaii; his crew might have
brought the first cases of tuberculosis to the islands. If these sailors did not introduce
the disease, then the missionaries who arrived later certainly did. Many
natives were weakened by tuberculosis; their immune systems were unable to
withstand additional diseases, leading to many deaths. In an average year, 9% of the
natives died due to tuberculosis. After the span of three generations, as much as 50%
of all local families had been eliminated (Crosby, 2007, p. 135).

Tuberculosis has stumped physicians since its discovery in 1882 by the physician
Robert Koch, who was awarded the Nobel Prize in Physiology or Medicine in 1905
(Nobel Prize Foundation, n.d.). In his acceptance speech he said, “If the importance of a disease
for mankind is measured by the number of fatalities it causes, then tuberculosis must
be considered much more important than those most feared infectious diseases,
plague, cholera and the like. One in seven of all human beings dies from tuberculosis.
If one only considers the productive middle-age groups, tuberculosis carries away one-
third, and often more” (Koch, 1882, 109). Once the causative agent of the disease
was discovered, the next task was to find the cure.

Scientists hypothesize that the cystic fibrosis allele occurs most frequently in
European ethnic groups because of the so-called great white plague that raged from
the 16th century to the middle of the 20th century. The plague was a seemingly
inexorable tuberculosis epidemic that swept through Europe and caused an estimated
20% of all deaths during the time period (MacKenzie, 2006, para. 8). This caused a
bottlenecking effect on the gene pool of Europe. Those who were heterozygous for
cystic fibrosis were more likely to pass on their genes, leading to an
increased incidence of the allele. The lasting effect of the epidemic on the gene pool
bears testament to its scale.

Today, the disease has lost its romantic appeal, and numerous public health
organizations are fighting a losing battle to eradicate the disease. The eradication
efforts started in the latter half of the twentieth century after antibiotics made
tuberculosis treatable. Initially, their efforts seemed successful, and the number
of new cases of tuberculosis began to drop by an average of 5.5% every year.
However, during the late 1980s and 1990s the number of new TB cases rose sharply,
due mainly to a higher incidence of HIV infections. Although this number has recently
begun to fall again, there are few reliable predictions about the future of tuberculosis
(Loachimescu, 2004, para. 4).

Causative Agent

The causative agent behind tuberculosis is Mycobacterium tuberculosis. The
bacteria come from the same genus as Mycobacterium leprae, the microorganism
responsible for Hansen’s disease (leprosy). The cell cycle of M. tuberculosis is slow in
comparison to most other bacteria, with a 12 to 18 hour gap between cell
divisions (MicrobiologyBytes, 2007, para. 3). The tuberculosis bacterium is a single,
immobile, rod-shaped cell that is typically between 2 and 4 μm long and 0.2 to 0.5 μm
wide. It is an obligate aerobe, meaning that it needs to absorb oxygen in order to
survive. Typically, M. tuberculosis will infect the lungs (where it will receive plenty of
oxygen) and feed off of any macrophages that come to defend the body (Todar, 2005,
General Characteristics section).

The organism grows fastest in areas with high oxygen concentrations, which
explains its affinity for the oxygen-rich areas of the body like the brain and lungs
(Schachter, 2008, para. 11). Thanks to its unusual, waxy cell wall, M.
tuberculosis is naturally resistant to the antibiotic penicillin, forcing physicians to
turn to alternate treatments for the disease.

Treatment

The first treatment for tuberculosis was developed long before penicillin had
even been discovered. Sanatoriums, which were popular in the 1800s and early 1900s,
were isolated hospitals reserved entirely for patients with TB. By quarantining
tuberculosis patients in these secluded treatment facilities, physicians hoped to
prevent the contagious from infecting more people. Also, sanatoriums were seen as
the best place for a TB patient to recover. Many of the leading medical professionals
at the time realized that they were powerless in the face of tuberculosis. It was the
strength of the patient’s immune system, not the skill of the patient’s doctor, that
determined whether he or she would survive. The environment of fresh air and
relaxation offered by the sanatorium was deemed conducive to a strong immune
system, so it was thought to be the optimal place to treat someone with TB
(Dormandy, 1999, p. 149).
Sanatoriums would remain a popular treatment until the discovery of
streptomycin in 1943 ushered in the age of drug therapy. The compound, which was
derived from the soil bacterium Streptomyces griseus, was first used to treat a
tubercular patient in 1944. Although the drug was useful, multiple strains of the
bacteria soon developed resistance to the antibiotic, limiting its effectiveness.
However, when the newly discovered drug para-aminosalicylic acid was taken
alongside the streptomycin, their combined effect proved to be a slow but effective
method of treating TB. The third anti-tuberculosis medicine developed during the
middle of the twentieth century was isoniazid, a drug that soon became the most
popular treatment for TB (Nobel Prize Foundation, n.d., para. 13-16).

The modern-day treatment for tuberculosis is a continuation of the multiple-drug
technique developed to overcome the limitations of streptomycin. For the
first two months of treatment, the two main tuberculosis medicines, isoniazid and
rifampin, are taken with pyrazinamide, another anti-tuberculosis drug. This initial
onslaught of antibiotics eliminates nearly all of the tuberculosis bacilli in the body,
alleviating the symptoms of the disease. For the remaining four months of treatment,
low doses of isoniazid and rifampin are taken to kill any bacteria that may have
survived the first two months (Loachimescu, 2004, para. 17).
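
The regimen described above amounts to a simple two-phase schedule, summarized in the illustrative Python sketch below (drug names and durations are taken from the paragraph above; this is a reading aid, not medical guidance):

    # Illustrative summary of the six-month regimen described above.
    regimen = [
        ("intensive", 2, ["isoniazid", "rifampin", "pyrazinamide"]),
        ("continuation", 4, ["isoniazid", "rifampin"]),  # lower doses
    ]

    month = 0
    for name, months, drugs in regimen:
        start, month = month + 1, month + months
        print(f"{name} phase: months {start}-{month}: " + ", ".join(drugs))
    print(f"Total course: {month} months")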

The number of infections in the United States has gradually fallen since the
1950s. In 2007, 13,293 cases were reported, coinciding with the lowest infection rate
ever recorded (4.4 cases per 100,000 people). Unfortunately, the yearly decline in
infections was only 3.8% from 2000 to 2007, compared with 7.3% during the 1990s;
this slowdown suggests an increase in drug-resistant tuberculosis.
Nonetheless, the number of reported cases is expected to continue to decrease
(“Tuberculosis”, 2008, Tuberculosis section, para. 2).
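
As a quick consistency check, the two 2007 figures quoted above (13,293 cases at a rate of 4.4 per 100,000) imply a population of roughly 302 million, which matches the actual United States population in 2007:

    # Consistency check on the 2007 figures quoted above.
    cases = 13_293
    rate_per_100k = 4.4

    implied_population = cases / (rate_per_100k / 100_000)
    print(f"Implied US population: {implied_population:,.0f}")  # ~302 million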

Detection

A tuberculosis infection is most commonly detected by the Mantoux test, which
is performed by injecting the patient’s forearm with a sample of PPD tuberculin. Over
the next 48 to 72 hours, the patient will either develop a hard lump (indicating a
positive result) or notice no change at all. Additional tests may be recommended
based on these results; two examples include chest x-rays to locate damaged tissue
and cultures of bacteria found in the patient’s sputum to confirm the infection
(“Tuberculosis”, 2008, Screening and Diagnosis section, paras. 1-8).

Despite the popularity of the Mantoux test, it can have both false-positive and
false-negative results. If the patient has been vaccinated for tuberculosis, a false-
positive may result. Additionally, a false-negative may be caused by low levels of
bacteria cells or by a weak immune system. For example, if the patient was recently
infected, the bacteria will be present in insufficient concentrations to provoke a response.
Conversely, if the patient’s immune system has been overwhelmed by bacteria, it will
be unable to react to the Mantoux test (“Tuberculosis”, 2008, Screening and Diagnosis
section, paras. 1-8).
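
The interpretation logic described in the two paragraphs above can be summarized as a small decision procedure. The Python sketch below follows the text only; the function is invented for illustration, and real clinical readings also weigh the size of the induration and the patient's risk factors:

    def interpret_mantoux(hard_lump, vaccinated=False,
                          recently_infected=False, immune_overwhelmed=False):
        # Rough decision logic from the text above; not a clinical tool.
        if hard_lump:
            if vaccinated:
                return "positive (possible false positive: prior TB vaccination)"
            return "positive (confirm with chest x-ray and sputum culture)"
        if recently_infected or immune_overwhelmed:
            return "negative (possible false negative: too few bacteria "
                   "or a weak immune response)"
        return "negative"

    print(interpret_mantoux(hard_lump=True, vaccinated=True))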

Multi-Drug Resistant Tuberculosis

If the patient stops taking the drugs before the six-month regimen is complete, a
number of different complications can occur. First of all, the patient runs the risk of
his tuberculosis coming back, as any of the bacteria that survived the first two months
of treatment are capable of causing the disease again. The resurgent infection would
be more severe than the original because the initial treatment would have wiped out
all the bacteria without drug resistances. Thus the new colony would be composed
almost exclusively of resistant bacteria, making the infection much harder to treat. If
the patient were to pass on the illness to another person, he would be infecting him
or her with the more resilient bacteria.

Such scenarios explain the prevalence of multi-drug resistant tuberculosis
(MDR-TB), a form of the disease characterized by both isoniazid and rifampin
resistance. Patients may stop taking their medication once their symptoms go into
remission or once they can no longer afford to pay for it. In either case, their failure
to complete the full treatment contributes to the higher incidence of MDR-TB in
modern times. The WHO estimates that approximately 5.3% of all TB cases in the
world meet the definition of MDR-TB. It is also approximated that 20.0% of all
tuberculosis cases have some sort of antibiotic resistance, so even if a given infection
does not qualify as MDR-TB, it is still probable that it will be capable of
withstanding at least one of the major drugs used to treat it (WHO, 2008, pp. 11-14).
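
To see what these percentages mean in absolute terms, the sketch below applies them to a hypothetical annual caseload (the nine million figure is an assumption chosen for illustration, not a number taken from the WHO report):

    annual_cases = 9_000_000          # hypothetical caseload, for illustration
    mdr = annual_cases * 0.053        # 5.3% meet the MDR-TB definition
    any_resist = annual_cases * 0.20  # 20.0% resist at least one major drug

    print(f"Expected MDR-TB cases:          {mdr:,.0f}")
    print(f"Expected cases with resistance: {any_resist:,.0f}")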

Mode of Transmission/Symptom Progression

Tuberculosis can be divided into three distinct phases: the initial infection and
subsequent latency, the resurgence, and the late stages. The initial infection is
almost exclusively caused by inhaling some of the droplet nuclei that are expelled
every time an infected person coughs or sneezes. The mode of transmission is very
rarely anything but airborne (“Guidelines”, 2005, p.4). The one major exception to
this is a tuberculosis infection caused by Mycobacterium bovis, a sister bacterium to M.
tuberculosis that is found in dairy cows. This zoonotic bacterium is capable of crossing
the species barrier and is thought to be responsible for the first human cases of TB. It
is also hypothesized that M. tuberculosis is just an evolutionary offshoot of M. bovis
that became specifically adapted to survive in humans (MicrobiologyBytes, 2007, para.
2). Humans most often contract M. bovis when they drink unpasteurized milk, so it
does not pose a major threat in most developed countries.

Once a droplet nucleus is inhaled, the disease quickly establishes a foothold in


the lungs. As the bacterial colony begins to grow, it finds its way to the lymphatic
system, where it is transported throughout the rest of the body. This incites an
immune response, and a healthy human being is usually able to beat back this initial
infection within two to twelve weeks (“Guidelines”, 2005, p. 4). The immune system is
not, however, capable of eradicating the disease completely, so some bacteria escape
and remain hidden in the body. During the subsequent stage of TB, known as latency,
the patient will be asymptomatic and incapable of spreading the disease. It is
estimated that one third of the population of the world is in the latent stage of
tuberculosis (WHO, 2007, para. 3). The disease will remain latent as long as the
immune system remains strong enough to keep the infection at bay.

However, if a person with a case of latent tuberculosis becomes
immunosuppressed, the disease comes raging back. This explains the connection
between TB and HIV infections. HIV infections weaken the host’s immune system,
allowing once dormant tuberculosis colonies to become active and the disease to
flare up again. The vast majority of the immunocompromised HIV
patients are incapable of fighting off the new infection, so they almost invariably die
of the disease. This scenario occurs so often that tuberculosis is now the number one
killer of people with HIV (Goozé, 2004, para. 1).

Death by tuberculosis is slow and painful. On the outside, the patient becomes
pale and lethargic, slowly wasting away as the disease causes his weight to plummet.
His persistent cough soon begins to bring up blood from his lungs, and his normal
breathing begins to sound pained. During this stage, the patient is highly contagious,
capable of infecting doctors, nurses, or anyone else he may come in close contact
with. How long the patient will linger in this state varies, depending mainly on the
quality of care and the natural strength of the patient’s body (Luckmann & Sorensen,
1980, pp. 1340-1345).

On the inside, the bacteria are wreaking havoc on the internal organs. Hardest
hit are the lungs, where bacterial colonies cause the lung tissue to caseate, or turn
into a jelly-like amalgam of dead bacilli, necrotizing lung tissue, dying macrophages,
and living bacteria. These caseous lesions begin to calcify as the disease progresses,
rendering any remaining lung tissue in the lesion useless. Oftentimes doctors will use
x-ray machines to check for such calcified lesions when they are trying to diagnose
TB. Eventually, death comes when enough of the lung tissue caseates to cause
respiratory failure in the patient (Luckmann & Sorensen, 1980, pp. 1340-1345).

This is not the only way that tuberculosis can kill its host. If a lesion forms up
against any of the major blood vessels in the lungs, it can weaken the vessel to the
point of failure, resulting in massive hemorrhaging and almost certain death. Alternatively,
the tuberculosis can spread to other parts of the body like the spine, brain, or kidneys
and cause complications there. One of the most famous non-pulmonary forms of TB is
Pott’s disease, where tuberculosis bacilli infest a patient’s vertebrae. The spinal
infection causes severe pain, spinal deformities, and ultimately, paralysis.

Populations Most Affected and Geographic Distribution

On a global scale, tuberculosis is a highly localized disease, as 11.2% of the
countries in the world account for over 80% of the annual cases of TB. Nearly one
third of these cases occur in Southeast Asia, where overcrowding and poor medical
facilities create an environment perfect for incubating epidemics. While staggering,
even this number pales in comparison to the number of cases in sub-Saharan Africa,
an area that is estimated to have nearly twice as many TB patients as Southeast Asia.
Beyond this large number of cases, Africa is also one of the worst places in the world to
contract TB. The prevalence of HIV coinfections and substandard medical care
combine to make Africa the most deadly place to contract tuberculosis. No other
region in the world has a higher mortality rate per capita than Africa does (WHO, 2007,
para. 4).

On a local scale, tuberculosis is most likely to strike people of a low
socioeconomic status. More specifically, statistics show that homeless people, drug
abusers, and those confined to low-income housing are more likely to contract the
disease than people of other demographics. Being HIV positive also increases the
likelihood of an infection. In fact, thirteen percent of all people with an active case
of tuberculosis are coinfected with HIV (Loachimescu, 2004, para. 36). On the
opposite end of the spectrum, new research shows that members of European ethnic
groups are less likely to get an active case of TB than others are. Europeans have a
high incidence of the cystic fibrosis allele, which has been shown to help ward off
active tuberculosis in much the same way that the sickle cell anemia allele protects
against malaria (MacKenzie, 2006, para. 1). Similarly, a single Tay-Sachs allele will
protect against tuberculosis; this attribute was made clear after Tay-Sachs carrying
Europeans were exposed to a tuberculosis epidemic during World War II (Lewis, 1997,
Tay-Sachs Disease section, para. 1).

Immunization

At present, there are no widely accepted immunizations against tuberculosis.
Some organizations recommend that people be inoculated with Bacillus Calmette-
Guérin, a vaccine derived from M. bovis bacteria, but there remain many unresolved
issues regarding the drug. Firstly, many doubt its effectiveness, claiming that it does
little to prevent the onset of TB. Secondly, the vaccine can cause a false positive on
the diagnostic test for tuberculosis, so many doctors feel the risk of a false positive
outweighs the potential benefits offered by the so-called vaccine (Loachimescu, 2004,
para. 31). Researchers are presently looking for a new vaccine that is capable of
eradicating tuberculosis in much the same manner that vaccination has nearly
eradicated polio, but as of yet, their efforts have been in vain.

Until then, tuberculosis will continue to have an impact on the course of human
history. Although it is unlikely that the human race will ever have to deal with a
tuberculosis epidemic the size of the great white plague again, the increasing
prevalence of MDR-TB raises fears of a resurgence of the disease. Even if the disease
remains at present levels, it is a great enough public health issue to warrant
immediate international attention. More specifically, more time and resources should
be dedicated to the lethal combination of HIV and TB that is becoming increasingly
common in sub-Saharan Africa. Such efforts will help bring mankind one step closer to
eradicating a scourge that has haunted it since ancient times.

Literature Cited
Brief history of tuberculosis. (n.d.) Retrieved March 3, 2008, from
http://www.umdnj.edu/~nt bcweb/history.htm

Crosby, A. W. (1994). Germs, Seeds & Animals: Studies in Ecological History. Armonk,
New York: M. E. Sharpe.

Dormandy, T. (1999). The White Death: A History of Tuberculosis. New York: New
York University Press.

Goozé, L. (May 2004). Tuberculosis and HIV. Retrieved April 3, 2008 from
http://hivinsite.ucsf.edu/InSite?page=kb-05-01-06#S2X

Guidelines for Preventing the Transmission of Mycobacterium tuberculosis in Health-
Care Settings. (December 30, 2005). Morbidity and Mortality Weekly Report, 55(RR-17).

Lewis, R. (1997). Evolution: human genetics: concepts and application. Retrieved
April 23, 2008, from http://www.pbs.org/wgbh/evolution/educators/
course/session7/explain_b_pop1.html

Loachimescu, O., & Tomford, J., (2004) Tuberculosis. Retrieved March 4, 2008, from
http://www.clevelandclinicmeded.com/medicalpubs/diseasemanagement/pulmonary
/tb/tb.htm

Luckmann, J., & Sorensen, K. (1980). Medical-surgical nursing: A psychophysiologic
approach. Philadelphia: W.B. Saunders Co.

MacKenzie, D. (September 2006). Cystic fibrosis gene protects against tuberculosis.
New Scientist, 15(10). Retrieved April 3, 2008, from http://www.newscientist.com/
article.ns?id=dn10013

MicrobiologyBytes. (2007). Mycobacterium Tuberculosis. Retrieved April 1, 2008,
from http://www.microbiologybytes.com/video/Mtuberculosis.html

Most Ancient Case of Tuberculosis Found in 500,000-Year-Old Human; Points to Modern
Health Issues. (2007, December 8). ScienceDaily. Retrieved April 4, 2008, from
http://www.sciencedaily.com/releases/2007/12/071207091852.htm

National Foundation for Infectious Diseases (NFID). (April 1999). Tuberculosis: a global
emergency. Retrieved April 3, 2008, from http://www.nfid.org/factsheets/tb.html

Nobel Prize Foundation. (n.d.) Robert Koch and Tuberculosis. Retrieved April 3, 2008,
from http://nobelprize.org/educational_games/medicine/tuberculosis/readmore.html

Porter, R. (1996). Medicine. New York: Cambridge University Press.

Schachter, E. (2008). Tuberculosis: The comeback bug. Retrieved March 4, 2008, from
http://www.thedoctorwillseeyounow.com/articles/other/tb_10/

Todar, K. (2005). Tuberculosis. Retrieved April 1, 2008, from
http://www.textbookofbacteriology.net/tuberculosis.html

Topics in Infectious Diseases Newsletter. (September 2001). Retrieved April 2, 2008,
from http://wordnet.com.au/Products/topics_in_infectious_diseases_Sep01.htm2

Tuberculosis. (2008). Retrieved April 1, 2008, from
http://www.mayoclinic.com/health/tuberculosis/DS00372/

World Health Organization. (March 2007). Tuberculosis Fact Sheet. Retrieved March 4,
2008 from http://www.who.int/mediacentre/factsheets/fs104/en/index.html

World Health Organization. (2008). Anti-tuberculosis drug resistance in the world:
Fourth Global Report. Geneva: World Health Organization.

Chapter 13
Typhus
Dan Marsden, Amber Truhanovitch, Adam Burks

Field Marshal Bernard Montgomery once said, “One of the great laws of war is
never invade Russia” (Russia in World War 2, n.d., The Russian people starts fighting
seriously section, para. 1). One man who knows the dire consequences of disobeying
this law in the winter is the great French general Napoleon Bonaparte. As is described
in the following paragraphs, Napoleon’s epic campaign to create the great French
Empire was undone in the Russian winter by one of the most influential diseases in
all of history, epidemic typhus.

History

The first recorded incidence of typhus may have been the Athenian Plague of 430 B.C.
Thucydides describes a disease similar to typhus in his account of The Peloponnesian
War. Whether or not the pestilence truly was epidemic typhus caused by the
microorganism Rickettsia prowazeki, it was the primary cause of the downfall of
Athens (Conlon, n.d, Early History section, para. 1).

There were no large-scale outbreaks in the Dark Ages because there were no
great wars to spread the disease and no great empires for it to topple. However,
typhus ravaged the countryside because of the poor living conditions. The medieval
Catholic Church encouraged unsanitary living conditions, and people of the time,
out of deep religious belief, unquestioningly obeyed by not concerning themselves
with hygiene (Conlon, n.d, The Dark Ages section, para. 2).

The earliest definite record of typhus occurred in the fifteenth century.
Ferdinand and Isabella of Spain raised an army to rid their country of the Muslim
Moors. However, an epidemic of typhus spread through their troops, killing 17,000 of
the original 25,000 men. Because of the epidemic, Ferdinand and Isabella’s troops
were defeated, and part of Spain remained in Muslim hands, greatly impeding the
country’s progress in the Renaissance and Enlightenment. Furthermore, because this
is the first definite record of typhus, it is possible that the Spanish crusaders
introduced it to the rest of Europe on their return from Cyprus (Conlon, n.d, The
Fifteenth Century section, para. 1).

In the first half of the sixteenth century, Spain and France were battling for
European dominance. During the fight, the Spanish army was hit by a plague in the
city of Naples, which it was occupying. The army was diminished to 11,000 troops,
and the French had no trouble finishing the job that the plague had started. The
disease that ravaged the Spanish army was not typhus, but the plague that
subsequently hit the French army was. The original mass of 35,000 troops was shortly
reduced to only 10,000. The people of the time considered the destruction an act of
God; they thought He was unhappy with the French victory. Charles V of the Holy
Roman Empire defeated the debilitated French army, and he was named Holy Roman
Emperor because of the victory. However, the interaction with the French army
introduced typhus to his troops. This was important in ruining Charles V’s dream of
destroying the northern Protestants, a dream that would have halted the northern
Renaissance. Thus, partly because of typhus, countless Christian sects exist today (Conlon,
n.d, The Sixteenth Century section, para. 2-6).

Typhus influenced two major occurrences in the seventeenth century. The
first half of the Thirty Years’ War was dominated by the disease, causing the death of
ten million people with the help of the plague and starvation. The Thirty Years’ War
was broken up into three eras: the Bohemian, the Swedish, and the French. The
Bohemian army of Baron Von Wallenstein and the Swedish army of Gustavus Adolphus
were set and ready to battle, but a bout of epidemic typhus hit both armies. The
disease caused both sides to suffer significant losses, and they had to postpone battle.
The outcome of either phase of the war could have been quite different (Conlon,
n.d, The Seventeenth Century section, para. 1).

The concept of the divine right of kings was prominent in time periods up to
the seventeenth century. However, typhus brought the first real challenge to this
type of monarchy. The Earl of Essex wished to depose the ruler of
England, Charles I, who had proclaimed himself absolute despot in the name of God.
Charles assembled a large army to bring shame to the Earl, but typhus caused many of
Charles’ men to perish. In light of the emerging scientific revolution, this brought
skepticism to the people who truly believed Charles’ divine justification to the
throne. The divine right of kings never again enjoyed wide acceptance after Charles I (Conlon,
n.d, The Seventeenth Century section, para. 2).

The War of the Austrian Succession provided prime conditions for the
transmission of typhus. It caused the death of 30,000 Prussians in the French attempt
to seize Prague, allowing the French to take over the city and defeat the Prussians.
This war may not have been significant in history, but it caused the Germans to begin
the search for preventative measures (Conlon, n.d, The Eighteenth Century section,
para. 1).

Possibly the most famous incident of typhus in history is its effect on Napoleon
and his Grande Armée. In 1812, Napoleon wanted to invade Russia and India to
continue expanding and glorifying the French Empire. Napoleon knew the dangers of
war-time epidemics, typhus in particular. Hence, he took the initiative to establish
field hospitals in Germany and enforce good hygiene among his troops. However, as
he marched through Europe, he disregarded the severity of the Russian winter, which
would eventually prove to be his downfall (Conlon, n.d, The Nineteenth Century
section, para. 2).

Napoleon’s supply trains got caught on the poorly-constructed Polish roads.
Disregarding his generals’ advice not to advance, Napoleon marched on. At this time
the Polish people were ridden with typhus, so Napoleon ordered his troops not to
fraternize with the citizens. Without the supply trains, food quickly ran short, and
soldiers needed to raid the villages for food. The hungry soldiers consequently caught
typhus from the villagers. Eighty thousand troops died in the first month as
infected soldiers spread the disease to healthy ones. Furthermore, a European
drought that year made water scarce. Since the soldiers could only find enough water
to drink, the troops washed neither themselves nor their clothes, rendering the
conditions unhygienic, thereby breeding more body lice. To make matters worse,
Poland experienced an unusually cold winter that year. To remain warm, the soldiers
had to huddle around each other, making the spread of body lice and typhus even
easier (Conlon, n.d, The Nineteenth Century section, para. 3).

The Russians did not even bother acting. They knew exactly what was going to
happen, so they sat and waited as Napoleon’s army trudged on to Moscow. Only
90,000 soldiers out of the original 600,000 made it to Moscow. Out of those who
perished, approximately 300,000 had died from epidemic typhus. When the Grande
Armée entered Russia, it found that Russia had employed the “scorched earth”
method to worsen the French food shortage. Needless to say, Napoleon’s army was in
ruins, and only 90,000 troops returned to France (Conlon, n.d, The Nineteenth
Century section, para. 4).

Despite Napoleon’s blunder, the very next year he managed to assemble an
army of 500,000 men to accomplish what his previous campaign could not. This time,
219,000 of his troops died of epidemic typhus. Once again, the greatest general the
world has ever known was defeated not by a formidable foe on the battlefield, but by
epidemic typhus (Conlon, n.d, The Nineteenth Century section, para. 6).

Epidemics of typhus often accompanied crop failures in Ireland throughout the
19th century. The potato famine of 1846 virtually starved the Irish into begging the
English for assistance. The English forced the Irish to do petty jobs such as building
roads that led nowhere. The Irish were forced to live in crowded workhouses, which
were breeding grounds for body lice and epidemic typhus because of the poor hygiene
that accompanied the Irishmen. The death, starvation, and horrendous life conditions
caused by the epidemic typhus forced one million Irishmen to emigrate (Conlon, n.d,
the Nineteenth Century section, para. 8).

The Twentieth Century

Although Charles Nicolle (see the Treatment section below) had discovered much
about epidemic typhus, advancements did not come fast enough for World War I. The
war began when Austria invaded
Serbia. The over-crowding, poor hygiene, and general hysteria caused by the invasion
provided prime conditions for the spread of the disease. The epidemic in Serbia then
spread to the Austrian armies. In the course of one month in 1914, typhus caused
200,000 Serbian troops to perish, and the disease spread to the rest of the Serbian
population (Conlon, n.d, the Twentieth Century section, para. 1-3).

The Austro-German army could have easily invaded Serbia and devastated the
remaining people, but the memories of the typhus in the War of Austrian Succession
prevented their doing so. The Germans had to wait six months to invade Serbia again,
and in the meantime the western powers were able to assemble a formidable army.
The western armies then forced Germany into a two-front war; the prolonged fighting
eventually contributed to the demise of Russia itself. Clearly, typhus was integral in
the outcome of World War I (Conlon, n.d, The Twentieth Century section, para. 4-5).

Between 1918 and 1922, Russia suffered from poor socioeconomic, military,
and political conditions. In this time period, 25 million cases of typhus were
diagnosed, 3 million of whom died. The Russian citizens were fed up with their
incompetent government and miserable due to the poor conditions that they were
forced to endure. It can be reasonably extrapolated that epidemic typhus was one
of the main players in the overthrow of the old government. It can also be argued
that typhus allowed the change from czarism to communism (Conlon, n.d, The
Twentieth Century section, para. 7).

Thanks to Dr. Nicolle and Dr. Cox, typhus did not determine the outcome of
World War II. Although few soldiers were afflicted with the disease, the inmates at
the concentration camps were not so lucky. Buchenwald alone was home to 8,000 typhus
patients upon its liberation. The total number of cases in the war is debated,
but it is more than probable that an astonishing number of inmates died in the
concentration camps because of the epidemic (Conlon, n.d, The Twentieth Century
section, para. 12). Anne Frank, the teenaged girl whose diary account of the
Holocaust has received international renown, died in Bergen-Belsen along with her sister
Margot as a victim of typhus.

The final time that a city was threatened by a major epidemic of typhus was
Naples, Italy, in the winter of 1943. The Germans, during WWII, occupied Naples for
some time, and the civilians of Naples took refuge in cramped bomb shelters to
escape Allied bombing. By January 1944, 700 civilians had already died from typhus,
and the inhabitants of Naples were afraid of an ensuing epidemic. Fortunately, by this
time the Allies had occupied the city and immediately organized a mass delousing
using DDT and the immunization of all the inhabitants of Naples. This proved that
modern methods for halting typhus were effective (Wheeler, 1944, pp. 119-121).

Treatment

Dr. Charles Nicolle was the single most influential person in discovering the
properties of typhus and in developing its preventative measures. When a
victim bathed and changed clothes, Nicolle found that the person was no longer able
to pass the disease on to anybody else. Because of this discovery he concluded that
the body louse was the means of transmission (Henschen, 1928, para. 10).

Dr. Nicolle successfully inoculated chimpanzees with typhus via blood
injection, thereby demonstrating that the infectious agent circulates in the victim’s
blood (Henschen, 1928, para. 11). Nicolle found that he could inoculate many
different animals with the bacteria, thereby preserving the disease in the laboratory
for an unlimited length of time (Henschen, 1928, para. 16). Furthermore, during this
study, he noticed that monkeys who had survived their original bout of typhus were
immune to it for up to a year. This was perhaps his greatest achievement because it
paved the way for the development of an immunization shot (Henschen, 1928, para.
15).
He also discovered that an organism can harbor the R. prowazeki bacterium without
showing any symptoms of the disease. He found that some infected guinea pigs were not
affected by typhus, but they were still carriers and could unknowingly transmit the
Rickettsia microorganism to others (Henschen, 1928, para. 17).

Nicolle and his team tested his immunization shot in real-life situations by
introducing it to the people of the Tunisian city of Tunis. In Tunis, typhus had raged
each winter in recent memory, but, within two years, Nicolle had completely rid the
city of the disease (The Nobel Foundation, 1928, para. 2).

A simple and effective way to treat a disease is to prevent it from occurring in
the first place. Drs. Wheeler and Soper of the Rockefeller Foundation Health
Commission introduced a delousing method by way of DDT. This is a highly effective
and rapid method of preventative care (Soper, 1944, pp. 36-37).

Causative Agent

Typhus is caused by the bacterium Rickettsia prowazeki, which is transmitted by
ticks, fleas, mites, and, most commonly, lice. Endemic typhus is spread by infected
fleas, which feed on healthy rats, passing on the bacteria. Healthy fleas then become
carriers of the disease when they feed on the rats. Finally, the fleas feed on human
blood, thereby infecting the human with typhus. This is a rather slow process,
rendering endemic typhus not a very effective killer (Bayne-Jones, 1944, p. 822).
Epidemic typhus, however, is more dangerous, and typically causes hundreds of
thousands of deaths in a very short time. Epidemic typhus is passed from one victim
to the next via body lice (Pediculus humanus corporis), which thrive in cramped,
unhygienic places such as prisons, concentration camps, and war-torn cities. The body
louse feeds on its prey, but this does not transmit the disease. Rather, while the louse
feeds, it defecates on the victim’s skin. The feces hold the Rickettsia bacteria; when
the victim scratches the bite, the feces are rubbed into the open wound, thereby infecting
the victim with typhus (Dolce, 1941, p. 555).

The name Rickettsia prowazeki was coined by Brazilian physician, pathologist,
and epidemiologist Henrique da Rocha Lima. He named the organism after two
scientists who perished from exposure to the disease. Howard Taylor Ricketts
was an American bacteriologist and pathologist. Stanislaus von Prowazek was a
coworker of Rocha Lima who contracted typhus while the two were studying an
outbreak among Russian prisoners of war (Henrique da Rocha Lima, n.d., para. 2-4).

Symptoms and Diagnosis

The patient suddenly becomes seriously ill with chills, headaches, and bodily
pains preceding an elevated fever. The most recognizable characteristic of a typhus
sufferer is the pink-spotted rash that covers the body, except for the face and head,
which arises four to six days after the outbreak of the fever (Dolce, 1941, p. 556).
There are many other symptoms associated with typhus: clouded mind, irrationality,
dehydration, gangrene, blood vessel thrombosis, bronchitis, pneumonia, kidney
damage, muscular twitching, and deafness. Not all of these conditions occur in all
typhus patients, however. The fatality rate ranges from 5% for children to 60% for the
elderly (Bayne-Jones, 1944, p. 823).

In most cases the spotted rash that appears early is enough to diagnose typhus
because it is so easily distinguished from the rashes of other diseases. Also, most cases
of typhus occur in epidemics; therefore, if an epidemic has broken out, it is reasonable
to assume that the rash is a symptom of typhus. However, sometimes it is not that easy,
and the Weil-Felix reaction takes into account such situations. This test looks for
evidence of the agglutination (clumping together) of the patient’s blood
serum with the Bacillus proteus strain known as X-19. The agglutination occurs in high
enough dilutions by the fifth or sixth day of the disease, by which time the rash
should supply sufficient evidence of typhus (Dolce, 1941, p. 557).

Literature Cited

Bayne-Jones, S. (1944). Typhus. American Journal of Nursing , 44 (9), 821-823.

Busvine, J. R. (1976). Insects, Hygiene, and History. London: The Athlone Press.

Conlon, J. M. (n.d.). The Historical Impact of Epidemic Typhus.

Henrique da Rocha Lima. (n.d.). Retrieved March 3, 2008, from Who Named It?:
http://www.whonamedit.com/doctor.cfm/3185.html

Henschen, P. F. (1928, December 10). Presentation Speech - The Nobel Prize in
Physiology or Medicine. Sweden.

James, A., & Dolce, M. (1941). Typhus Fever. American Journal of Nursing, 41(5), 555-558.

Wheeler, C. M. (1944). Control of Typhus in Italy, 1943-1944, by Use of DDT. American
Journal of Public Health, 119-129.

Roosevelt, F. D. (1942). Establishing the United States of America typhus commission.
Washington D.C.

Russia in World War 2. (n.d.). Retrieved April 13, 2008, from www.2worldwar2.com:
http://www.2worldwar2.com/russia.htm

Soper, F. L. (1944, February 28). Typhus in Naples. Life Magazine , pp. 36-37.

The Conquest of Typhus. (1944, June 4). New York Times.

The Nobel Foundation. (1928). Charles Nicolle - Biography. Retrieved March 6, 2008,
from Nobelprize.org: http://nobelprize.org/nobel_prizes/medicine/
laureates/1928/nicolle-bio.html

Chapter 14
Typhoid Fever

Kate Baker, Ariel Wexler, and Maegan Boutout

Typhoid fever is an intestinal disease caused by a typhoid bacillus microbe.
When infected, bacilli invade the lymphatic tissues of the entire intestinal tract
leading to a mild case of enteritis, an inflammation of the small intestine (Edsall, 1959,
p.989). Salmonella typhi, a virulent serotype of Salmonella which causes typhoid
fever, is not found in animals; the disease is only spread through the feces of humans
and contaminated food and water (Tortora, Berdell, & Case, 2004, pp.714-715).
There is a two to three week incubation period after the typhoid bacillus enters the
body followed by multiplication of the infectious germ which leads to ulcerations of
the intestines and to growth of the mesenteric glands (Whipple, 1908, pp. 1-2).
Although the prominence of typhoid fever is currently declining in the United States,
it is still a frequent killer in unsanitary portions of the world. In recent years there
have been 350 to 500 annual cases of typhoid in the United States, most of which
were acquired during periods of foreign travel (Tortora et al., 2004, p. 715).

Symptoms

Symptoms of an infected person may include a high fever of
approximately 104°F and a continual headache (Tortora et al., 2004, p. 715).
Additional symptoms include frequent, loose bowel movements, rose spots about 0.3
centimeters in diameter appearing on the abdomen, and nervous tremors and delirium.
The disease typically lasts for approximately four to five weeks
before the body will naturally recover. The initial symptoms worsen during the
second and third weeks of the infection, and they begin to diminish toward the fifth
week. Typhoid will cause the afflicted to experience severe illness and discomfort;
there are many cases in which people die from typhoid fever as well.

Early History

Given that typhoid is a disease of civilization, spread primarily by food, water,
and human contact, it probably caused little illness in prehistoric man. Yet, it was
almost certainly a concern of the early cities established in the Tigris-Euphrates
Valley (Edsall, 1959, p.989). Contracting this disease was more common before the days
of proper sewage disposal and food sanitation due to the nature of the transmission of
the disease (Tortora et al., 2004, p. 715).
Although typhoid fever existed in societies for centuries, it was only recognized
as a distinct disease in the nineteenth century when Bretonneau, a French physician,
entitled it “dothinenteritis,” literally meaning a “boil on the intestines.” Within
several years of Bretonneau’s discovery of dothinenteritis, a colleague of his, Louis,
coined the more common name for the disease that translated into English as
“typhoid fever.” Unfortunately, this name was misleading because it meant “typhus-
like.” The result of this confusing nomenclature was that the general public did not
differentiate between typhoid and typhus for a number of years (Edsall, 1959,
p. 990).

Trousseau’s papers on typhoid, written in 1826, explained that “This disease,
just as common and no less murderous than smallpox, measles, and scarlet fever, that
few people go to the end of their lives without having experienced its attacks, affects
an individual only once during life, and is perhaps of a contagious nature” (Abhyankar,
2002, para. 8). Three years after Trousseau’s paper on typhoid was published, Pierre
Louis detailed the post-mortem findings of typhoid. He thoroughly described the
inflammation and ulceration of the Peyer’s patches, and he was also the first person
to actually call this disease by its modern name: typhoid (Abhyankar, 2002, para. 9).

Thomas Willis was the first person to differentiate between typhoid fever and the
many diseases that are similar to it. His remarks were first recorded in 1659, yet his
work was not recognized until 1984, when it was translated into English
(Abhyankar, 2002, para. 2). Given that Thomas Willis’s descriptions of typhoid from
approximately 350 years ago are very similar to the modern descriptions of typhoid, it
is safe to assume that the typhoid bacillus has not evolved into a more virulent
bacterium.

One commonly mentioned case of typhoid fever is that of Mary Mallon. She
was the first identified “healthy carrier” of typhoid fever; she displayed no symptoms
visible to the average observer. She was born on September 23, 1869 in Ireland and
immigrated to the United States at approximately the age of 15. Upon discovering her
innate capabilities as a cook, she decided to use her skill to earn a respectable wage
(Rosenburg, n.d., p. 1). Over the summer of 1906, Mrs. Mallon became a cook at the
Charles Henry Warren residence, which comprised eleven people. Within a short
period of time, six of these residents became ill with typhoid fever (Rosenburg, n.d.,
p. 1).
After the Warren incident, George Soper, a civil engineer who investigated
typhoid outbreaks, began tracking the employment history of Mrs. Mallon suspecting
that she was the cause of the locally spread illness. From the year 1900 to 1907, Mary
had worked at seven jobs in which 22 people had fallen ill with typhoid fever shortly
after she began her service at each location (Rosenburg, n.d., p. 1). Subsequent to
finding Mary Mallon, Soper and several policemen forcefully escorted her to a nearby
hospital, but not without a slight battle from the frightened Mrs. Mallon. At the
hospital, Mary was determined to have typhoid fever after identifying that her blood
samples contained typhoid bacilli (Rosenburg, n.d., p. 2).

Treatment

While the infectious nature of diseases was still little understood, a noteworthy
explanation of the spreading of typhoid was suggested by William Budd of Devon in

1856, twenty years before the bacterial origins of disease had been discovered. He
purported that typhoid was not spread by stench, but rather spread by contaminated
water, milk, and the hands of those who attended the sick (Abhyankar, 2002, para.
12). Modern science has since confirmed William Budd’s theories.

Taking heed that unhygienic conditions were a possible cause of typhoid,
England passed a Public Health Act in 1875 that improved sanitary conditions and
water supplies. These modifications more than halved the
death rate over the next decade and reduced the local incidence of typhoid fever. In
fact, during the early 20th century, the simple remedy of bathing the infected person
in cold water every three hours was still preached. Simple as it was, this
approach did reduce both mortality and toxemia for the diseased (Abhyankar,
2002, para. 20).

The first prophylactic inoculation to typhoid was released into the public in
1897. This vaccine became popular during the 1914-18 War after showing its
potential in India before the Great War. Later, in 1948 an article was published on
the use of Chloromycetin against typhoid, explaining the drastic effects of the drug.
The period of illness was diminished by a factor of ten (from 35 to 3.5 days) with a
corresponding lessening of blood poisoning and death rate (Abhyankar, 2002, para.
24). By 1951, after continual experimentation with various drugs, reports were
published regarding treatment trials of cortisone mixed with chloramphenicol which
indicated an increased rate of recovery. Unfortunately, the possibilities of relapse
and death were still unresolved.

Immunization

Immunization can be acquired via the oral vaccine (Vivotif) or by a single-dose
injectable vaccine. Vivotif contains a weakened strain of the typhoid bacilli which
allows the body to easily fight off the bacteria. Approximately one week is required
for the Vivotif capsules to reach the intestines, where they begin to stimulate
immunity. Protection provided by this oral vaccine should last about five years (Barnas,
1998, para. 2).

On the other hand, there is the single-dose injectable vaccine (Typhim Vi)
which contains capsular polysaccharide antigen. This vaccination became available in
February of 1995, and its effects remain robust for an average of two years
(Barnas, 1998, para. 3).

Literature Cited

Edsall, G. (1959). Typhoid fever. The American Journal of Nursing, 59, 989-992.

Barnas, G. P. (1998). Typhoid fever vaccines. Retrieved April 3, 2008, from
http://healthlink.mcw.edu/article/907107823.html

Rosenburg, J. (n.d.). Typhoid Mary. Retrieved March 4, 2008, from
http://history1900s.about.com/od/1900s/a/typhoidmary.htm

Abhyankar, A. (2002, April 21). History of Typhoid. Retrieved March 4, 2008,
from http://www.geocities.com/avinash_abhyankar/history.htm

Whipple, G. C. (1908). Typhoid Fever. New York: Chapman & Hall, Ltd.

Tortora G.J, Berdell R. F., & Case C.L. (2004). Microbiology. San Francisco:
Pearson Education.

Chapter 15
Yellow Fever

Jennifer Lu, Joseph McMahon, Audrey Ross

Introduction

Yellow fever has likely been around for all of human existence, but it only began
menacing human populations when travelers susceptible to the disease ventured into
areas with vector mosquitoes. It is a zoonotic disease, transmitted by
mosquitoes and contracted by primates. It is currently a major threat to
travelers who go to tropical areas without prior vaccination. The vaccination,
however, carries a small risk of causing viscerotropic disease, which has a 60% fatality
rate.

Severe yellow fever has an average fatality rate of 50%. Its name originates from its
most easily identifiable symptom: bright yellow skin. Its etiology and epidemiology
make it clear that it is a unique and ancient disease, with a very close
connection to human history. Modern day efforts against yellow fever include:
vaccination, mosquito eradication, and incubation prevention.

Causative Agent

Yellow fever is caused by a single-stranded RNA virus of the family Flaviviridae, which
contains over fifty other viruses, such as the dengue fever and West Nile viruses.
Yellow fever is transferred between primates by vector mosquitoes. The zoonotic
transfer from the mosquitoes to primates generally includes humans and monkeys in
South America and West Africa. There are two versions of yellow fever: urban and
jungle. Urban yellow fever is the more historically renowned of the two because of
the ease at which it spreads through highly populated areas in the seventeenth to
twentieth centuries.

The urban version is spread by female Aedes aegypti mosquitoes, which breed
well in urban settings because they require a container of shallow water to
reproduce. When these mosquitoes bite humans carrying urban yellow fever, they
become vectors after the virus replicates for about ten days in their salivary
glands; when they then bite another person, they transmit the disease into the new
person's bloodstream. The jungle form, first recognized in the 1930s, is transmitted
by female Haemagogus capricorni, Haemagogus spegazzini, and Haemagogus spegazzini
falco mosquitoes (Bryant, Holmes, & Barrett, 2007, Introduction).
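
To make the timing of this transmission cycle concrete, the following minimal
sketch (in Python) models a single mosquito that picks up the virus from one bite
and becomes infectious only after the roughly ten-day replication period described
above. The class, constant, and method names are our own illustrative inventions,
not taken from any source:

    # Illustrative model of the urban transmission cycle: a mosquito
    # becomes a vector only after the virus has replicated for about
    # ten days in its salivary glands. All names are hypothetical.
    EXTRINSIC_INCUBATION_DAYS = 10

    class Mosquito:
        def __init__(self):
            self.days_since_infected = None  # None = not carrying the virus

        def is_vector(self):
            """True once the virus has replicated long enough to spread."""
            return (self.days_since_infected is not None
                    and self.days_since_infected >= EXTRINSIC_INCUBATION_DAYS)

        def bite(self, person_is_infected):
            """Bite a person; return True if the bite transmits the virus."""
            transmits = self.is_vector()
            if person_is_infected and self.days_since_infected is None:
                self.days_since_infected = 0  # mosquito picks up the virus
            return transmits

        def next_day(self):
            if self.days_since_infected is not None:
                self.days_since_infected += 1

    mosquito = Mosquito()
    mosquito.bite(person_is_infected=True)          # day 0: feeds on a sick human
    print(mosquito.bite(person_is_infected=False))  # day 0: cannot yet transmit -> False
    for _ in range(EXTRINSIC_INCUBATION_DAYS):
        mosquito.next_day()
    print(mosquito.bite(person_is_infected=False))  # day 10: transmits -> True

The None sentinel distinguishes a mosquito that has never fed on an infected person
from one on day zero of the replication period.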

Native jungle monkeys serve as the zoonotic reservoir for the jungle strain.
There is also a clear connection between the growth of port cities and the magnitude
of yellow fever infections. Old ships usually contained water barrels or other
containers for drinking or washing water, which perfectly suited mosquito
reproduction. This allowed mosquitoes, and therefore yellow fever, a large range of
travel. In the past, this and other factors accounted for millions of deaths, which
has had a significant historical impact (McGrew, 1985, p. 357).

Yellow fever has many complications. Symptoms develop three to six days after
a person is bitten by an infected mosquito. The disease has three main stages. The
first commonly brings headache, muscle aches, fever, loss of appetite, vomiting, and
jaundice. This stage lasts three to four days, after which the disease goes into
remission. Once the symptoms are gone, most people recover completely, but some
patients move to the third and most dangerous stage within 24 hours of the period of
remission.

In the third stage, the period of intoxication, multi-organ dysfunction occurs:
the body suffers liver and kidney failure. Other symptoms of the third stage include
bleeding disorders and hemorrhage, delirium, seizures, coma, and shock.
Unfortunately, when patients with yellow fever reach the third stage, death occurs
50% of the time (Monath, 2007, pp. 2222-2225).

History

Disease has often played a major role in history, and yellow fever has had its
own impact; in particular, those who witnessed its atrocious symptoms became deeply
fearful of the disease. Yellow fever has likely existed for an extended period of
time, as suggested by the partial immunity many African populations have to it. Some
of the earliest records of yellow fever were written shortly after Columbus arrived
in the New World. It is not known whether yellow fever was present in South America
prior to this event, because there is evidence supporting both sides.

Analysis of historical Mayan records suggests that yellow fever was present in
South America. However, in 1495 there were also reports of a disease with similar
symptoms at Santo Domingo. Both pieces of evidence seem doubtful, because the first
solid evidence of yellow fever in the Caribbean dates to 1635, which suggests that
the disease was instead brought over by the slave trade roughly 140 years after
Columbus (Weir & Haider, 2004, Origins). In 1647, an epidemic broke out in Barbados.
It was apocalyptic to Native Americans, who were very susceptible to the virus.
Yellow fever spread to the Yucatan, St. Kitts, Guadeloupe, and Cuba in 1648-1649, to
Sierra Leone in 1764, and to Senegal in 1778 (McGrew, 1985, p. 358).

Yellow fever first reached colonial North America towards the end of the
seventeenth century. Boston's ports were apprehensive of the virus after receiving
reports of the devastation in the Caribbean, but the city itself was largely spared.
In 1692, the first account of yellow fever in North America was made by Cotton
Mather, who wrote of a fever that turned its victims yellow and caused vomiting and
bleeding every way. Yellow fever spread most rampantly in the eighteenth and
nineteenth centuries. West Africa became known as the so-called white man's
graveyard due to the high infection and fatality rates of yellow fever there. The
disease also began to spread to the Iberian Peninsula: Lisbon suffered an epidemic
in 1723, Malaga in 1741, and Cadiz had five separate outbreaks between 1700 and
1780. North American port cities began to contract the disease as well: Charleston
and Philadelphia had their first outbreaks in 1699, and New York in 1702 (McGrew,
1985, p. 358).

The mortality rate was very high: in the New York epidemic, for example, at
least 570 people died, an estimated 10% of the city's population at the time.
Outbreaks continued to spread across North American port cities and were much more
devastating in 1793, when yellow fever caused 400 recorded deaths. The most renowned
historical occurrence came in 1802, when Napoleon Bonaparte sent an army to Haiti.
The army succeeded in suppressing a native uprising led by Toussaint Louverture, but
almost the entire French force was killed by yellow fever. This was one of the
largest factors behind the Louisiana Purchase, in which Napoleon sold a large amount
of land to the United States to pay for his campaigns (McGrew, 1985, p. 359).

Serious outbreaks continued in the southeastern United States but effectively
ended in 1906, when the mosquito vectors were recognized and proper precautions were
taken. Since the 1980s, however, there has been a dramatic re-emergence of yellow
fever in Africa and South America. The disease is currently controlled through
mosquito eradication and the production of an effective vaccine. The main necessary
precaution is to make sure that susceptible travelers to tropical regions are
immunized (McGrew, 1985, p. 359).

Prevention and Treatment

Blood tests can confirm a diagnosis of yellow fever, but there is no specific
treatment. The basic treatment for yellow fever symptoms includes blood products
for severe bleeding, dialysis for kidney failure, and intravenous fluids. The best
way to prevent contracting yellow fever is vaccination.

Vaccination against yellow fever is extremely important for travelers going to
tropical areas known to have a history of the disease. The virus has only one
serotype, which made the production of a vaccine much easier. The original vaccine
was made in 1936 by serial passage of the virus in chicken embryo tissue. This
vaccine has proven 99% effective in over 400 million people. Approximately 250,000
people traveling from the United States are immunized every year.

The most critical time for immunization in yellow fever-prone areas is nine
months of age, because immunizing earlier risks viscerotropic disease, while waiting
too long risks infection and death (McGrew, 1985, p. 363). This vaccine-associated
disease was first investigated in 2001. Viscerotropic disease shows symptoms similar
to wild-type yellow fever and has a 60% fatality rate. However, there is only one
case of viscerotropic disease per every 200,000 to 400,000 vaccinations (one in
50,000 in people over sixty), although this still makes it one of the most dangerous
vaccines known. The 17D vaccine also contains egg proteins and gelatin, so people
with allergies to these ingredients should not take it (Massad, 2008, para. 1).
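
To put these rates in perspective, the short Python sketch below combines the
figures quoted in this chapter: one viscerotropic case per 200,000 to 400,000
vaccinations, a 60% fatality rate among those cases, and the roughly 250,000 U.S.
travelers immunized each year. The variable and function names are our own, chosen
only for illustration:

    # Back-of-the-envelope estimate of vaccine-associated viscerotropic
    # disease (VVD), using only the rates quoted in the text above.
    TRAVELERS_PER_YEAR = 250_000  # U.S. travelers immunized yearly
    VVD_FATALITY = 0.60           # fatality rate of viscerotropic disease

    def expected_events(vaccinated, rate):
        """Expected VVD cases and deaths among a vaccinated population."""
        cases = vaccinated * rate
        return cases, cases * VVD_FATALITY

    for label, rate in [("1 in 400,000", 1 / 400_000),
                        ("1 in 200,000", 1 / 200_000)]:
        cases, deaths = expected_events(TRAVELERS_PER_YEAR, rate)
        print(f"at {label}: {cases:.2f} expected cases, "
              f"{deaths:.2f} expected deaths per year")

Under these assumptions the vaccine would be expected to cause at most about one
viscerotropic case per year among U.S. travelers, a useful contrast with the roughly
100,000 wild-type cases reported worldwide each year.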

The disease, for the most part, can only be contracted in tropical areas by
travelers who have not been vaccinated and do not have natural immunities. It is
possibly one of the greatest natural defense mechanisms of tropical areas against
invaders. Notably, although the virus cannot be spread from person to person,
outbreaks were frequent because of the lack of knowledge regarding the mosquito
vectors. There are approximately 100,000 reported cases of yellow fever every year,
with an estimated 50% fatality rate. Vaccination, mosquito eradication, and the use
of protective mosquito nets are extremely important to keep yellow fever mortality
at a minimum (Harder, 2008, pp. 104-105).

Literature Cited

Bryant, J. E., Holmes, E. C., & Barrett, A. D. T. (2007). Out of Africa: A molecular
perspective on the introduction of yellow fever virus into the Americas.
http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1868956.
Retrieved March 3, 2008, from PubMed Central database (1868956).

Harder, B. (2008). Don't let the bugs bite: Can genetic engineering defeat diseases
spread by insects? Retrieved March 3, 2008, from JSTOR database.

Massad, E., Coutinho, F. A., Burattini, M. N., Lopez, L. F., & Struchiner, C. J.
(2008). Yellow fever vaccination: How much is enough?
http://www.ncbi.nlm.nih.gov/pubmed/15917112. Retrieved March 3, 2008, from PubMed
database.

McGrew, R. E. (1985). Encyclopedia of medical history. New York: McGraw-Hill Book
Company.

Monath, T. P. (2007). Dengue and yellow fever: Challenges for the development and
use of vaccines. http://content.nejm.org/cgi/content/full/357/22/2222. Retrieved
March 3, 2008, from The New England Journal of Medicine database.

Weir, E., & Haider, S. (2004). Yellow fever: Readily prevented but difficult to
treat.
http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=421715.
Retrieved March 3, 2008, from PubMed Central database (421715).
