

Global warming refers to the increase in the average temperature of the Earth's near-surface air and oceans in recent
decades and its projected continuation.
The global average air temperature near the Earth's surface rose 0.74 ± 0.18 °C (1.33 ± 0.32 °F) during the last 100
years. The Intergovernmental Panel on Climate Change (IPCC) concludes, "most of the observed increase in
globally averaged temperatures since the mid-20th century is very likely due to the observed increase in
anthropogenic greenhouse gas concentrations" via the greenhouse effect. Natural phenomena such as solar variation
combined with volcanoes probably had a small warming effect from pre-industrial times to 1950 and a small
cooling effect from 1950 onward. These basic conclusions have been endorsed by at least 30 scientific societies and
academies of science, including all of the national academies of science of the major industrialized countries. A few
individual scientists disagree with some of the main conclusions of the IPCC.
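The quoted warming and its uncertainty can be cross-checked by converting the Celsius interval to Fahrenheit. Since these are temperature changes, only the 9/5 scale factor applies, with no 32-degree offset; a minimal sketch:

```python
def c_change_to_f(delta_c, err_c):
    """Convert a temperature *change* and its uncertainty from Celsius to
    Fahrenheit. Changes scale by 9/5 only; the 32-degree offset applies
    solely to absolute temperatures."""
    return delta_c * 9 / 5, err_c * 9 / 5

delta_f, err_f = c_change_to_f(0.74, 0.18)
print(f"{delta_f:.2f} ± {err_f:.2f} °F")  # → 1.33 ± 0.32 °F
```

This reproduces the 1.33 ± 0.32 °F figure given above.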
Climate models referenced by the IPCC project that global surface temperatures are likely to increase by 1.1 to
6.4 °C (2.0 to 11.5 °F) between 1990 and 2100. The range of values results from the use of differing scenarios of
future greenhouse gas emissions as well as models with differing climate sensitivity. Although most studies focus
on the period up to 2100, warming and sea level rise are expected to continue for more than a millennium even if
greenhouse gas levels are stabilized. This reflects the large heat capacity of the oceans.
An increase in global temperatures is expected to cause other changes, including sea level rise, increased intensity
of extreme weather events, and changes in the amount and pattern of precipitation. Other effects include changes in
agricultural yields, glacier retreat, species extinctions and increases in the ranges of disease vectors.
Remaining scientific uncertainties include the amount of warming expected in the future, and how warming and
related changes will vary from region to region around the globe. There is ongoing political and public debate
worldwide regarding what, if any, action should be taken to reduce or reverse future warming or to adapt to its
expected consequences. Most national governments have signed and ratified the Kyoto Protocol, aimed at
reducing greenhouse gas emissions.


The greenhouse effect was discovered by Joseph Fourier in 1824 and was first investigated quantitatively by
Svante Arrhenius in 1896. It is the process by which absorption and emission of infrared radiation by atmospheric
gases warms a planet's atmosphere and surface.
Existence of the greenhouse effect as such is not disputed. Naturally occurring greenhouse gases have a mean
warming effect of about 30 °C (54 °F), without which Earth would be uninhabitable. The debate centers on how
the strength of the greenhouse effect is changed when human activity increases the atmospheric concentrations of
some greenhouse gases.
On Earth, the major greenhouse gases are water vapor, which causes about 36–70% of the greenhouse effect (not
including clouds); carbon dioxide (CO2), which causes 9–26%; methane (CH4), which causes 4–9%; and ozone,
which causes 3–7%. Some other naturally occurring gases contribute very small fractions of the greenhouse effect;
one of these, nitrous oxide (N2O), is increasing in concentration owing to human activity such as agriculture. The
atmospheric concentrations of CO2 and methane have increased by 31% and 149% respectively above
pre-industrial levels since 1750. These levels are considerably higher than at any time during the last 650,000 years,
the period for which reliable data has been extracted from ice cores. From less direct geological evidence it is
believed that CO2 values this high were last attained 20 million years ago. Fossil fuel burning has produced about
three-quarters of the increase in CO2 from human activity over the past 20 years. Most of the rest is due to land-use
change, in particular deforestation.
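The contribution ranges above are not additive shares of a fixed pie: the gases' absorption bands overlap, so the share attributed to each gas depends on the accounting order. Summing the endpoints of the quoted ranges makes this visible:

```python
# Quoted contribution ranges (percent of the greenhouse effect, clouds excluded).
ranges = {
    "water vapor": (36, 70),
    "carbon dioxide": (9, 26),
    "methane": (4, 9),
    "ozone": (3, 7),
}
low_total = sum(lo for lo, hi in ranges.values())
high_total = sum(hi for lo, hi in ranges.values())
# The low ends sum below 100% and the high ends above it because overlapping
# absorption bands mean the effect cannot be cleanly partitioned per gas.
print(low_total, high_total)  # → 52 112
```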
The present atmospheric concentration of CO2 is about 383 parts per million (ppm) by volume. Future CO2 levels
are expected to rise due to ongoing burning of fossil fuels and land-use change. The rate of rise will depend on
uncertain economic, sociological, technological, and natural developments, but may be ultimately limited by the
availability of fossil fuels. The IPCC Special Report on Emissions Scenarios gives a wide range of future CO2
scenarios, ranging from 541 to 970 ppm by the year 2100. Fossil fuel reserves are sufficient to reach this level and
continue emissions past 2100, if coal, tar sands or methane clathrates are extensively used.
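As a rough sense of scale (back-of-envelope arithmetic, not an IPCC method), the constant annual growth rate implied by each scenario endpoint can be backed out from the compound-growth relation, taking the 383 ppm figure above as the start and assuming roughly 93 years to 2100:

```python
def implied_annual_growth(c_now, c_future, years):
    """Constant annual growth rate r satisfying c_now * (1 + r)**years == c_future."""
    return (c_future / c_now) ** (1 / years) - 1

# The 93-year horizon (roughly 2007 to 2100) is an assumption for illustration.
for target_ppm in (541, 970):
    r = implied_annual_growth(383, target_ppm, 93)
    print(f"{target_ppm} ppm by 2100 implies roughly {100 * r:.2f}% growth per year")
```

The low scenario implies well under half a percent per year; the high scenario, about one percent per year.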
Positive feedback effects such as the expected release of methane from the melting of permafrost peat bogs in
Siberia (possibly up to 70,000 million tonnes) may lead to significant additional sources of greenhouse gas
emissions not included in climate models cited by the IPCC.

An earthquake is the result of a sudden release of stored energy in the Earth's crust that creates seismic waves.
Earthquakes are accordingly measured with a seismometer, commonly known as a seismograph. The magnitude of
an earthquake is conventionally reported using the Richter scale or the related moment magnitude scale (with magnitude 3 or
lower earthquakes being hard to notice and magnitude 7 causing serious damage over large areas).
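Both scales mentioned are logarithmic. Under the standard magnitude–energy relation, radiated seismic energy grows by a factor of about 10^1.5 ≈ 31.6 per whole magnitude unit, which is why the damage contrast between magnitude 3 and magnitude 7 is so stark:

```python
def energy_ratio(m_big, m_small):
    """Ratio of seismic energy radiated at magnitude m_big versus m_small,
    using the standard relation: energy grows by a factor of 10**1.5 per unit."""
    return 10 ** (1.5 * (m_big - m_small))

print(energy_ratio(7, 3))  # → 1000000.0, i.e. a million times the radiated energy
```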
At the Earth's surface, earthquakes may manifest themselves by a shaking or displacement of the ground.
Sometimes, they cause tsunamis, which may lead to loss of life and destruction of property. Most earthquakes are
caused by tectonic plates becoming stuck, which puts the surrounding rock under strain. The strain becomes so great
that rocks give way by breaking and sliding along fault planes.
Earthquakes may occur naturally or as a result of human activities. Smaller earthquakes can also be caused by
volcanic activity, landslides, mine blasts, and nuclear experiments. In its most generic sense, the word earthquake is
used to describe any seismic event, whether a natural phenomenon or an event caused by humans, that generates
seismic waves.
An earthquake's point of initial ground rupture is called its focus or hypocenter. The term epicenter means the point
at ground level directly above this.
There are many effects of earthquakes including, but not limited to, the following:
Shaking and ground rupture
Shaking and ground rupture are the main effects created by earthquakes, principally resulting in more or less severe
damage to buildings or other rigid structures. The severity of the local effects depends on the complex combination
of the earthquake magnitude, the distance from epicenter, and the local geological and geomorphological
conditions, which may amplify or reduce wave propagation. The ground-shaking is measured by ground acceleration.
Specific local geological, geomorphological, and geostructural features can induce high levels of shaking on the
ground surface even from low-intensity earthquakes. This effect is called site or local amplification. It is principally
due to the transfer of seismic motion from hard deep soils to soft superficial soils and to the focusing of seismic
energy owing to the typical geometrical setting of the deposits.


The Kyoto Protocol to the United Nations Framework Convention on Climate Change is an amendment to the
international treaty on climate change, assigning mandatory emission limitations for the reduction of greenhouse
gas emissions to the signatory nations.
The objective of the protocol is the "stabilization of greenhouse gas concentrations in the atmosphere at a level that
would prevent dangerous anthropogenic interference with the climate system."
As of December 2006, a total of 169 countries and other governmental entities have ratified the agreement
(representing over 61.6% of emissions from Annex I countries). Notable exceptions include the United States and
Australia. Other countries, like India and China, which have ratified the protocol, are not required to reduce carbon
emissions under the present agreement.
There is some debate about the usefulness of the protocol, and there have been some cost-benefit studies performed.
The treaty was negotiated in Kyoto, Japan in December 1997, opened for signature on March 16, 1998, and closed
on March 15, 1999. The agreement came into force on February 16, 2005 following ratification by Russia on
November 18, 2004.
Advocates of the Kyoto Protocol state that reducing these emissions is crucially important, as carbon dioxide is
causing the earth's atmosphere to heat up. This is supported by attribution analysis.
No country has passed national legislation requiring compliance with its treaty obligations. The governments of all
of the countries whose parliaments have ratified the Protocol are supporting it. Most prominent among advocates of
Kyoto have been the European Union and many environmentalist organizations. The United Nations and some
individual nations' scientific advisory bodies (including the G8 national science academies) have also issued
reports favoring the Kyoto Protocol.
An international day of action was planned for 3 December 2005, to coincide with the Meeting of the Parties in
Montreal. The planned demonstrations were endorsed by the Assembly of Movements of the World Social Forum.
A group of major Canadian corporations also called for urgent action regarding climate change, and have suggested
that Kyoto is only a first step.
In the United States, there is at least one student group, Kyoto Now!, which aims to use student interest to build
pressure for emission reductions in line with the Kyoto Protocol's targets.

Coral reefs are aragonite structures produced by living organisms, found in shallow, tropical marine waters with
little to no nutrients in the water. High nutrient levels such as those found in runoff from agricultural areas can harm
the reef by encouraging the growth of algae. In most reefs, the predominant organisms are stony corals, colonial
cnidarians that secrete an exoskeleton of calcium carbonate (limestone). The accumulation of skeletal material,
broken and piled up by wave action and bioeroders, produces a massive calcareous formation that supports the
living corals and a great variety of other animal and plant life. Although corals are found both in temperate and
tropical waters, reefs are formed only in a zone extending at most from 30°N to 30°S of the equator; the
reef-forming corals do not grow at depths of over 30 m (100 ft) or where the water temperature falls below 16 °C
(61 °F).
Coral reefs are estimated to cover 284,300 square kilometres, with the Indo-Pacific region (including the Red Sea,
Indian Ocean, Southeast Asia and the Pacific) accounting for 91.9% of the total. Southeast Asia accounts for 32.3%
of that figure, while the Pacific including Australia accounts for 40.8%. Atlantic and Caribbean coral reefs only
account for 7.6% of the world total (Spalding et al., 2001).
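The regional percentages above can be turned into absolute areas of the 284,300 km² total. (The text is ambiguous about whether the Southeast Asia and Pacific figures are shares of the world total or of the Indo-Pacific; the sketch below assumes shares of the world total, which is how Spalding et al. report them.)

```python
TOTAL_KM2 = 284_300  # estimated world coral reef area

shares_of_total = {
    "Indo-Pacific region": 0.919,
    "Southeast Asia": 0.323,
    "Pacific incl. Australia": 0.408,
    "Atlantic and Caribbean": 0.076,
}
for region, frac in shares_of_total.items():
    print(f"{region}: about {TOTAL_KM2 * frac:,.0f} km²")
```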
Coral reefs are either restricted or absent along the west coast of the Americas, as well as the west coast of
Africa. This is due primarily to upwelling and strong cold coastal currents that reduce water temperatures in these
areas (Nybakken, 1997). Corals are likewise absent off the coastline of South Asia from Pakistan to Bangladesh
(Spalding et al., 2001), and along the coasts of north-eastern South America and Bangladesh, owing to the release
of vast quantities of freshwater from the Amazon and Ganges Rivers respectively.
Human activity continues to represent the single greatest threat to coral reefs living in Earth's oceans. In particular,
pollution and over-fishing are the most serious threats to these ecosystems. Physical destruction of reefs due to boat
and shipping traffic is also a problem. The live food fish trade has been implicated as a driver of decline due to the
use of cyanide and other chemicals in the capture of small fishes. Finally, above-normal water temperatures, due to
climate phenomena such as El Niño and global warming, can cause coral bleaching. According to The Nature
Conservancy, if destruction increases at the current rate, 70% of the world's coral reefs will have disappeared within
50 years. This loss would be an economic disaster for people living in the tropics. Hughes et al. (2003) write that
"with increased human population and improved storage and transport systems, the scale of human impacts on reefs
has grown exponentially. For example, markets for fishes and other natural resources have become global,
supplying demand for reef resources far removed from their tropical sources".
Currently researchers are working to determine the degree to which various factors impact the reef systems. The list
of factors is long, including the role of the oceans as a carbon dioxide sink, changes in Earth's atmosphere, ultraviolet
light, ocean acidification, viral diseases, the impact of dust storms carrying agents to far-flung reef systems, various
pollutants, the impact of algal blooms, and others. Reefs are threatened well beyond coastal areas, so the problem is
broader than factors from land development and pollution, though those, too, cause considerable damage.

For a long time, the fundamental question regarding the history of the Moon was of its origin. Early hypotheses
included fission from the Earth, capture, and co-accretion. Today, the giant impact hypothesis is widely accepted by
the scientific community.
Fission hypothesis
The idea that the early Earth, with an accelerated rotation, expelled a piece of its mass was proposed by George
Darwin (son of the famous biologist Charles Darwin). It was commonly assumed that the Pacific Ocean represented
the scar of this event. However, today it is known that the oceanic crust that makes up this ocean basin is relatively
young, about 200 million years old or less, whereas the Moon is much older. This hypothesis can not account for the
angular momentum of the Earth-Moon system.
Lunar capture
This hypothesis states that the Moon was captured, completely formed, by the gravitational field of the Earth. This
is unlikely, since a close encounter with the Earth would have produced either a collision or an alteration of the
trajectory of the body in question, so had such an encounter occurred, the Moon would probably never have
returned to meet the Earth again. For this hypothesis to work, there would have to have been a large atmosphere
extended around the primitive Earth, able to slow the movement of the Moon before it could escape. This hypothesis is
considered to explain the irregular satellite orbits of Jupiter and Saturn; nevertheless, it is very difficult to believe
that this would explain the origin of our moon. In addition, this hypothesis has difficulty explaining the similar
oxygen isotope ratio of the two worlds.
Co-accretion hypothesis
This hypothesis states that the Earth and the Moon formed together as a double system from the primordial
accretion disk of the Solar System. The problem with this hypothesis is that it does not explain the angular
momentum of the Earth-Moon system, nor why the Moon is depleted in metallic iron.
Giant impact theory
At present the best explanation for the origin of the Moon involves a collision of two protoplanetary bodies during
the early accretional period of Solar system evolution. This "giant impact theory", which became popular in 1984
(although it originated in the mid-1970s) satisfies the orbital conditions of the Earth and Moon and can account for
the relatively small metallic core of the Moon. Collisions between planetesimals are now recognized to lead to the
growth of planetary bodies early in the evolution of the solar system, and in this framework it is inevitable that large
impacts will sometimes occur when the planets are nearly formed.

The theory requires a collision between a body about 90% the present size of the Earth, and another the diameter of
Mars (half of the terrestrial radius and a tenth of its mass). The colliding body has sometimes been referred to as
Theia, the mother of Selene, the Moon goddess in Greek mythology. This size ratio is needed in order for the
resulting system to possess sufficient angular momentum to match the current orbital configuration. Such an impact
would have put enough material into orbit about the Earth to have eventually accumulated to form the Moon.
Computer simulations of this event appear to show that the collision must occur with a somewhat glancing blow.
This will cause a small portion of the colliding body to form a long arm of material that will then shear off. The
asymmetrical shape of the Earth following the collision then causes this material to settle into an orbit around the
main mass. The energy involved in this collision is impressive: trillions of tons of material would have been
vaporized and melted. In parts of the Earth, the temperature would have risen to 10,000 °C.
This formation theory helps explain why the Moon possesses only a small iron core (roughly 25% of its radius, in
comparison to about 50% for the Earth). Most of the iron core from the impacting body is predicted to have accreted
to the core of the Earth. The lack of volatiles in the lunar samples is also in part explained by the energy of the
collision. The energy liberated during the re-accretion of material in orbit about the Earth would have been
sufficient to melt a large portion of the Moon, leading to the generation of a magma ocean.
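Because volume goes as the cube of radius, the core radii quoted below (about 25% of the lunar radius versus about 50% of Earth's) imply a far larger disparity in core volume; a quick check:

```python
def core_volume_fraction(radius_fraction):
    """Fraction of a planet's volume occupied by a core whose radius is the
    given fraction of the planet's radius (volume scales as radius cubed)."""
    return radius_fraction ** 3

moon_core = core_volume_fraction(0.25)   # lunar core ~25% of lunar radius
earth_core = core_volume_fraction(0.50)  # Earth's core ~50% of Earth's radius
print(f"Moon core: {moon_core:.1%} of volume, Earth core: {earth_core:.1%} of volume")
# → Moon core: 1.6% of volume, Earth core: 12.5% of volume
```

So a factor of two in relative core radius becomes a factor of eight in relative core volume.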
The newly formed Moon orbited at about one-tenth the distance that it does today, and became tidally locked with
the Earth, so that one side continually faces toward the Earth. The geology of the Moon has since been independent
of the Earth. While this theory explains many aspects of the Earth-Moon system, there are still a few unresolved
problems facing this theory, such as the Moon's volatile elements not being as depleted as expected from such an
energetic impact.

Drip irrigation, also known as trickle irrigation or microirrigation, is an irrigation method that minimizes the use of
water and fertilizer by allowing water to drip slowly to the roots of plants, either onto the soil surface or directly
onto the root zone, through a network of valves, pipes, tubing, and emitters.
Modern drip irrigation has arguably become the most important innovation in agriculture since the invention of the
impact sprinkler in the 1930s, which replaced wasteful flood irrigation. Drip irrigation may also use devices called
micro-spray heads, which spray water in a small area, instead of dripping emitters. These are generally used on tree
and vine crops with wider root zones. Subsurface drip irrigation or SDI uses permanently or temporarily buried
dripperline or drip tape located at or below the plant roots. It is becoming more widely used for row crop irrigation
especially in areas where water supplies are limited or recycled water is used for irrigation. Careful study of all the
relevant factors, such as land topography, soil, water, crop, and agro-climatic conditions, is needed to determine the
most suitable drip irrigation system and components to be used in a specific installation.
Drip irrigation has been used since ancient times when buried clay pots were filled with water and the water
gradually seeped into the soil. Modern drip irrigation began its development in Germany in 1860 when researchers
began experimenting with subirrigation using clay pipe to create combination irrigation and drainage systems. In
1913, E.B. House at Colorado State University succeeded in applying water to the root zone of plants without
raising the water table. Perforated pipe was introduced in Germany in the 1920s and in 1934, O.E. Robey
experimented with irrigating through porous canvas hose at Michigan State University.
With the advent of modern plastics during and after World War II, major improvements in drip irrigation became
possible. Plastic microtubing and various types of emitters began to be used in the greenhouses of Europe and the
United States.
The advantages of drip irrigation are:

Minimized fertilizer/nutrient loss due to localized application and reduced leaching.

High water distribution efficiency.

Leveling of the field not necessary.

Allows safe use of recycled water.

Moisture within the root zone can be maintained at field capacity.

Soil type plays a less important role in the frequency of irrigation.

Minimized soil erosion.

Highly uniform distribution of water i.e., controlled by output of each nozzle.

Lower labour cost.

Variation in supply can be regulated by adjusting the valves and drippers.

Fertigation can easily be included with minimal waste of fertilizers.

Early maturity and a bountiful harvest (season after season, year after year)

The disadvantages of drip irrigation are:

Expense. Initial cost can be more than overhead systems.

Waste. The plastic tubing and "tapes" generally last 1-3 seasons before being replaced.

Clogging, if the water is not properly filtered and the equipment not properly maintained.

Drip irrigation might be unsatisfactory if herbicides or top-dressed fertilizers need sprinkler irrigation for activation.

Drip tape causes extra cleanup costs after harvest. You'll need to plan for drip tape winding, disposal,
recycling or reuse.

Waste of water, time and harvest, if not installed properly. These systems require careful study of all the
relevant factors like land topography, soil, water, crop and agro-climatic conditions, and suitability of the drip
irrigation system and its components.

Rainforests, or rain forests, are forests characterized by high rainfall, with definitions setting minimum normal
annual rainfall between 1750 mm and 2000 mm (68 inches to 78 inches).
Rainforests are home to two-thirds of all the living animal and plant species on the planet. It has been estimated that
many hundreds of millions of new species of plants, insects and microorganisms are still undiscovered. Tropical
rain forests are called the "jewels of the earth", and the "world's largest pharmacy" because of the large amount of
natural medicines discovered there. Tropical rain forests are also often called the "Earth's lungs"; however, there is
no scientific basis for such a claim, as tropical rainforests are known to be essentially oxygen neutral, with little or
no net oxygen production.
The undergrowth in a rainforest is restricted in many areas by the lack of sunlight at ground level. This makes it
possible for people and other animals to walk through the forest. If the leaf canopy is destroyed or thinned for any
reason, the ground beneath is soon colonized by a dense tangled growth of vines, shrubs and small trees called
jungle.
Contrary to popular belief, rainforests are not major consumers of carbon dioxide and, like all mature forests, are
approximately carbon neutral. Recent evidence suggests that rainforests are in fact net carbon emitters of between
100 million and 18 billion tonnes of carbon annually. However, rainforests do play a major role in
the global carbon cycle as stable carbon pools. Clearance of rainforest leads to increased levels of atmospheric
carbon dioxide. Rainforests may also play a role in cooling air that passes through them. As such, rainforests are of
vital importance within the global climate system.
Tropical and temperate rain forests have been subjected to heavy logging and agricultural clearance throughout the
20th century, and the area covered by rainforests around the world is rapidly shrinking. Biologists have estimated
that large numbers of species are being driven to extinction (possibly more than 50,000 a year) due to the removal
of habitat with destruction of the rainforests [1]. Protection and regeneration of the rainforests is a key goal of many
environmental charities and organizations. (It is doubtful that this rate will be sustained as the relative cost of
logging rises with dwindling resources.)
Another factor causing the loss of rainforest is expanding urban areas. Littoral Rainforest growing along coastal
areas of eastern Australia is now rare due to ribbon development to accommodate the demand for seachange
lifestyles.
About half of the mature tropical rainforests, between 750 and 800 million hectares of the original 1.5 to 1.6 billion
hectares that once graced the planet, have already been felled. The devastation is already acute in South East Asia,
the second of the world's great biodiversity hot spots. Most of what remains is in the Amazon basin, where the
Amazon rainforest covered more than 600 million hectares, an area nearly two thirds the size of the United States.
The forests are being destroyed at an ever-quickening pace.
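The figures in the paragraph above are mutually consistent, which a short check confirms; the US land area used for comparison (roughly 980 million hectares, about 9.8 million km²) is an outside assumption, not a figure from the text:

```python
ORIGINAL_MHA = (1500, 1600)  # original mature tropical rainforest, million hectares
FELLED_MHA = (750, 800)      # already felled, million hectares
AMAZON_MHA = 600             # Amazon rainforest extent, million hectares
US_AREA_MHA = 980            # approximate US area (assumed, ~9.8 million km²)

felled_low = FELLED_MHA[0] / ORIGINAL_MHA[0]
felled_high = FELLED_MHA[1] / ORIGINAL_MHA[1]
print(f"felled: {felled_low:.0%}-{felled_high:.0%} of the original extent")  # 50% at both ends
print(f"Amazon vs US: {AMAZON_MHA / US_AREA_MHA:.0%}")  # ~61%, i.e. 'nearly two thirds'
```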

In biology, evolution is the change in the inherited traits of a population from generation to generation. These traits
are the expression of genes that are copied and passed on to offspring during reproduction. Mutations in these genes
can produce new or altered traits, resulting in heritable differences (genetic variation) between organisms. New
traits can also come from transfer of genes between populations, as in migration, or between species, in horizontal
gene transfer. Evolution occurs when these heritable differences become more common or rare in a population,
either non-randomly through natural selection or randomly through genetic drift.
Natural selection is a process that causes heritable traits that are helpful for survival and reproduction to become
more common, and harmful traits to become rarer. This occurs because organisms with advantageous traits pass on
more copies of these traits to the next generation.[1][2] Over many generations, adaptations occur through a
combination of successive, small, random changes in traits, and natural selection of those variants best-suited for
their environment.[3] In contrast, genetic drift produces random changes in the frequency of traits in a population.
Genetic drift arises from the element of chance involved in which individuals survive and reproduce.
One definition of a species is a group of organisms that can reproduce with one another and produce fertile
offspring. However, when a species is separated into populations that are prevented from interbreeding, mutations,
genetic drift, and the selection of novel traits cause the accumulation of differences over generations and the
emergence of new species.[4] The similarities between organisms suggest that all known species are descended from
a common ancestor (or ancestral gene pool) through this process of gradual divergence.[1]
The theory of evolution by natural selection was first proposed by Charles Darwin and Alfred Russel Wallace and
set out in detail in Darwin's 1859 book On the Origin of Species.[5] In the 1930s, Darwinian natural selection was
combined with Mendelian inheritance to form the modern evolutionary synthesis,[3] in which the connection
between the units of evolution (genes) and the mechanism of evolution (natural selection) was made. This powerful
explanatory and predictive theory has become the central organizing principle of modern biology, providing a
unifying explanation for the diversity of life on Earth.

In biology and ecology, extinction is the cessation of existence of a species or group of taxa, reducing biodiversity.
The moment of extinction is generally considered to be the death of the last individual of that species (although the
capacity to breed and recover may have been lost before this point). Because a species' potential range may be very
large, determining this moment is difficult, and is usually done retrospectively. This difficulty leads to phenomena
such as Lazarus taxa, where a species presumed extinct abruptly "re-appears" (typically in the fossil record) after a
period of apparent absence.
Through evolution, new species arise through the process of speciation where new varieties of organisms arise
and thrive when they are able to find and exploit an ecological niche and species become extinct when they are
no longer able to survive in changing conditions or against superior competition. A typical species becomes extinct
within 10 million years of its first appearance, although some species, called living fossils, survive virtually
unchanged for hundreds of millions of years. Only one in a thousand species that have existed remain today.
Prior to the dispersion of humans across the earth, extinction generally occurred at a continuous low rate, mass
extinctions being relatively rare events. Starting approximately 100,000 years ago, and coinciding with an increase
in the numbers and range of humans, species extinctions have increased to a rate unprecedented since the
CretaceousTertiary extinction event. This is known as the Holocene extinction event and is at least the sixth such
extinction event. Some experts have estimated that up to half of presently existing species may become extinct by 2100.
There are a variety of causes that can contribute directly or indirectly to the extinction of a species or group of
species. "Just as each species is unique," write Beverly and Stephen Stearns, "so is each extinction... the causes for
each are varied: some subtle and complex, others obvious and simple". Most simply, any species that is unable to
survive or reproduce in its environment, and unable to move to a new environment where it can do so, dies out and
becomes extinct. Extinction of a species may come suddenly when an otherwise healthy species is wiped out
completely, as when toxic pollution renders its entire habitat unlivable; or may occur gradually over thousands or
millions of years, such as when a species gradually loses out in competition for food to better-adapted competitors.
Currently, environmental groups and some governments are concerned with the extinction of species caused by
humanity, and are attempting to combat further extinctions through a variety of conservation programs. Humans can
cause extinction of a species through overharvesting, pollution, habitat destruction, introduction of new predators
and food competitors, and other influences. According to the World Conservation Union (WCU, also known as
IUCN), 784 extinctions have been recorded since the year 1500, the arbitrary date selected to define "modern"
extinctions, with many more likely to have gone unnoticed.

A popular way of classifying magmatic volcanoes is by their frequency of eruption: those that erupt regularly are
called active, those that have erupted in historical times but are now quiet are called dormant, and those that have
not erupted in historical times are called extinct. However, these popular classifications (extinct in particular) are
practically meaningless to scientists. They use classifications which refer to a particular volcano's formative and
eruptive processes and resulting shapes, as explained above.
There is no real consensus among volcanologists on how to define an "active" volcano. The lifespan of a volcano
can vary from months to several million years, making such a distinction sometimes meaningless when compared to
the lifespans of humans or even civilizations. For example, many of Earth's volcanoes have erupted dozens of times
in the past few thousand years but are not currently showing signs of eruption. Given the long lifespan of such
volcanoes, they are very active. By our lifespans, however, they are not. Complicating the definition are volcanoes
that become restless (producing earthquakes, venting gases, or other non-eruptive activities) but do not actually
erupt.
Scientists usually consider a volcano active if it is currently erupting or showing signs of unrest, such as unusual
earthquake activity or significant new gas emissions. Many scientists also consider a volcano active if it has erupted
in historic time. It is important to note that the span of recorded history differs from region to region; in the
Mediterranean, recorded history reaches back more than 3,000 years but in the Pacific Northwest of the United
States, it reaches back less than 300 years, and in Hawaii, little more than 200 years. The Smithsonian Global
Volcanism Program's definition of 'active' is having erupted within the last 10,000 years.
Dormant volcanoes are those that are not currently active (as defined above), but could become restless or erupt
again. Confusion however, can arise because many volcanoes which scientists consider to be active are referred to
as dormant by laypersons or in the media.
Extinct volcanoes are those that scientists consider unlikely to erupt again. Whether a volcano is truly extinct is
often difficult to determine. Since "supervolcano" calderas can have eruptive lifespans sometimes measured in
millions of years, a caldera that has not produced an eruption in tens of thousands of years is likely to be considered
dormant instead of extinct.
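The rough working definitions above can be sketched as a toy rule-based classifier. This is an illustration only: the function and argument names are invented here, and the 10,000-year cutoff is the Smithsonian Global Volcanism Program threshold mentioned earlier.

```python
def classify_volcano(years_since_last_eruption, currently_restless=False,
                     considered_unlikely_to_erupt=False):
    """Toy classifier following the rough definitions in the text."""
    # "Active": currently restless/erupting, or erupted within the last
    # 10,000 years (the Smithsonian Global Volcanism Program threshold).
    if currently_restless or (years_since_last_eruption is not None
                              and years_since_last_eruption <= 10_000):
        return "active"
    # "Extinct": judged unlikely ever to erupt again.
    if considered_unlikely_to_erupt:
        return "extinct"
    # "Dormant": not currently active, but could become restless or erupt.
    return "dormant"

# Yellowstone: lava flows ~70,000 years ago, hydrothermal eruptions more
# recently. Ignoring the hydrothermal activity, these rules call it dormant,
# which matches why scientists do not consider it extinct.
print(classify_volcano(70_000))  # dormant
```

A real assessment weighs seismicity, gas emissions, and deformation, not a single date, which is exactly why the text notes there is no consensus definition.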
For example, the Yellowstone Caldera in Yellowstone National Park is at least 2 million years old and has not erupted
violently for approximately 640,000 years, although there has been some minor activity relatively recently, with
hydrothermal eruptions less than 10,000 years ago and lava flows about 70,000 years ago. For this reason, scientists
do not consider the Yellowstone Caldera extinct.

A meteorite is a natural object originating in outer space that survives an impact with the Earth's surface without
being destroyed. While in space it is called a meteoroid. When it enters the atmosphere, air resistance causes the
body to heat up and emit light, thus forming a fireball, also known as a meteor or shooting star. The term bolide
refers to either an extraterrestrial body that collides with the Earth, or to an exceptionally bright, fireball-like meteor
regardless of whether it ultimately impacts the surface. The light is produced by the glowing meteoroid and the heated air around it, not by the surviving meteorite.
One of the leading theories for the cause of the Cretaceous-Tertiary mass extinction that included the dinosaurs is a
large meteorite impact. The Chicxulub Crater has been identified as the site of this impact. There has been a lively
scientific debate as to whether other major extinctions, including the ones at the end of the Permian and Triassic
periods, might also have been the result of large impact events, but the evidence is much less compelling than for the
end-Cretaceous extinction. Tollmann's hypothetical bolide is one such meteorite that some speculate had a major
impact on worldwide geology, although there is no direct evidence that any such meteorite ever existed.
There are several reported instances of falling meteorites having killed both people and livestock, but a few of these
appear more credible than others. The most infamous reported fatality from a meteorite impact is that of a dog in
Egypt, killed in 1911, although this report is highly disputed. That particular meteorite fall was
identified in the 1980s as Martian in origin. However, there is substantial evidence that the meteorite known as
Valera hit and killed a cow upon impact, nearly dividing the animal in two, and similar unsubstantiated reports of a
horse being struck and killed by a stone of the New Concord fall also abound. Throughout history, many first- and
second-hand reports of meteorites falling on and killing both humans and other animals abound, but none have been
well documented.
The first known modern case of a human hit by a space rock occurred on 30 November 1954 in Sylacauga,
Alabama. There a 4 kg stone chondrite crashed through a roof and hit Ann Hodges in her living room after it
bounced off her radio. She was badly bruised.
Other than the Sylacauga event, the most plausible of these claims was put forth by a young boy who stated that he
had been hit by a small (~3 gram) stone of the Mbale meteorite fall from Uganda, and who stood to gain nothing
from this assertion. The stone reportedly fell through a number of banana leaves before striking the boy on the head,
causing little to no pain, as it was small enough to have been slowed both by friction with the atmosphere and by
the leaves. Although it is impossible to prove the claim either way, he had little reason to lie about such an event.
Several persons have since claimed to have been struck by 'meteorites', but none of these claims has been verified.

Carbon dioxide is a colorless, odorless gas. When inhaled at concentrations higher than usual atmospheric levels, it
can produce a sour taste in the mouth and a stinging sensation in the nose and throat. These effects result from the
gas dissolving in the mucous membranes and saliva, forming a weak solution of carbonic acid. This sensation can
also occur during an attempt to stifle a burp after drinking a carbonated beverage. Amounts above 800 ppm are
considered unhealthy, amounts above 5,000 ppm are considered very unhealthy, and those above about 50,000 ppm
are considered dangerous to animal life.
At standard temperature and pressure, the density of carbon dioxide is around 1.98 kg/m³, about 1.5 times that of
air. The carbon dioxide molecule (O=C=O) contains two double bonds and has a linear shape. It has no electrical
dipole, and as it is fully oxidized, it is not very reactive and is non-flammable.
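These figures can be checked with the ideal gas law, ρ = PM/RT. A quick sketch, taking STP as 0 °C and 1 atm; CO2 deviates slightly from ideality, which is why the ideal-gas estimate comes out a little under the measured 1.98 kg/m³:

```python
# Ideal-gas estimate of gas density at STP (0 degrees C, 101.325 kPa):
# rho = P * M / (R * T)
R = 8.314        # J/(mol*K), universal gas constant
T = 273.15       # K
P = 101_325      # Pa
M_co2 = 0.04401  # kg/mol, molar mass of CO2
M_air = 0.02897  # kg/mol, mean molar mass of dry air

rho_co2 = P * M_co2 / (R * T)
rho_air = P * M_air / (R * T)
print(f"{rho_co2:.2f} kg/m^3")      # 1.96, close to the measured 1.98
print(f"{rho_co2 / rho_air:.2f}x")  # 1.52, i.e. about 1.5 times air
```

The density ratio reduces to the ratio of molar masses (44.01/28.97), since pressure and temperature cancel.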
At −78.5 °C, carbon dioxide changes directly from a solid phase to a gaseous phase through sublimation, or from
gaseous to solid through deposition. Solid carbon dioxide is normally called "dry ice", a generic trademark. It was
first observed in 1825 by the French chemist Charles Thilorier. Dry ice is commonly used as a versatile cooling
agent, and it is relatively inexpensive. As it warms, solid carbon dioxide sublimes directly into the gas phase,
making its use convenient as it leaves no liquid. It can often be found in groceries and laboratories, and it is also
used in the shipping industry. The largest non-cooling use for dry ice is blast cleaning.
Carbon dioxide was one of the first gases to be described as a substance distinct from air. In the seventeenth century,
the Flemish chemist Jan Baptist van Helmont observed that when he burned charcoal in a closed vessel, the mass of
the resulting ash was much less than that of the original charcoal. His interpretation was that the rest of the charcoal
had been transmuted into an invisible substance he termed a "gas" or "wild spirit" (spiritus sylvestre).
The properties of carbon dioxide were studied more thoroughly in the 1750s by the Scottish physician Joseph Black.
He found that limestone (calcium carbonate) could be heated or treated with acids to yield a gas he called "fixed
air." He observed that the fixed air was denser than air and did not support either flame or animal life. He also found
that when bubbled through an aqueous solution of lime (calcium hydroxide), it would precipitate calcium carbonate.
He used this phenomenon to illustrate that carbon dioxide is produced by animal respiration and microbial
fermentation. In 1772, English chemist Joseph Priestley published a paper entitled Impregnating Water with Fixed
Air in which he described a process of dripping sulfuric acid (or oil of vitriol as Priestley knew it) on chalk in order
to produce carbon dioxide, and forcing the gas to dissolve by agitating a bowl of water in contact with the gas.

A wind turbine is a machine that converts the kinetic energy in wind into mechanical energy. If the mechanical
energy is used directly by machinery, such as a pump or grinding stones, the machine is usually called a windmill. If
the mechanical energy is then converted to electricity, the machine is called a wind generator.
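The kinetic power flowing through a rotor of swept area A in wind of speed v and air density ρ is P = ½ρAv³, and the Betz limit caps the fraction any turbine can extract at 16/27 (about 59.3%). A minimal sketch of that calculation; the 23 m rotor diameter matches the early machines described later in this section, while the 8 m/s wind speed is an assumed example value:

```python
import math

def wind_power_w(rotor_diameter_m, wind_speed_ms, rho=1.225, cp=16 / 27):
    """Power a turbine extracts from the wind: P = cp * 0.5 * rho * A * v**3.

    cp defaults to the Betz limit (16/27, ~59.3%), the theoretical maximum
    fraction of the wind's kinetic power any turbine can capture; real
    machines achieve considerably less.
    """
    swept_area = math.pi * (rotor_diameter_m / 2) ** 2  # rotor disc area, m^2
    return cp * 0.5 * rho * swept_area * wind_speed_ms ** 3

# A 23 m rotor (like the largest 1908 machines) in an 8 m/s wind:
# ~77 kW at the Betz limit, versus the ~25 kW those machines delivered.
print(round(wind_power_w(23, 8) / 1000))  # 77
```

The cubic dependence on wind speed is why siting matters so much: doubling the average wind speed multiplies the available power by eight.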
This article discusses the energy-conversion machinery. See the broader article on wind power for more on turbine
placement, economics, public concerns, and controversy: in particular, see the wind energy section of that article for
an understanding of the temporal distribution of wind energy and how that affects wind-turbine design. See
environmental concerns with electricity generation for a discussion of environmental problems with wind energy.
Wind turbines can be separated into two types based on the axis about which the turbine rotates. Turbines that rotate
around a horizontal axis are more common. Vertical-axis turbines are less frequently used.
Wind turbines can also be classified by the location in which they are to be used. Onshore, offshore, or even aerial
wind turbines have unique design characteristics, which are explained in more detail in the section on turbine design
and construction.
Wind turbines may also be used in conjunction with a solar collector to extract the energy due to air heated by the
Sun and rising through a large vertical solar updraft tower.
Wind machines were used for grinding grain in Persia as early as 200 B.C. This type of machine was introduced
into the Roman Empire by 250 A.D. By the 14th century Dutch windmills were in use to drain areas of the Rhine
River delta. In Denmark by 1900 there were about 2500 windmills for mechanical loads such as pumps and mills,
producing an estimated combined peak power of about 30 MW. The first windmill for electricity production was
built in Cleveland, Ohio by Charles F Brush in 1888, and in 1908 there were 72 wind-driven electric generators
from 5 kW to 25 kW. The largest machines were on 24 m (79 ft) towers with four-bladed rotors 23 m (75 ft) in diameter.
By the 1930s windmills were mainly used to generate electricity on farms, mostly in the United States where
distribution systems had not yet been installed. In this period, high-tensile steel was cheap, and windmills were
placed atop prefabricated open steel lattice towers. A forerunner of modern horizontal-axis wind generators was in
service at Yalta, USSR in 1931. This was a 100 kW generator on a 30 m (100 ft) tower, connected to the local 6.3
kV distribution system. It was reported to have an annual load factor of 32 per cent, not much different from current
wind machines.