


Engineering, term applied to the profession in which a knowledge of the mathematical and
natural sciences, gained by study, experience, and practice, is applied to the efficient use of
the materials and forces of nature. The term engineer properly denotes a person who has
received professional training in pure and applied science, but is often loosely used to describe
the operator of an engine, as in the terms locomotive engineer, marine engineer, or stationary
engineer. In modern terminology these latter occupations are known as crafts or trades.
Between the professional engineer and the craftsperson or tradesperson, however, are those
individuals known as subprofessionals or paraprofessionals, who apply scientific and
engineering skills to technical problems; typical of these are engineering aides, technicians,
inspectors, draftsmen, and the like.

Before the middle of the 18th century, large-scale construction work was usually placed in the
hands of military engineers. Military engineering involved such work as the preparation of
topographical maps; the location, design, and construction of roads and bridges; and the
building of forts and docks; see Military Engineering below. In the 18th century, however, the
term civil engineering came into use to describe engineering work that was performed by
civilians for nonmilitary purposes. With the increasing use of machinery in the 19th century,
mechanical engineering was recognized as a separate branch of engineering, and later mining
engineering was similarly recognized.

The technical advances of the 19th century greatly broadened the field of engineering and
introduced a large number of engineering specialties, and the rapidly changing demands of the
socioeconomic environment in the 20th century have widened the scope even further.


The main branches of engineering are discussed below in alphabetical order. The engineer who
works in any of these fields usually requires a basic knowledge of the other engineering fields,
because most engineering problems are complex and interrelated. Thus a chemical engineer
designing a plant for the electrolytic refining of metal ores must deal with the design of
structures, machinery, and electrical devices, as well as with purely chemical problems.

Besides the principal branches discussed below, engineering includes many more specialties
than can be described here, such as acoustical engineering (see Acoustics), architectural
engineering (see Architecture: Construction), automotive engineering, ceramic engineering,
transportation engineering, and textile engineering.
A Aeronautical and Aerospace Engineering

Aeronautics deals with the whole field of design, manufacture, maintenance, testing, and use
of aircraft for both civilian and military purposes. It involves the knowledge of aerodynamics,
structural design, propulsion engines, navigation, communication, and other related areas.
See Airplane; Aviation.

Aerospace engineering is closely allied to aeronautics, but is concerned with the flight of
vehicles in space, beyond the earth's atmosphere, and includes the study and development of
rocket engines, artificial satellites, and spacecraft for the exploration of outer space. See
Space Exploration.

B Chemical Engineering

This branch of engineering is concerned with the design, construction, and management of
factories in which the essential processes consist of chemical reactions. Because of the
diversity of the materials dealt with, the practice, for more than 50 years, has been to analyze
chemical engineering problems in terms of fundamental unit operations or unit processes such
as the grinding or pulverizing of solids. It is the task of the chemical engineer to select and
specify the design that best meets the particular requirements of production and the
equipment most appropriate to the application.

With the advance of technology, the number of unit operations increases, but of continuing
importance are distillation, crystallization, dissolution, filtration, and extraction. In each unit
operation, engineers are concerned with four fundamentals: (1) the conservation of matter;
(2) the conservation of energy; (3) the principles of chemical equilibrium; (4) the principles of
chemical reactivity. In addition, chemical engineers must organize the unit operations in their
correct sequence, and they must consider the economic cost of the overall process. Because a
continuous, or assembly-line, operation is more economical than a batch process, and is
frequently amenable to automatic control, chemical engineers were among the first to
incorporate automatic controls into their designs.
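
The first two fundamentals, conservation of matter and of energy, can be illustrated with a
steady-state mass balance around a single unit operation. The sketch below is illustrative
only; the unit and the flow rates are assumed, not taken from any particular process:

```python
def mass_balance(feed_kg_h, overhead_kg_h):
    """Steady-state mass balance around one unit operation:
    what enters must leave, so bottoms = feed - overhead."""
    if overhead_kg_h > feed_kg_h:
        raise ValueError("overhead stream cannot exceed the feed")
    return feed_kg_h - overhead_kg_h

# Example: a separation unit fed 1000 kg/h, taking 400 kg/h overhead
bottoms = mass_balance(1000.0, 400.0)  # 600.0 kg/h must leave as bottoms
```

The same bookkeeping, applied unit by unit through the correct sequence of operations, is
how an overall process design is checked.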
C Civil Engineering

Civil engineering is perhaps the broadest of the engineering fields, for it deals with the
creation, improvement, and protection of the communal environment, providing facilities for
living, industry and transportation, including large buildings, roads, bridges, canals, railroad
lines, airports, water-supply systems, dams, irrigation, harbors, docks, aqueducts, tunnels,
and other engineered constructions. The civil engineer must have a thorough knowledge of all
types of surveying, of the properties and mechanics of construction materials, the mechanics
of structures and soils, and of hydraulics and fluid mechanics. Among the important
subdivisions of the field are construction engineering, irrigation engineering, transportation
engineering, soils and foundation engineering, geodetic engineering, hydraulic engineering,
and coastal and ocean engineering.

D Electrical and Electronics Engineering

The largest and most diverse field of engineering, it is concerned with the development and
design, application, and manufacture of systems and devices that use electric power and
signals. Among the most important subjects in the field in the late 1980s are electric power
and machinery, electronic circuits, control systems, computer design, superconductors, solid-
state electronics, medical imaging systems, robotics, lasers, radar, consumer electronics, and
fiber optics.

Despite its diversity, electrical engineering can be divided into four main branches: electric
power and machinery, electronics, communications and control, and computers.

D1 Electric Power and Machinery

The field of electric power is concerned with the design and operation of systems for
generating, transmitting, and distributing electric power. Engineers in this field have brought
about several important developments since the late 1970s. One of these is the ability to
transmit power at extremely high voltages in both the direct current (DC) and alternating
current (AC) modes, reducing power losses proportionately. Another is the real-time control of
power generation, transmission, and distribution, using computers to analyze the data fed
back from the power system to a central station and thereby optimizing the efficiency of the
system while it is in operation.
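
The power-loss benefit of high-voltage transmission follows from I = P/V and loss = I²R: for
a fixed delivered power, doubling the voltage halves the current and quarters the resistive
loss. A minimal sketch, with an assumed line resistance and power level:

```python
def line_loss_watts(power_w, voltage_v, resistance_ohm):
    """Resistive loss in a transmission line carrying a fixed power.
    Current I = P / V, loss = I^2 * R, so doubling V quarters the loss."""
    current = power_w / voltage_v
    return current ** 2 * resistance_ohm

# Delivering 100 MW over a line with 10 ohm resistance:
loss_low = line_loss_watts(100e6, 200e3, 10.0)   # at 200 kV
loss_high = line_loss_watts(100e6, 400e3, 10.0)  # at 400 kV: one quarter the loss
```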

A significant advance in the engineering of electric machinery has been the introduction of
electronic controls that enable AC motors to run at variable speeds by adjusting the frequency
of the current fed into them. DC motors have also been made to run more efficiently this way.
See also Electric Motors and Generators; Electric Power Systems.

D2 Electronics

Electronic engineering deals with the research, design, integration, and application of circuits
and devices used in the transmission and processing of information. Information is now
generated, transmitted, received, and stored electronically on a scale unprecedented in
history, and there is every indication that the explosive rate of growth in this field will
continue.

Electronic engineers design circuits to perform specific tasks, such as amplifying electronic
signals, adding binary numbers, and demodulating radio signals to recover the information
they carry. Circuits are also used to generate waveforms useful for synchronization and
timing, as in television, and for correcting errors in digital information, as in
telecommunications. See also Electronics.

Prior to the 1960s, circuits consisted of separate electronic devices—resistors, capacitors,
inductors, and vacuum tubes—assembled on a chassis and connected by wires to form a bulky
package. Since then, there has been a revolutionary trend toward integrating electronic
devices on a single tiny chip of silicon or some other semiconductive material. The complex
task of manufacturing these chips uses the most advanced technology, including computers,
electron-beam lithography, micro-manipulators, ion-beam implantation, and ultraclean
environments. Much of the research in electronics is directed toward creating even smaller
chips, faster switching of components, and three-dimensional integrated circuits.

D3 Communications and Control

Engineers in this field are concerned with all aspects of electrical communications, from
fundamental questions such as “What is information?” to the highly practical, such as design
of telephone systems. In designing communication systems, engineers rely heavily on various
branches of advanced mathematics, such as Fourier analysis, linear systems theory, linear
algebra, complex variables, differential equations, and probability theory. See also
Mathematics; Matrix Theory and Linear Algebra; Probability.

Engineers work on control systems ranging from the everyday, such as the passenger-actuated
controls that run an elevator, to the exotic, such as systems for keeping spacecraft on course. Control
systems are used extensively in aircraft and ships, in military fire-control systems, in power
transmission and distribution, in automated manufacturing, and in robotics.

Engineers have been working to bring about two revolutionary changes in the field of
communications and control: Digital systems are replacing analog ones at the same time that
fiber optics are superseding copper cables. Digital systems offer far greater immunity to
electrical noise. Fiber optics are likewise immune to interference; they also have tremendous
carrying capacity, and are extremely light and inexpensive to manufacture.

D4 Computers

Virtually unknown just a few decades ago, computer engineering is now among the most
rapidly growing fields. The electronics of computers involve engineers in design and
manufacture of memory systems, of central processing units, and of peripheral devices (see
Computer). Foremost among the avenues now being pursued are the design of Very Large
Scale Integration (VLSI) and new computer architectures. The field of computer science is
closely related to computer engineering; however, the task of making computers more
“intelligent” (artificial intelligence), through creation of sophisticated programs or
development of higher level machine languages or other means, is generally regarded as
being in the realm of computer science.

One current trend in computer engineering is microminiaturization. Using VLSI, engineers
continue to work to squeeze greater and greater numbers of circuit elements onto smaller and
smaller chips. Another trend is toward increasing the speed of computer operations through
use of parallel processors, superconducting materials, and the like.

E Geological and Mining Engineering

This branch of engineering includes activities related to the discovery and exploration of
mineral deposits and the financing, construction, development, operation, recovery,
processing, purification, and marketing of crude minerals and mineral products. The mining
engineer is trained in historical geology, mineralogy, paleontology, and geophysics, and
employs such tools as the seismograph and the magnetometer for the location of ore or
petroleum deposits beneath the surface of the earth (see Petroleum; Seismology). The
surveying and drawing of geological maps and sections is an important part of the work of the
engineering geologist, who is also responsible for determining whether the geological structure
of a given location is suitable for the building of such large structures as dams.

F Industrial or Management Engineering

This field pertains to the efficient use of machinery, labor, and raw materials in industrial
production. It is particularly important from the viewpoint of costs and economics of
production, safety of human operators, and the most advantageous deployment of automatic
machinery.

G Mechanical Engineering

Engineers in this field design, test, build, and operate machinery of all types; they also work
on a variety of manufactured goods and certain kinds of structures. The field is divided into
(1) machinery, mechanisms, materials, hydraulics, and pneumatics; and (2) heat as applied to
engines, work and energy, heating, ventilating, and air conditioning. The mechanical engineer,
therefore, must be trained in mechanics, hydraulics, and thermodynamics and must be fully
grounded in such subjects as metallurgy and machine design. Some mechanical engineers
specialize in particular types of machines such as pumps or steam turbines. A mechanical
engineer designs not only the machines that make products but the products themselves, and
must design for both economy and efficiency. A typical example of the complexity of modern
mechanical engineering is the design of an automobile, which entails not only the design of
the engine that drives the car but also all its attendant accessories such as the steering and
braking systems, the lighting system, the gearing by which the engine's power is delivered to
the wheels, the controls, and the body, including such details as the door latches and the type
of seat upholstery.

H Military Engineering

This branch is concerned with the application of the engineering sciences to military purposes.
It is generally divided into permanent land defense (see Fortification and Siege Warfare) and
field engineering. In war, army engineer battalions have been used to construct ports,
harbors, depots, and airfields. In the U.S., military engineers also construct some public
works, national monuments, and dams (see Army Corps of Engineers).

Military engineering has become an increasingly specialized science, resulting in separate
engineering subdisciplines such as ordnance, which applies mechanical engineering to the
development of guns and chemical engineering to the development of propellants, and the
Signal Corps, which applies electrical engineering to all problems of telegraph, telephone,
radio, and other communication.

I Naval or Marine Engineering

Engineers who have the overall responsibility for designing and supervising construction of
ships are called naval architects. The ships they design range in size from ocean-going
supertankers as much as 1300 feet long to small tugboats that operate in rivers and bays.
Regardless of size, ships must be designed and built so that they are safe, stable, strong, and
fast enough to perform the type of work intended for them. To accomplish this, a naval
architect must be familiar with the variety of techniques of modern shipbuilding, and must
have a thorough grounding in applied sciences, such as fluid mechanics, that bear directly on
how ships move through water.

Marine engineering is a specialized branch of mechanical engineering devoted to the design
and operation of systems, both mechanical and electrical, needed to propel a ship. In helping
the naval architect design ships, the marine engineer must choose a propulsion unit, such as a
diesel engine or geared steam turbine, that provides enough power to move the ship at the
speed required. In doing so, the engineer must take into consideration how much the engine
and fuel bunkers will weigh and how much space they will occupy, as well as the projected
costs of fuel and maintenance. See also Ships and Shipbuilding.

J Nuclear Engineering

This branch of engineering is concerned with the design and construction of nuclear reactors
and devices, and the manner in which nuclear fission may find practical applications, such as
the production of commercial power from the energy generated by nuclear reactions and the
use of nuclear reactors for propulsion and of nuclear radiation to induce chemical and
biological changes. In addition to designing nuclear reactors to yield specified amounts of
power, nuclear engineers develop the special materials necessary to withstand the high
temperatures and concentrated bombardment of nuclear particles that accompany nuclear
fission and fusion. Nuclear engineers also develop methods to shield people from the harmful
radiation produced by nuclear reactions and to ensure safe storage and disposal of fissionable
materials. See Nuclear Energy.

K Safety Engineering

This field of engineering has as its object the prevention of accidents. In recent years safety
engineering has become a specialty adopted by individuals trained in other branches of
engineering. Safety engineers develop methods and procedures to safeguard workers in
hazardous occupations. They also assist in designing machinery, factories, ships, and roads,
suggesting alterations and improvements to reduce the likelihood of accident. In the design of
machinery, for example, the safety engineer seeks to cover all moving parts or keep them
from accidental contact with the operator, to put cutoff switches within reach of the operator,
and to eliminate dangerous projecting parts. In designing roads the safety engineer seeks to
avoid such hazards as sharp turns and blind intersections, known to result in traffic accidents.
Many large industrial and construction firms, and insurance companies engaged in the field of
workers' compensation, today maintain safety engineering departments. See Industrial Safety;
National Safety Council.

L Sanitary Engineering

This is a branch of civil engineering, but because of its great importance for a healthy
environment, especially in dense urban-population areas, it has acquired the importance of a
specialized field. It chiefly deals with problems involving water supply, treatment, and
distribution; disposal of community wastes and reclamation of useful components of such
wastes; control of pollution of surface waterways, groundwaters, and soils; milk and food
sanitation; housing and institutional sanitation; rural and recreational-site sanitation; insect
and vermin control; control of atmospheric pollution; industrial hygiene, including control of
light, noise, vibration, and toxic materials in work areas; and other fields concerned with the
control of environmental factors affecting health. The methods used for supplying communities
with pure water and for the disposal of sewage and other wastes are described separately. See
Plumbing; Sewage Disposal; Solid Waste Disposal; Water Pollution; Water Supply and


Scientific methods of engineering are applied in several fields not connected directly to
manufacture and construction. Modern engineering is characterized by the broad application of
what is known as systems engineering principles. The systems approach is a methodology of
decision-making in design, operation, or construction that adopts (1) the formal process
included in what is known as the scientific method; (2) an interdisciplinary, or team, approach,
using specialists from not only the various engineering disciplines, but from legal, social,
aesthetic, and behavioral fields as well; (3) a formal sequence of procedure employing the
principles of operations research.

In effect, therefore, transportation engineering in its broadest sense includes not only design
of the transportation system and building of its lines and rolling stock, but also determination
of the traffic requirements of the route followed. It is also concerned with setting up efficient
and safe schedules, and the interaction of the system with the community and the
environment. Engineers in industry work not only with machines but also with people, to
determine, for example, how machines can be operated most efficiently by the workers. A
small change in the location of the controls of a machine or of its position with relation to
other machines or equipment, or a change in the muscular movements of the operator, often
results in greatly increased production. This type of engineering work is called time-study
engineering.

A related field of engineering, human-factors engineering, also known as ergonomics, received
wide attention in the late 1970s and the '80s when the safety of nuclear reactors was
questioned following serious accidents that were caused by operator errors, design failures,
and malfunctioning equipment. Human-factors engineering seeks to establish criteria for the
efficient, human-centered design of, among other things, the large, complicated control panels
that monitor and govern nuclear reactor operations.

Among various recent trends in the engineering profession, licensing and computerization are
the most widespread. Today, many engineers, like doctors and lawyers, are licensed by the
state. Approvals by professionally licensed engineers are required for construction of public
and commercial structures, especially installations where public and worker safety is a
consideration. The trend in modern engineering offices is overwhelmingly toward
computerization. Computers are increasingly used for solving complex problems as well as for
handling, storing, and generating the enormous volume of data modern engineers must work
with.

The National Academy of Engineering, founded in 1964 as a private organization, sponsors
engineering programs aimed at meeting national needs, encourages new research, and is
concerned with the relationship of engineering to society.

Mechanics, branch of physics concerning the motions of objects and their response to forces.
Modern descriptions of such behavior begin with a careful definition of such quantities as
displacement (distance moved), time, velocity, acceleration, mass, and force. Until about 400
years ago, however, motion was explained from a very different point of view. For example,
following the ideas of Greek philosopher and scientist Aristotle, scientists reasoned that a
cannonball falls down because its natural position is in the earth; the sun, the moon, and the
stars travel in circles around the earth because it is the nature of heavenly objects to travel in
perfect circles.

The Italian physicist and astronomer Galileo brought together the ideas of other great thinkers
of his time and began to analyze motion in terms of distance traveled from some starting
position and the time that it took. He showed that the speed of falling objects increases
steadily during the time of their fall. This acceleration is the same for heavy objects as for light
ones, provided air friction (air resistance) is discounted. The English mathematician and
physicist Sir Isaac Newton improved this analysis by defining force and mass and relating
these to acceleration. For objects traveling at speeds close to the speed of light, Newton’s laws
were superseded by Albert Einstein’s theory of relativity. For atomic and subatomic particles,
Newton’s laws were superseded by quantum theory. For everyday phenomena, however,
Newton’s three laws of motion remain the cornerstone of dynamics, which is the study of what
causes motion.


Kinematics is the description of motion without regard to what causes the motion. Velocity (the
time rate of change of position) is defined as the distance traveled divided by the time
interval. Velocity may be measured in such units as kilometers per hour, miles per hour, or
meters per second. Acceleration is defined as the time rate of change of velocity: the change
of velocity divided by the time interval during the change. Acceleration may be measured in
such units as meters per second per second or feet per second per second. The size or weight
of the moving object presents no mathematical problem if the object is very small compared
with the distances involved. If the object is large, it contains one point, called the center of
mass, whose motion can be described as characteristic of the whole object.
If the object is rotating, it is frequently convenient to describe its rotation about an axis that
goes through the center of mass.
To fully describe the motion of an object, the direction of the displacement must be given.
Velocity, for example, has both magnitude (a scalar quantity measured, for example, in
meters per second) and direction (measured, for example, in degrees of arc from a reference
point). The magnitude of velocity is called speed.

Several special types of motion are easily described. First, velocity may be constant. In the
simplest case, the velocity might be zero; position would not change during the time interval.
With constant velocity, the average velocity is equal to the velocity at any particular time. If
time, t, is measured with a clock starting at t = 0, then the distance, d, traveled at constant
velocity, v, is equal to the product of velocity and time.

d = vt
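
In code, d = vt is a one-line computation; the only care needed is consistent units. The
speed and time below are assumed for illustration:

```python
def distance(v_m_per_s, t_s):
    """Distance covered at constant velocity: d = v * t."""
    return v_m_per_s * t_s

# A car at 72 km/h (divide by 3.6 to get m/s) traveling for 30 seconds:
d = distance(72 / 3.6, 30.0)  # about 600 m
```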

In the second special type of motion, acceleration is constant. Because the velocity is
changing, instantaneous velocity, or the velocity at a given instant, must be defined. For
constant acceleration, a, starting with zero velocity ( v = 0) at t = 0, the instantaneous
velocity at time, t, is

v = at
The distance traveled during this time is

d = at2
An important feature revealed in this equation is the dependence of distance on the square of
the time (t2, or “t squared,” is the short way of notating t × t). A heavy object falling freely
(uninfluenced by air friction) near the surface of the earth undergoes constant acceleration. In
this case the acceleration is 9.8 m/sec/sec (32 ft/sec/sec). At the end of the first second, a
ball would have fallen 4.9 m (16 ft) and would have a speed of 9.8 m/sec (32 ft/sec). At the
end of the second second, the ball would have fallen 19.6 m (64 ft) and would have a speed of
19.6 m/sec (64 ft/sec).
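
A short sketch reproduces the free-fall figures above from v = at and d = ½at², with
a = 9.8 m/sec/sec:

```python
G = 9.8  # gravitational acceleration near the earth's surface, m/s^2

def fall_speed(t):
    """Instantaneous speed after falling from rest for t seconds: v = a*t."""
    return G * t

def fall_distance(t):
    """Distance fallen from rest in t seconds: d = (1/2)*a*t^2."""
    return 0.5 * G * t ** 2

# After 1 s: fallen 4.9 m, moving at 9.8 m/s.
# After 2 s: fallen 19.6 m, moving at 19.6 m/s.
```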

Circular motion is another simple type of motion. If an object has constant speed but an
acceleration always at right angles to its velocity, it will travel in a circle. The required
acceleration is directed toward the center of the circle and is called centripetal acceleration
(see Centripetal Force). For an object traveling at speed, v, in a circle of radius, r, the
centripetal acceleration is

a = v²/r

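The centripetal acceleration is v²/r, so the pull needed to hold a circular path is easy to
estimate. A minimal sketch, with an assumed car speed and curve radius:

```python
def centripetal_acceleration(speed_m_s, radius_m):
    """Acceleration needed to hold a circular path: a = v^2 / r,
    directed toward the center of the circle."""
    return speed_m_s ** 2 / radius_m

# A car rounding a 50 m radius curve at 20 m/s (72 km/h):
a = centripetal_acceleration(20.0, 50.0)  # 8.0 m/s^2, not far below g
```
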
Another simple type of motion that is frequently observed occurs when a ball is thrown at an
angle into the air. Because of gravitation, the ball undergoes a constant downward
acceleration that first slows its original upward speed and then increases its downward speed
as it falls back to earth. Meanwhile the horizontal component of the original velocity remains
constant (ignoring air resistance), making the ball travel at a constant speed in the horizontal
direction until it hits the earth. The vertical and horizontal components of the motion are
independent, and they can be analyzed separately. The resulting path of the ball is in the
shape of a parabola. See Ballistics.
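
Because the vertical and horizontal components are independent, each can be computed
separately. A minimal sketch, ignoring air resistance; the launch speed and angle are
assumed for illustration:

```python
import math

G = 9.8  # m/s^2

def projectile(speed, angle_deg):
    """Split the launch velocity into components, then treat them
    independently: constant horizontal speed, constant vertical
    acceleration. Returns (time_of_flight, horizontal_range)."""
    vx = speed * math.cos(math.radians(angle_deg))
    vy = speed * math.sin(math.radians(angle_deg))
    t_flight = 2 * vy / G          # time to rise and fall back to launch height
    return t_flight, vx * t_flight

t, r = projectile(30.0, 45.0)  # roughly 4.3 s aloft, landing about 92 m away
```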


To understand why and how objects accelerate, force and mass must be defined. At the
intuitive level, a force is just a push or a pull. It can be measured in terms of either of two
effects. A force can either distort something, such as a spring, or accelerate an object. The
first effect can be used in the calibration of a spring scale, which can in turn be used to
measure the amplitude of a force: the greater the force, F, the greater the stretch, x. For
many springs, over a limited range, the stretch is proportional to the force

F = kx
where k is a constant that depends on the nature of the spring material and its dimensions.
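
The calibration described above can be sketched in two steps: determine k from one known
force, then read unknown forces from the measured stretch. The weight and stretch values
below are assumed:

```python
def spring_constant(force_n, stretch_m):
    """Calibrate a spring scale: k = F / x from one known force."""
    return force_n / stretch_m

def force_from_stretch(k, stretch_m):
    """Once calibrated, an unknown force is read off as F = k * x."""
    return k * stretch_m

# A 9.8 N weight (a 1 kg mass) stretches the spring 0.05 m:
k = spring_constant(9.8, 0.05)      # about 196 N/m
f = force_from_stretch(k, 0.12)     # an unknown pull stretching it 0.12 m
```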

If an object is motionless, the net force on it must be zero. A book lying on a table is being
pulled down by the earth’s gravitational attraction and is being pushed up by the molecular
repulsion of the tabletop. The net force is zero; the book is in equilibrium. When calculating
the net force, it is necessary to add the forces as vectors. See Vector.


For equilibrium, all the horizontal components of the force must cancel one another, and all
the vertical components must cancel one another as well. This condition is necessary for
equilibrium, but not sufficient. For example, if a person stands a book up on a table and
pushes on the book equally hard with one hand in one direction and with the other hand in the
other direction, the book will remain motionless if the person’s hands are opposite each other.
(The net result is that the book is being squeezed.) If, however, one hand is near the top of
the book and the other hand near the bottom, a torque is produced, and the book will fall on
its side. For equilibrium to exist it is also necessary that the sum of the torques about any axis
be zero.

A torque is the product of a force and the perpendicular distance to a turning axis. When a
force is applied to a heavy door to open it, the force is exerted perpendicularly to the door and
at the greatest distance from the hinges. Thus, a maximum torque is created. If the door were
shoved with the same force at a point halfway between handle and hinge, the torque would be
only half of its previous magnitude. If the force were applied parallel to the door (that is, edge
on), the torque would be zero. For an object to be in equilibrium, the clockwise torques about
any axis must be canceled by the counterclockwise torques about that axis. In fact, it can be
shown that if the torques cancel for any particular axis, they cancel for all axes.
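
The door example can be written out with the general form τ = Fd sin θ, where θ is the angle
between the force and the lever arm. The force and distances below are assumed:

```python
import math

def torque(force_n, distance_m, angle_deg=90.0):
    """Torque about the hinge: tau = F * d * sin(theta),
    where theta is the angle between the force and the lever arm."""
    return force_n * distance_m * math.sin(math.radians(angle_deg))

full = torque(10.0, 0.9)          # push at the handle, perpendicular to the door
half = torque(10.0, 0.45)         # same push halfway to the hinge: half the torque
edge = torque(10.0, 0.9, 0.0)     # push parallel to the door (edge on): zero torque
```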


Newton’s first law of motion states that if the vector sum of the forces acting on an object is
zero, then the object will remain at rest or remain moving at constant velocity. If the force
exerted on an object is zero, the object does not necessarily have zero velocity. Without any
forces acting on it, including friction, an object in motion will continue to travel at constant
velocity.

A The Second Law

Newton’s second law relates net force and acceleration. A net force on an object will accelerate
it—that is, change its velocity. The acceleration will be proportional to the magnitude of the
force and in the same direction as the force. The proportionality constant is the mass, m, of
the object.

F = ma
In the International System of Units (also known as SI, after the initials of Système
International), acceleration, a, is measured in meters per second per second. Mass is
measured in kilograms; force, F, in newtons. A newton is defined as the force necessary to
impart to a mass of 1 kg an acceleration of 1 m/sec/sec; this is equivalent to about 0.2248 lb.
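
The definition of the newton can be checked directly from F = ma; a minimal sketch, with
assumed values:

```python
def net_force(mass_kg, accel_m_s2):
    """Newton's second law: F = m * a, giving the net force in newtons."""
    return mass_kg * accel_m_s2

# A 1 kg mass accelerated at 1 m/s^2 requires exactly 1 N:
f = net_force(1.0, 1.0)    # 1.0 N
pounds = f * 0.2248        # about 0.22 lb, as noted in the text
```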

A massive object will require a greater force for a given acceleration than a small, light object.
What is remarkable is that mass, which is a measure of the inertia of an object (inertia is its
reluctance to change velocity), is also a measure of the gravitational attraction that the object
exerts on other objects. It is surprising and profound that the inertial property and the
gravitational property are determined by the same thing. The implication of this phenomenon
is that it is impossible to distinguish at a point whether the point is in a gravitational field or in
an accelerated frame of reference. Einstein made this one of the cornerstones of his general
theory of relativity, which is the currently accepted theory of gravitation.

B Friction
Friction acts like a force applied in the direction opposite to an object’s velocity. For dry sliding
friction, where no lubrication is present, the friction force is almost independent of velocity.
Also, the friction force does not depend on the apparent area of contact between an object and
the surface upon which it slides. The actual contact area—that is, the area where the
microscopic bumps on the object and sliding surface are actually touching each other—is
relatively small. As the object moves across the sliding surface, the tiny bumps on the object
and sliding surface collide, and force is required to move the bumps past each other. The
actual contact area depends on the perpendicular force between the object and sliding surface.
Frequently this force is just the weight of the sliding object. If the object is pushed at an angle
to the horizontal, however, the downward vertical component of the force will, in effect, add to
the weight of the object. The friction force is proportional to the total perpendicular force.

Where friction is present, Newton’s second law is expanded to

F - f = ma

where f is the friction force. The left side of the equation is simply the net effective
force. (The acceleration is in the direction of the net effective force.) When an object
moves through a liquid, however, the
magnitude of the friction depends on the velocity. For most human-size objects moving in
water or air (at subsonic speeds), the resulting friction is proportional to the square of the
speed. Newton’s second law then becomes

F - kv² = ma

The proportionality constant, k, is characteristic of the object and the fluid through which
it moves; it depends on the cross-sectional area of the moving object and on its degree of
streamlining.
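For a falling object with drag proportional to the square of the speed, the net force mg - kv² shrinks as the object speeds up, until acceleration reaches zero at the terminal speed. A short sketch, with the mass and drag constant k chosen purely for illustration:

```python
import math

# At terminal speed the net force is zero: m*g - k*v**2 = 0,
# so v = sqrt(m*g / k).
def terminal_velocity(mass_kg, k, g=9.8):
    return math.sqrt(mass_kg * g / k)

# Illustrative values: a 70 kg object with an assumed k of 0.27 kg/m.
print(terminal_velocity(70.0, 0.27))  # roughly 50 m/s
```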

C The Third Law

Newton’s third law of motion states that an object experiences a force because it is interacting
with some other object. The force that object 1 exerts on object 2 must be of the same
magnitude but in the opposite direction as the force that object 2 exerts on object 1. If, for
example, a large adult gently shoves away a child on a skating rink, in addition to the force
the adult imparts on the child, the child imparts an equal but oppositely directed force on the
adult. Because the mass of the adult is larger, however, the acceleration of the adult will be
smaller than the acceleration of the child.

Newton’s third law also requires the conservation of momentum, or the product of mass and
velocity. For an isolated system, with no external forces acting on it, the momentum must
remain constant. In the example of the adult and child on the skating rink, their initial
velocities are zero, and thus the initial momentum of the system is zero. During the
interaction, internal forces are at work between adult and child, but net external forces equal
zero. Therefore, the momentum of the system must remain zero. After the adult pushes the
child away, the product of the large mass and small velocity of the adult must equal the
product of the small mass and large velocity of the child. The momenta are equal in
magnitude but opposite in direction, thus adding to zero.
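The bookkeeping in this example can be written out directly; the masses and the adult’s recoil speed below are assumed for illustration:

```python
# Total momentum of the isolated adult-child system stays zero:
# m_adult * v_adult + m_child * v_child = 0.
def recoil_velocity(m_adult, v_adult, m_child):
    """Solve the zero-momentum condition for the child's velocity."""
    return -m_adult * v_adult / m_child

# An 80 kg adult drifting back at 0.5 m/s (negative direction) sends a
# 20 kg child off at 2 m/s in the opposite direction.
print(recoil_velocity(80.0, -0.5, 20.0))  # 2.0
```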

Another conserved quantity of great importance is angular (rotational) momentum. The
angular momentum of a rotating object depends on its speed of rotation, its mass, and the
distance of the mass from the axis. When a skater standing on a friction-free point spins faster
and faster, angular momentum is conserved despite the increasing speed. At the start of the
spin, the skater’s arms are outstretched. Part of the mass is therefore at a large radius. As the
skater’s arms are lowered, thus decreasing their distance from the axis of rotation, the
rotational speed must increase in order to maintain constant angular momentum.
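Treating part of the skater’s mass as a point mass m at radius r, the angular momentum is L = m r² ω, and conserving L fixes the new spin rate when the radius changes. The radii and initial rate below are assumptions for illustration:

```python
# Conservation of angular momentum for a point mass:
# m * r1**2 * omega1 = m * r2**2 * omega2, so omega2 = omega1 * (r1/r2)**2.
def new_spin_rate(omega1, r1, r2):
    return omega1 * (r1 / r2) ** 2

# Pulling the arms in from 0.8 m to 0.4 m halves the radius,
# so the spin rate quadruples.
print(new_spin_rate(2.0, 0.8, 0.4))  # 8.0 rad/s from an initial 2.0 rad/s
```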

The quantity called energy ties together all branches of physics. In the field of mechanics,
energy must be provided to do work; work is defined as the product of force and the distance
an object moves in the direction of the force. When a force is exerted on an object but the
force does not cause the object to move, no work is done. Energy and work are both
measured in the same units—ergs, joules, or foot-pounds, for example.

If work is done lifting an object to a greater height, energy has been stored in the form of
gravitational potential energy. Many other forms of energy exist: electric and magnetic
potential energy; kinetic energy; energy stored in stretched springs, compressed gases, or
molecular bonds; thermal energy; and mass itself. In all transformations from one kind of
energy to another, the total energy is conserved. For instance, if work is done on a rubber ball
to raise it, its gravitational potential energy is increased. If the ball is then dropped, the
gravitational potential energy is transformed to kinetic energy. When the ball hits the ground,
it becomes distorted and thereby creates friction between the molecules of the ball material.
This friction is transformed into heat, or thermal energy.
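The ball example can be made quantitative: equating the stored potential energy mgh to the kinetic energy ½mv² at the ground gives the impact speed, which is independent of the mass. The 5 m drop height is an assumed value:

```python
import math

# m*g*h = (1/2)*m*v**2  =>  v = sqrt(2*g*h); the mass cancels.
def impact_speed(height_m, g=9.8):
    return math.sqrt(2 * g * height_m)

print(impact_speed(5.0))  # about 9.9 m/s after a 5 m fall
```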

Friction, force that opposes the motion of an object when the object is in contact with another
object or surface. Friction results from two surfaces rubbing against each other or moving
relative to one another. It can hinder the motion of an object or prevent an object from
moving at all. The strength of frictional force depends on the nature of the surfaces that are in
contact and the force pushing them together. This force is usually related to the weight of the
object or objects. In cases involving fluid friction, the force depends upon the shape and speed
of an object as it moves through air, water, or other fluid.

Friction occurs to some degree in almost all situations involving physical objects. In many
cases, such as in a running automobile engine, it hinders a process. For example, friction
between the moving parts of an engine resists the engine’s motion and turns energy into heat,
reducing the engine’s efficiency. Friction also makes it difficult to slide a heavy object, such as
a refrigerator or bookcase, along the ground. In other cases, friction is helpful. Friction
between people’s shoes and the ground allows people to walk by pushing off the ground
without slipping. On a slick surface, such as ice, shoes slip and slide instead of gripping
because of the lack of friction, making walking difficult. Friction allows car tires to grip and roll
along the road without skidding. Friction between nails and beams prevents the nails from
sliding out and keeps buildings standing.

When friction affects a moving object, it turns the object’s kinetic energy, or energy of motion,
into heat. People welcome the heat caused by friction when rubbing their hands together to
stay warm. Frictional heat is not so welcome when it damages machine parts, such as car
brakes.


Friction occurs in part because rough surfaces tend to catch on one another as they slide past
each other. Even surfaces that are apparently smooth can be rough at the microscopic level.
They have many ridges and grooves. The ridges of each surface can get stuck in the grooves
of the other, effectively creating a type of mechanical bond, or glue, between the surfaces.

Two surfaces in contact also tend to attract one another at the molecular level, forming
chemical bonds (see Chemistry). These bonds can prevent an object from moving, even when
it is pushed. If an object is in motion, these bonds form and release. Making and breaking the
bonds takes energy away from the motion of the object.

Scientists do not yet fully understand the details of how friction works, but through
experiments they have found a way to describe frictional forces in a wide variety of situations.
The force of friction between an object and a surface is equal to a constant number times the
force the object exerts directly on the surface. The constant number is called the coefficient of
friction for the two materials and is abbreviated µ. The force the object exerts directly on the
surface is called the normal force and is abbreviated N. Friction depends on this force because
increasing the amount of force increases the amount of contact that the object has with the
surface at the microscopic level. The force of friction between an object and a surface can be
calculated from the following formula:

F = µN

In this equation, F is the force of friction, µ is the coefficient of friction between the object and
the surface, and N is the normal force.

Scientists have measured the coefficient of friction for many combinations of materials.
Coefficients of friction depend on whether the objects are initially moving or stationary and on
the types of material involved. The coefficient of friction for rubber sliding on concrete is 0.8
(relatively high), while the coefficient for Teflon sliding on steel is 0.04 (relatively low).

The normal force is the force the object exerts perpendicular to the surface. In the case of a
level surface, the normal force is equal to the weight of the object. If the surface is inclined,
only a fraction of the object’s weight pushes directly into the surface, so the normal force is
less than the object’s weight.
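A short sketch of the F = µN rule, including the inclined-surface case where only the perpendicular component mg cos θ of the weight contributes; the mass and angle are illustrative:

```python
import math

# Friction force F = mu * N, where N = m * g * cos(theta) on a slope.
def friction_force(mu, mass_kg, angle_deg=0.0, g=9.8):
    normal = mass_kg * g * math.cos(math.radians(angle_deg))
    return mu * normal

# Rubber sliding on concrete (mu about 0.8, as quoted above), 10 kg block:
print(friction_force(0.8, 10.0))        # 78.4 N on level ground
print(friction_force(0.8, 10.0, 30.0))  # about 67.9 N on a 30-degree slope
```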


Different kinds of motion give rise to different types of friction between objects. Static friction
occurs between stationary objects, while sliding friction occurs between objects as they slide
against each other. Other types of friction include rolling friction and fluid friction. The
coefficient of friction for two materials may differ depending on the type of friction involved.

Static friction prevents an object from moving against a surface. It is the force that keeps a
book from sliding off a desk, even when the desk is slightly tilted, and that allows you to pick
up an object without the object slipping through your fingers. In order to move something,
you must first overcome the force of static friction between the object and the surface on
which it is resting. This force depends on the coefficient of static friction (µs) between the
object and the surface and the normal force (N) of the object.

A book sliding off a desk or brakes slowing down a wheel are both examples of sliding friction,
also called kinetic friction. Sliding friction acts in the direction opposite the direction of motion.
It prevents the book or wheel from moving as fast as it would without friction. When sliding
friction is acting, another force must be present to keep an object moving. In the case of a
book sliding off a desk, this force is gravity. The force of kinetic friction depends on the
coefficient of kinetic friction between the object and the surface on which it is moving (µk) and
the normal force (N) of the object. For any pair of objects, the coefficient of kinetic friction is
usually less than the coefficient of static friction. This means that it takes more force to start a
book sliding than it does to keep the book sliding.
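The static/kinetic distinction can be shown with two assumed coefficients (µs = 0.6 and µk = 0.4 here are illustrative, not measured values):

```python
# With mu_static > mu_kinetic, it takes more force to start an object
# sliding than to keep it sliding.
def start_force(mu_s, normal):
    return mu_s * normal  # minimum push needed to overcome static friction

def keep_force(mu_k, normal):
    return mu_k * normal  # push needed to balance kinetic friction

n = 20.0  # normal force in newtons (roughly a 2 kg book on a level desk)
print(start_force(0.6, n))  # 12.0 N to get the book moving
print(keep_force(0.4, n))   # 8.0 N to keep it moving
```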

Rolling friction hinders the motion of an object rolling along a surface. Rolling friction slows
down a ball rolling on a basketball court or softball field, and it slows down the motion of a tire
rolling along the ground. Another force must be present to keep an object rolling. For
example, a pedaling bicyclist provides the force necessary to the keep a bike in motion. Rolling
friction depends on the coefficient of rolling friction between the two materials (µr) and the
normal force (N) of the object. The coefficient of rolling friction is usually far smaller than that of
sliding friction. Wheels and other round objects will roll along the ground much more easily
than they will slide along it.

Metals, group of chemical elements that exhibit all or most of the following physical qualities:
they are solid at ordinary temperatures; opaque, except in extremely thin films; good
electrical and thermal conductors (see Conductor, Electrical); lustrous when polished; and
have a crystalline structure when in the solid state. Metals and nonmetals are separated in the
periodic table by a diagonal line of elements. Elements to the left of this diagonal are metals,
and elements to the right are nonmetals. Elements that make up this diagonal—boron, silicon,
germanium, arsenic, antimony, tellurium, polonium, and astatine—have both metallic and
nonmetallic properties (see Periodic Law). The common metallic elements include the
following: aluminum, barium, beryllium, bismuth, cadmium, calcium, cerium, chromium,
cobalt, copper, gold, iridium, iron, lead, lithium, magnesium, manganese, mercury,
molybdenum, nickel, osmium, palladium, platinum, potassium, radium, rhodium, silver,
sodium, tantalum, thallium, thorium, tin, titanium, tungsten, uranium, vanadium, and zinc.
Metallic elements can combine with one another and with certain other elements, either as
compounds, as solutions, or as intimate mixtures. A substance composed of two or more
metals, or of a metal and certain nonmetals such as carbon, is called an alloy. Alloys of
mercury with other metallic elements are known as amalgams.

Within the general limits of the definition of a metal, the properties of metals vary widely.
Most metals are grayish in color, but bismuth is pinkish, copper is red, and gold is yellow.
Some metals display more than one color, a phenomenon called pleochroism. The melting
points of metals range from about -39° C (about -38° F) for mercury to 3410° C (6170° F) for
tungsten. Osmium and iridium (specific gravity 22.6) are the most dense metals, and lithium
(specific gravity 0.53) is the least dense. The majority of metals crystallize in the cubic
system, but some crystallize in the hexagonal and tetragonal systems (see Crystal). Bismuth
has the lowest electrical conductivity of the metallic elements, and silver the highest at
ordinary temperatures. (For conductivity at low temperatures, see Cryogenics;
Superconductivity.) The conductivity of most metals can be lowered by alloying. All metals
expand when heated and contract when cooled, but certain alloys, such as platinum and
iridium alloys, have extremely low coefficients of expansion.

Metals are generally very strong and resistant to different types of stresses. Though there is
considerable variation from one metal to the next, in general metals are marked by such
properties as hardness, the resistance to surface deformation or abrasion; tensile strength,
the resistance to breakage; elasticity, the ability to return to the original shape after
deformation; malleability, the ability to be shaped by hammering; fatigue resistance, the
ability to resist repeated stresses; and ductility, the ability to undergo deformation without
breaking. See Materials Science and Technology.


Metals typically have positive valences in most of their compounds, which means they tend to
donate electrons to the atoms to which they bond. Also, metals tend to form basic oxides.
Typical nonmetallic elements, such as nitrogen, sulfur, and chlorine, have negative valences in
most of their compounds—meaning they tend to accept electrons—and form acidic oxides (see
Acids and Bases; Chemical Reaction).

Metals typically have low ionization potentials. This means that metals react easily by loss of
electrons to form positive ions, or cations. Thus, metals can form salts (chlorides, sulfides, and
carbonates, for example) by serving as reducing agents (electron donors).

In early attempts to explain the electronic configurations of the metals, scientists cited the
characteristics of high thermal and electrical conductivity in support of a theory that metals
consist of ionized atoms in which the free electrons form a homogeneous sea of negative
charge. The electrostatic attraction between the positive metal ions and the free-moving and
homogeneous sea of electrons was thought to be responsible for the bonds between the metal
atoms. Free movement of the electrons was then held to be responsible for the high thermal
and electrical conductivities. The principal objection to this theory was that the metals should
then have higher specific heats than they do.

In 1928 the German physicist Arnold Sommerfeld proposed that the electrons in metals exist
in a quantized arrangement in which low energy levels available to the electrons are almost
fully occupied (see Atom; Quantum Theory). In the same year the Swiss-American physicist
Felix Bloch and later the French physicist Louis Brillouin used this idea of quantization in the
currently accepted “band” theory of bonding in metallic solids.

According to the band theory, any given metal atom has only a limited number of valence
electrons with which to bond to all of its nearest neighbors. Extensive sharing of electrons
among individual atoms is therefore required. This sharing of electrons is accomplished
through overlap of equivalent-energy atomic orbitals on the metal atoms that are immediately
adjacent to one another. This overlap is delocalized throughout the entire metal sample to
form extensive orbitals that span the entire solid rather than being part of individual atoms.
Each of these orbitals lies at different energies because the atomic orbitals from which they
were constructed were at different energies to begin with. The orbitals, equal in number to the
number of individual atomic orbitals that have been combined, each hold two electrons, and
are filled in order from lowest to highest energy until the number of available electrons has
been used up. Groups of electrons are then said to reside in bands, which are collections of
orbitals. Each band has a range of energy values that the electrons must possess to be part of
that band; in some metals, there are energy gaps between bands, meaning that there are
certain energies that the electrons cannot possess. The highest energy band in a metal is not
filled with electrons because metals characteristically possess too few electrons to fill it. The
high thermal and electrical conductivities of metals are then explained by the notion that
electrons may be promoted by absorption of thermal energy into these unfilled energy levels
of the band.

Objects moving through a fluid experience fluid friction, or drag. Drag acts between the object
and the fluid and hinders the motion of the object. The force of drag depends upon the
object’s shape, material, and speed, as well as the fluid’s viscosity. Viscosity is a measure of a
fluid’s resistance to flow. It results from the friction that occurs between the fluid’s molecules,
and it differs depending on the type of fluid. Drag slows down airplanes flying through the air
and fish swimming through water. An airplane’s engines help it overcome drag and travel
forward, while a fish uses its muscles to overcome drag and swim. Calculating the force of
drag is much more complicated than calculating other types of friction (see Aerodynamics).


Friction helps people convert one form of motion into another. For example, when people
walk, friction allows them to convert a push backward along the ground into forward motion.
Similarly, when car or bicycle tires push backward along the ground, friction with the ground
makes the tires roll forward. Friction allows us to push and slide objects along the ground
without our shoes slipping along the ground in the opposite direction.

While friction allows us to convert one form of motion to another, it also converts some energy
into heat, noise, and wear and tear on material. Losing energy to these effects often reduces
the efficiency of a machine. For example, a cyclist uses friction between shoes and pedals, the
chain and gears, and the bicycle’s tires and the road to make the bicycle move forward. At the
same time, friction between the chain and gears, between the tires and the road, and between
the cyclist and the air all resist the cyclist’s motion. As the cyclist pedals, friction converts
some of the cyclist’s energy into heat, noise, and wear and tear on the bicycle. This energy
loss reduces the efficiency of the bicycle. In automobiles and airplanes, friction converts some
of the energy in the fuel into heat, noise, and wear and tear on the engine’s parts. Excess
frictional heat can damage an engine and braking system. The wearing away of material in
engines makes it necessary to periodically replace some parts.

Sometimes the heat that friction produces is useful. When a person strikes a match against a
rough surface, friction produces a large amount of heat on the head of the match and triggers
the chemical process of burning. Static friction, which prevents motion, does not create heat.


Reducing the amount of friction in a machine increases the machine’s efficiency. Less friction
means less energy lost to heat, noise, and wearing down of material. People normally use two
methods to reduce friction. The first method involves reducing the roughness of the surfaces
in contact. For example, sanding two pieces of wood lessens the amount of friction that occurs
between them when they slide against one another. Teflon creates very little friction because
it is so smooth.

Applying a lubricant to a surface can also reduce friction. Common examples of lubricants are
oil and grease. They reduce friction by minimizing the contact between rough surfaces. The
lubricant’s particles slide easily against each other and cause far less friction than would occur
between the surfaces. Lubricants such as machine oil reduce the amount of energy lost to
frictional heating and reduce the wear damage to the machine surfaces caused by friction.

Inclined Plane

Inclined Plane, simple machine, consisting of a ramp or a similar wedge-shaped device, that
makes doing a given amount of work easier. In physical terms, work is the result of a force,
such as the effort of pushing or pulling something, that moves an object over a distance. An
inclined plane makes it easier to lift heavy objects by enabling a person to apply the necessary
force over a greater distance. The same amount of work is accomplished in lifting the object
with or without the inclined plane, but because the inclined plane increases the distance over
which the force is applied, the work requires less force.

The inclined plane is one of the four simple machines (along with the lever, the wheel and
axle, and the pulley). Two other simple machines, the screw and the wedge, are really
alternate forms of the inclined plane. One of the most common examples of an inclined plane
is a staircase, which allows people to move within a building from one floor to another with
less effort than climbing straight up a ladder would require. Some jacks that are used to lift
cars use threaded screws. A sharp knife is an everyday example of a wedge.


An inclined plane makes doing work easier by changing both the direction and the amount of
effort that are used to lift an object. Work, in physics, is defined as the amount of force
applied to an object multiplied by the distance over which the force is applied. Mathematically,
this can be expressed by the following equation:

Work = Force x Distance

When lifting an object is the work being done, the force needed is the effort required to lift the
object, and the distance corresponds to the distance the object is lifted. Rather than lifting an
object straight up, an inclined plane allows a person to lift an object gradually (at an angle)
over a greater distance. By increasing distance, the inclined plane decreases the amount of
force needed to do the same amount of work without the plane.
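The trade-off can be checked with Work = Force x Distance; the crate mass and ramp length below are assumptions:

```python
# Lifting straight up: force m*g over height h.
# A frictionless ramp of length L spreads the same work m*g*h over L,
# so the required force drops to m*g*h/L.
def lift_force(mass_kg, g=9.8):
    return mass_kg * g

def ramp_force(mass_kg, height_m, ramp_length_m, g=9.8):
    return mass_kg * g * height_m / ramp_length_m

# A 50 kg crate raised 1 m, without and with a 4 m frictionless ramp:
print(lift_force(50.0))            # 490 N straight up
print(ramp_force(50.0, 1.0, 4.0))  # 122.5 N along the ramp
# Either way the work is the same: 490 N * 1 m = 122.5 N * 4 m = 490 J.
```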

The mechanical advantage (MA) of an inclined plane measures how much the plane magnifies
the effort applied to the machine. There are two kinds of MA: theoretical and actual.
Theoretical MA is the MA a machine would have if it were perfect. All machines, however, lose
some of their MA to friction, a resistance created between objects when they move against
each other. Friction makes the process of moving objects, and therefore doing work, more
difficult. The actual MA of a machine is less than the theoretical MA because of friction.

The MA of an inclined plane without any friction is equal to the length of the plane divided by
the height of the plane. A ramp that is twice as long as it is high has a mechanical advantage
of 2. This means that the ramp doubles the effort applied by the user, or that the user needs
to apply only half as much effort to lift an object to a desired height as he or she would
without the ramp. Increasing the ratio of the length of the ramp to the height of the ramp
decreases the effort needed to lift an object. This idea explains why climbing up a steep hill
takes more effort (and seems more difficult) than walking up a longer, more gradual path to
the same height as that of the steep hill. The longer the inclined plane, the larger the MA will
be. If the length of a ramp were equal to its height, the ramp would simply run straight up, like
a vertical ladder. In this case, the mechanical advantage would be 1, meaning the ramp would
not magnify the user’s effort.

Friction is a phenomenon that reduces the efficiency of all machines. Walking up an inclined
plane or rolling a load (such as a barrel) up a plane creates little friction, and the actual MA is
close to the theoretical MA. However, sliding a load (especially a flat load such as a crate) up a
plane creates friction and causes the plane to lose much of its MA. Wheels can be added to the
load to decrease friction. People also frequently build inclined planes with small rollers or
casters built into the plane to reduce friction.


The screw and the wedge are common adaptations of the inclined plane. A screw is an inclined
plane wrapped around an axis, or pole. The edge of the inclined plane forms a helix, or spiral,
around the axis. The mechanical advantage of a screw is related to the circumference of the
screw divided by the pitch of the threads. The pitch of a thread is the distance along the axis
of the screw from one thread to the next. Since the pitch is generally small compared to the
circumference, large mechanical advantages can be achieved by using screws. Screws are
often used to raise objects, and some jacks used to lift automobiles rely on screws. A jack has
a large screw attached to a small platform, which is placed under a vehicle. Turning the screw
many times produces a small amount of vertical lift on the platform, and raises the
automobile. The screw requires a lot of turning, which equates with effort applied over a long
distance; this allows heavy loads to be lifted with a small amount of effort. Screws are also
useful as fastening devices. Screws driven straight into wood or other materials, as well as
threaded nuts and bolts, take advantage of the friction that results from the contact between
the inclined plane and other objects. These devices use friction to hold things together.
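Using the text’s rule that a screw’s mechanical advantage is roughly its circumference divided by the pitch of its threads (friction ignored), with assumed jack-screw dimensions:

```python
import math

# MA of a screw ~ circumference / pitch (ignoring friction losses).
def screw_ma(diameter_mm, pitch_mm):
    return math.pi * diameter_mm / pitch_mm

# An assumed jack screw 25 mm in diameter with a 2 mm thread pitch:
print(screw_ma(25.0, 2.0))  # about 39: a small effort can lift a heavy load
```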

A wedge is another form of inclined plane. A wedge is essentially a double inclined plane,
where two planes are joined at their bases. The joined inclined planes form a blunt end that
narrows down to a tip. Wedges transfer downward effort applied to the blunt edge of the
wedge out to the sides of the wedge to help it cut through an object. Effort is applied directly
to the wedge, which differs from an inclined plane, where the effort travels along the plane.
Wedges are often used to split materials such as wood or stone. Since there is much friction
involved, the mechanical advantage of a wedge is difficult to determine. The main benefit of
the wedge is changing the direction of effort to help split or cut through an object. A knife is
also a form of wedge. The wedge shape of the knife edge helps the user cut through material.


The inclined plane is undoubtedly one of the first of the simple machines people ever used. A
person walking up a gradual path to the top of a mountain rather than climbing straight up a
steep face is taking advantage of the principle of the inclined plane. There are indications that
the Egyptians created earthen ramps to raise huge blocks of stone during the construction of
the pyramids, from about 2700 BC to 1000 BC. Evidence from drawings of that time indicates
that the Egyptians used a lubricant, probably milk, to reduce the sliding friction and thus
increase the efficiency of the inclined planes.

People used wedges in ancient times to split wood, transferring the force they applied to the
blunt edge out to the sides of the wedge. People also used wooden wedges in prehistoric times
to split rocks. They placed dry wooden wedges into cracks in rocks and then allowed the
wedges to swell by absorbing water. The resulting pressure in the cracks caused the rocks to
split. Screws were used in ancient times as lifting devices. Historians believe that Greek
inventor Archimedes (287-212 BC) invented a screw-type device (known as Archimedes’
screw) for raising water. It consists of a cylinder with a wide-threaded screw inside. The
bottom end of the cylinder is set in water, and turning the screw lifts water up the cylinder to
a higher level. This principle is still used in some pumps today.

Computer-Aided Design/Computer-Aided Manufacturing (CAD/CAM)

Computer-Aided Design/Computer-Aided Manufacturing

(CAD/CAM), the application of computers in the design and manufacture of components used
in the production of items such as automobiles and jet engines. CAD is software for creating
precise engineering drawings. CAM adds a computer to a machine tool, such as a drill or a
lathe. CAM engineers similarly use computer modeling to determine the best overall
manufacturing procedures for use in an industrial plant, including the testing and handling of
finished products. Engineers use CAD and CAM together to create the design in CAD on one
computer, then transmit the design to a second computer that creates the part using CAM.


Engineers use CAD to create two- and three-dimensional drawings, such as those for
automobile and airplane parts, floor plans, and maps. While it may be faster for an engineer to
create an initial drawing by hand, it is much more efficient to change and distribute drawings
by computer.

In the design stage, drafting and computer graphics techniques are combined to produce
models of objects. Designers manipulate and test these models on video display screens until
they incorporate the best balance of features, including ease of production and cost. The CAD
information is then combined with CAM procedures through shared databases. Today, it is
possible to perform the six-step "art-to-part" process with a computer. The first two steps in
this process are the use of sketching software to capture the initial design ideas and to
produce accurate engineering drawings. The third step is rendering an accurate image of what
the part will look like. Next, engineers use analysis software to ensure that the part is strong
enough. Step five is the production of a prototype, or model. In the final step the CAM
software controls the machine that produces the part.


CAM uses a computer to control the manufacture of objects such as parts, which are most
often made of metal, plastic, or wood. The manufacturing operations may include milling,
drilling, lathing, and polishing. CAM software selects the best cutting tools for the material and
sets the most effective cutting speed. The software generates an image, called a toolpath
display, that shows how the tool will cut the material, much as print preview in a word-
processing program displays a page before it is printed. The tool's path has three stages: the
containment area, beyond which the tool may not cut; the rough cut, which removes large
areas of material; and the surface finish cut, which removes gouges, produces a smooth
finish, and cleans up the part.


American Ivan Sutherland invented CAD in 1961 when he described a computerized sketchpad
in a doctoral thesis while attending the Massachusetts Institute of Technology (MIT) in
Cambridge, Massachusetts. He designed CAD to replace the traditional drafting board and
other tools drafters used, such as the ink pen, plastic stencil, and electric eraser. Early CAD
software ran on large, expensive computers. Today, engineers can run CAD software on
personal computers or UNIX workstations.

The earliest CAM software was a simple computer attached to a milling machine. Punching
buttons on the computer’s front panel programmed the software for the machine. Since the
mid-1980s CAD and CAM have come closer together, as some CAM software operates within
the CAD software programs instead of through shared databases.

Robot, computer-controlled machine that is programmed to move, manipulate objects, and
accomplish work while interacting with its environment. Robots are able to perform repetitive
tasks more quickly, cheaply, and accurately than humans. The term robot originates from the
Czech word robota, meaning “compulsory labor.” It was first used in the 1921 play R.U.R.
(Rossum's Universal Robots) by the Czech novelist and playwright Karel Capek. The word
robot has been used since to refer to a machine that performs work to assist people or work
that humans find difficult or undesirable.


The concept of automated machines dates to antiquity with myths of mechanical beings
brought to life. Automata, or manlike machines, also appeared in the clockwork figures of
medieval churches, and 18th-century watchmakers were famous for their clever mechanical
figures.

Feedback (self-correcting) control mechanisms were used in some of the earliest robots and
are still in use today. An example of feedback control is a watering trough that uses a float to
sense the water level. When the water falls past a certain level, the float drops, opens a valve,
and releases more water into the trough. As the water rises, so does the float. When the float
reaches a certain height, the valve is closed and the water is shut off.

The first true feedback controller was the Watt governor, invented in 1788 by the Scottish
engineer James Watt. This device featured two metal balls connected to the drive shaft of a
steam engine and also coupled to a valve that regulated the flow of steam. As the engine
speed increased, the balls swung out due to centrifugal force, closing the valve. The flow of
steam to the engine was decreased, thus regulating the speed.
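The watering-trough example can be expressed as a short simulation of simple on/off feedback control. The rates, thresholds, and function name here are illustrative assumptions, not measurements of any real system.

```python
def simulate_trough(steps=100, level=5.0, set_point=5.0, band=0.5,
                    drink_rate=0.2, fill_rate=0.5):
    valve_open = False
    for _ in range(steps):
        level -= drink_rate               # animals drink; the water level falls
        if level < set_point - band:      # the float drops past the threshold...
            valve_open = True             # ...and opens the valve
        elif level >= set_point:          # the float rises back to the set point...
            valve_open = False            # ...and shuts the water off
        if valve_open:
            level += fill_rate
    return level

final = simulate_trough()
assert abs(final - 5.0) < 1.0  # the level stays near the set point
```

The level oscillates in a narrow band around the set point rather than settling exactly on it, which is characteristic of simple on/off feedback control.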

Feedback control, the development of specialized tools, and the division of work into smaller
tasks that could be performed by either workers or machines were essential ingredients in the
automation of factories in the 18th century. As technology improved, specialized machines
were developed for tasks such as placing caps on bottles or pouring liquid rubber into tire
molds. These machines, however, had none of the versatility of the human arm; they could
not reach for objects and place them in a desired location.

The development of the multijointed artificial arm, or manipulator, led to the modern robot. A
primitive arm that could be programmed to perform specific tasks was developed by the
American inventor George Devol, Jr., in 1954. In 1975 the American mechanical engineer
Victor Scheinman, while a graduate student at Stanford University in California, developed a
truly flexible multipurpose manipulator known as the Programmable Universal Manipulation
Arm (PUMA). PUMA was capable of moving an object and placing it with any orientation in a
desired location within its reach. The basic multijointed concept of the PUMA is the template
for most contemporary robots.


The inspiration for the design of a robot manipulator is the human arm, but with some
differences. For example, a robot arm can extend by telescoping—that is, by sliding cylindrical
sections one over another to lengthen the arm. Robot arms also can be constructed so that
they bend like an elephant trunk. Grippers, or end effectors, are designed to mimic the
function and structure of the human hand. Many robots are equipped with special purpose
grippers to grasp particular devices such as a rack of test tubes or an arc-welder.

The joints of a robotic arm are usually driven by electric motors. In most robots, the gripper is
moved from one position to another, changing its orientation. A computer calculates the joint
angles needed to move the gripper to the desired position in a process known as inverse
kinematics.

Some multijointed arms are equipped with servo, or feedback, controllers that receive input
from a computer. Each joint in the arm has a device to measure its angle and send that value
to the controller. If the actual angle of the arm does not equal the computed angle for the
desired position, the servo controller moves the joint until the arm's angle matches the
computed angle. Controllers and associated computers also must process sensor information
collected from cameras that locate objects to be grasped, or from touch sensors on grippers
that regulate the grasping force.
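The joint-angle calculation can be sketched for the simplest case, an arm with two rotating joints moving in a plane. This is the standard textbook computation with illustrative link lengths; robots like the PUMA solve the same kind of problem for many joints in three dimensions.

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Compute shoulder and elbow angles that place the gripper at (x, y)."""
    r2 = x * x + y * y
    # The law of cosines gives the elbow angle.
    cos_elbow = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    # Shoulder angle: direction to the target, minus the offset from the elbow bend.
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Forward check: the computed angles place the gripper at the target.
s, e = two_link_ik(1.2, 0.5)
gx = math.cos(s) + math.cos(s + e)
gy = math.sin(s) + math.sin(s + e)
assert abs(gx - 1.2) < 1e-9 and abs(gy - 0.5) < 1e-9
```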

Any robot designed to move in an unstructured or unknown environment will require multiple
sensors and controls, such as ultrasonic or infrared sensors, to avoid obstacles. Robots, such
as the National Aeronautics and Space Administration (NASA) planetary rovers, require a
multitude of sensors and powerful onboard computers to process the complex information that
allows them mobility. This is particularly true for robots designed to work in close proximity
with human beings, such as robots that assist persons with disabilities and robots that deliver
meals in a hospital. Safety must be integral to the design of human service robots.


In 1995 about 700,000 robots were operating in the industrialized world. Over 500,000 were
used in Japan, about 120,000 in Western Europe, and about 60,000 in the United States.
Many robot applications are for tasks that are either dangerous or unpleasant for human
beings. In medical laboratories, robots handle potentially hazardous materials, such as blood
or urine samples. In other cases, robots are used in repetitive, monotonous tasks in which
human performance might degrade over time. Robots can perform these repetitive, high-
precision operations 24 hours a day without fatigue. A major user of robots is the automobile
industry. General Motors Corporation uses approximately 16,000 robots for tasks such as spot
welding, painting, machine loading, parts transfer, and assembly. Assembly is one of the
fastest growing industrial applications of robotics. It requires higher precision than welding or
painting and depends on low-cost sensor systems and powerful inexpensive computers.
Robots are used in electronic assembly where they mount microchips on circuit boards.

Activities in environments that pose great danger to humans, such as locating sunken ships,
cleanup of nuclear waste, prospecting for underwater mineral deposits, and active volcano
exploration, are ideally suited to robots. Similarly, robots can explore distant planets. NASA's
Galileo, an unpiloted space probe, reached Jupiter in 1995 and performed tasks such as
determining the chemical content of the Jovian atmosphere.

Robots are being used to assist surgeons in installing artificial hips, and very high-precision
robots can assist surgeons with delicate operations on the human eye. Research in telesurgery
uses robots, under the remote control of expert surgeons, that may one day perform
operations on distant battlefields.


Robotic manipulators create manufactured products that are of higher quality and lower cost.
But robots can cause the loss of unskilled jobs, particularly on assembly lines in factories. New
jobs are created in software and sensor development, in robot installation and maintenance,
and in the conversion of old factories and the design of new ones. These new jobs, however,
require higher levels of skill and training. Technologically oriented societies must face the task
of retraining workers who lose jobs to automation, providing them with new skills so that they
can be employable in the industries of the 21st century.


Automated machines will increasingly assist humans in the manufacture of new products, the
maintenance of the world's infrastructure, and the care of homes and businesses. Robots will
be able to make new highways, construct steel frameworks of buildings, clean underground
pipelines, and mow lawns. Prototypes of systems to perform all of these tasks already exist.

One important trend is the development of microelectromechanical systems, ranging in size
from centimeters to millimeters. These tiny robots may be used to move through blood
vessels to deliver medicine or clean arterial blockages. They also may work inside large
machines to diagnose impending mechanical problems.

Perhaps the most dramatic changes in future robots will arise from their increasing ability to
reason. The field of artificial intelligence is moving rapidly from university laboratories to
practical application in industry, and machines are being developed that can perform cognitive
tasks, such as strategic planning and learning from experience. Increasingly, diagnosis of
failures in aircraft or satellites, the management of a battlefield, or the control of a large
factory will be performed by intelligent computers.

Automobile, self-propelled vehicle used primarily on public roads but adaptable to other
surfaces. Automobiles changed the world during the 20th century, particularly in the United
States and other industrialized nations. From the growth of suburbs to the development of
elaborate road and highway systems, the so-called horseless carriage has forever altered the
modern landscape. The manufacture, sale, and servicing of automobiles have become key
elements of industrial economies. But along with greater mobility and job creation, the
automobile has brought noise and air pollution, and automobile accidents rank among the
leading causes of death and injury throughout the world. For better or worse, the 1900s
can be called the Age of the Automobile, and cars will no doubt continue to shape our culture
and economy well into the 21st century.

Automobiles are classified by size, style, number of doors, and intended use. The typical
automobile, also called a car, auto, motorcar, and passenger car, has four wheels and can
carry up to six people, including a driver. Larger vehicles designed to carry more passengers
are called vans, minivans, omnibuses, or buses. Those used to carry cargo are called pickups
or trucks, depending on their size and design. Minivans are van-style vehicles built on a
passenger car frame that can usually carry up to eight passengers. Sport-utility vehicles, also
known as SUVs, are more rugged than passenger cars and are designed for driving in mud or
snow.

In 2001 manufacturing plants in more than 35 countries produced 39.5 million passenger cars.
About 7.3 million passenger vehicles were produced in North America in 2001. For information
on the business of making cars, see Automobile Industry.

The automobile is built around an engine. Various systems supply the engine with fuel, cool it
during operation, lubricate its moving parts, and remove exhaust gases it creates. The engine
produces mechanical power that is transmitted to the automobile’s wheels through a
drivetrain, which includes a transmission, one or more driveshafts, a differential gear, and
axles. Suspension systems, which include springs and shock absorbers, cushion the ride and
help protect the vehicle from being damaged by bumps, heavy loads, and other stresses.
Wheels and tires support the vehicle on the roadway and, when rotated by powered axles,
propel the vehicle forward or backward. Steering and braking systems provide control over
direction and speed. An electrical system starts and operates the engine, monitors and
controls many aspects of the vehicle’s operation, and powers such components as headlights
and radios. Safety features such as bumpers, air bags, and seat belts help protect occupants
in an accident.


Gasoline internal-combustion engines power most automobiles, but some engines use diesel
fuel, electricity, natural gas, solar energy, or fuels derived from methanol (wood alcohol) and
ethanol (grain alcohol).

Most gasoline engines work in the following way: Turning the ignition key operates a switch
that sends electricity from a battery to a starter motor. The starter motor turns a disk known
as a flywheel, which in turn causes the engine’s crankshaft to revolve. The rotating crankshaft
causes pistons, which are solid cylinders that fit snugly inside the engine’s hollow cylinders, to
move up and down. Fuel-injection systems or, in older cars, a carburetor deliver fuel vapor
from the gas tank to the engine cylinders.

The pistons compress the vapor inside the cylinders. An electric current flows through a spark
plug to ignite the vapor. The fuel mixture explodes, or combusts, creating hot expanding
gases that push the pistons down the cylinders and cause the crankshaft to rotate. The
crankshaft is now rotating via the up-and-down motion of the pistons, permitting the starter
motor to disengage from the flywheel.

A Engine

The basic components of an internal-combustion engine are the engine block, cylinder head,
cylinders, pistons, valves, crankshaft, and camshaft. The lower part of the engine, called the
engine block, houses the cylinders, pistons, and crankshaft. The components of other engine
systems bolt or attach to the engine block. The block is manufactured with internal
passageways for lubricants and coolant. Engine blocks are made of cast iron or aluminum alloy
and formed with a set of round cylinders.

The upper part of the engine is the cylinder head. Bolted to the top of the block, it seals the
tops of the cylinders. Pistons compress air and fuel against the cylinder head prior to ignition.
The top of the piston forms the floor of the combustion chamber. A rod connects the bottom of
the piston to the crankshaft. Lubricated bearings enable both ends of the connecting rod to
pivot, transferring the piston’s vertical motion into the crankshaft’s rotational force, or torque.
The pistons’ motion rotates the crankshaft at speeds ranging from about 600 to thousands of
revolutions per minute (rpm), depending on how much fuel is delivered to the cylinders.

Fuel vapor enters and exhaust gases leave the combustion chamber through openings in the
cylinder head controlled by valves. The typical engine valve is a metal shaft with a disk at one
end fitted to block the opening. The other end of the shaft is mechanically linked to a
camshaft, a round rod with odd-shaped lobes located inside the engine block or in the cylinder
head. Inlet valves open to allow fuel to enter the combustion chambers. Outlet valves open to
let exhaust gases out.

A gear wheel, belt, or chain links the camshaft to the crankshaft. When the crankshaft forces
the camshaft to turn, lobes on the camshaft cause valves to open and close at precise
moments in the engine’s cycle. When fuel vapor ignites, the intake and outlet valves close
tightly to direct the force of the explosion downward on the piston.

B Engine Types

The blocks in most internal-combustion engines are in-line designs or V designs. In-line
designs are arranged so that the cylinders stand upright in a single line over the crankshaft. In
a V design, two rows of cylinders are set at an angle to form a V. At the bottom of the V is the
crankshaft. In-line configurations of six or eight cylinders require long engine compartments
found more often in trucks than in cars. The V design allows the same number of cylinders to
fit into a shorter, although wider, space. Another engine design that fits into shorter, shallower
spaces is a horizontally opposed, or flat, arrangement in which the crankshaft lies between
two rows of cylinders.

Engines become more powerful, and use more fuel, as the size and number of cylinders
increase. Most modern vehicles in the United States have 4-, 6-, or 8-cylinder engines, but car
engines have been designed with 1, 2, 3, 5, 12, and more cylinders.

Diesel engines, common in large trucks or buses, are similar to gasoline internal-combustion
engines, but they have a different ignition system. Diesels compress air inside the cylinders
with greater force than a gasoline engine does, producing temperatures hot enough to ignite
the diesel fuel on contact. Some cars have rotary engines, also known as Wankel engines,
which have one or more elliptical chambers in which triangular-shaped rotors, instead of
pistons, rotate.

Electric motors have been used to power automobiles since the late 1800s. Electric power
supplied by batteries runs the motor, which rotates a driveshaft, the shaft that transmits
engine power to the axles. Commercial electric car models for specialized purposes were
available in the 1980s. General Motors Corporation introduced a mass-production all-electric
car in the mid-1990s.

Automobiles that combine two or more types of engines are called hybrids. A typical hybrid is
an electric motor with batteries that are recharged by a generator run by a small gas- or
diesel-powered engine. These hybrids are known as hybrid electric vehicles (HEVs). By relying
more on electricity and less on fuel combustion, HEVs have higher fuel efficiency and emit
fewer pollutants. Several automakers have experimented with hybrids. In 1997 Toyota Motor
Corporation became the first to mass-produce a hybrid vehicle, the Prius. It became available
in Japan in 1997 and in North America in 2000. The first hybrid available for sale in North
America, the Honda Insight, was offered by Honda Motor Co., Ltd., in 1999.

C Fuel Supply

The internal-combustion engine is powered by the burning of a precise mixture of liquefied
fuel and air in the cylinders’ combustion chambers. Fuel is stored in a tank until it is needed,
then pumped to a carburetor or, in newer cars, to a fuel-injection system.

The carburetor controls the mixture of gas and air that travels to the engine. It mixes fuel with
air at the head of a pipe, called the intake manifold, leading to the cylinders. A vacuum
created by the downward strokes of pistons draws air through the carburetor and intake
manifold. Inside the carburetor, the airflow transforms drops of fuel into a fine mist, or vapor.
The intake manifold delivers the fuel vapor to the cylinders, where it is ignited.

All new cars produced today are equipped with fuel injection systems instead of carburetors.
Fuel injectors spray carefully calibrated bursts of fuel mist into cylinders either at or near
openings to the combustion chambers. Since the exact quantity of gas needed is injected into
the cylinders, fuel injection is more precise, easier to adjust, and more consistent than a
carburetor, delivering better efficiency, gas mileage, engine responsiveness, and pollution
control. Fuel-injection systems vary widely, but most are operated or managed electronically.

High-performance automobiles are often fitted with air-compressing equipment that increases
an engine’s output. By increasing the air and fuel flow to the engine, these features produce
greater horsepower. Superchargers are compressors powered by the crankshaft.
Turbochargers are turbine-powered compressors run by pressurized exhaust gas.

D Exhaust System

The exhaust system carries exhaust gases from the engine’s combustion chamber to the
atmosphere and reduces, or muffles, engine noise. Exhaust gases leave the engine in a pipe,
traveling through a catalytic converter and a muffler before exiting through the tailpipe.

Chemical reactions inside the catalytic converter change most of the hazardous hydrocarbons
and carbon monoxide produced by the engine into water vapor and carbon dioxide.

The conventional muffler is an enclosed metal tube packed with sound-deadening material.
Most conventional mufflers are round or oval-shaped with an inlet and outlet pipe at either
end. Some contain partitions to help reduce engine noise.

Car manufacturers are experimenting with an electronic muffler, which uses sensors to
monitor the sound waves of the exhaust noise. The sound wave data are sent to a computer
that controls speakers near the tailpipe. The system generates sound waves 180 degrees out
of phase with the engine noise. The sound waves from the electronic muffler collide with the
exhaust sound waves and they cancel each other out, leaving only low-level heat to emerge
from the tailpipe.
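The cancellation principle behind the electronic muffler can be demonstrated in a few lines. The single sine wave below is an illustrative stand-in for exhaust noise, which in reality contains many frequencies and requires the adaptive processing described above.

```python
import math

samples = [i * 2 * math.pi / 100 for i in range(100)]
noise = [math.sin(t) for t in samples]
antinoise = [math.sin(t + math.pi) for t in samples]  # 180 degrees out of phase
residual = [n + a for n, a in zip(noise, antinoise)]

# The two waves cancel almost exactly at every sample.
assert max(abs(r) for r in residual) < 1e-12
```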

E Cooling and Heating System

Combustion inside an engine produces temperatures high enough to melt cast iron. A cooling
system conducts this heat away from the engine’s cylinders and radiates it into the air.

In most automobiles, a liquid coolant circulates through the engine. A pump sends the coolant
from the engine to a radiator, which transfers heat from the coolant to the air. In early
engines, the coolant was water. In most automobiles today, the coolant is a chemical solution
called antifreeze that has a higher boiling point and lower freezing point than water, making it
effective in temperature extremes. Some engines are air cooled, that is, they are designed so
a flow of air can reach metal fins that conduct heat away from the cylinders.

A second, smaller radiator is fitted to all modern cars. This unit uses engine heat to warm the
interior of the passenger compartment and supply heat to the windshield defroster.

The rotational force of the engine’s crankshaft turns other shafts and gears that eventually
cause the drive wheels to rotate. The various components that link the crankshaft to the drive
wheels make up the drivetrain. The major parts of the drivetrain include the transmission, one
or more driveshafts, differential gears, and axles.

A Transmission

The transmission, also known as the gearbox, transfers power from the engine to the
driveshaft. As the engine’s crankshaft rotates, combinations of transmission gears pass the
energy along to a driveshaft. The driveshaft causes axles to rotate and turn the wheels. By
using gears of different sizes, a transmission alters the rotational speed and torque of the
engine passed along to the driveshaft. Higher gears permit the car to travel faster, while low
gears provide more power for starting a car from a standstill and for climbing hills.
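The trade between speed and power can be sketched numerically. With a gear ratio of r:1, the driveshaft turns r times slower than the engine but, ignoring friction losses, carries r times the torque. The ratios and engine figures below are illustrative assumptions, not those of any real transmission.

```python
def driveshaft_output(engine_rpm, engine_torque, ratio):
    """Return (driveshaft rpm, driveshaft torque) for an ideal gear ratio."""
    return engine_rpm / ratio, engine_torque * ratio

low = driveshaft_output(engine_rpm=3000, engine_torque=200, ratio=3.0)
high = driveshaft_output(engine_rpm=3000, engine_torque=200, ratio=1.0)

assert low == (1000.0, 600.0)    # low gear: slower shaft, triple the torque
assert high == (3000.0, 200.0)   # high gear: full speed, unchanged torque
```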

The transmission usually is located just behind the engine, although some automobiles were
designed with a transmission mounted on the rear axle. There are three basic transmission
types: manual, automatic, and continuously variable.

A manual transmission has a gearbox from which the driver selects specific gears depending
on road speed and engine load. Gears are selected with a shift lever located on the floor next
to the driver or on the steering column. The driver presses on the clutch to disengage the
transmission from the engine to permit a change of gears. The clutch disk attaches to the
transmission’s input shaft. It presses against a circular plate attached to the engine’s flywheel.
When the driver presses down on the clutch pedal to shift gears, a mechanical lever called a
clutch fork and a device called a throwout bearing separate the two disks. Releasing the clutch
pedal presses the two disks together, transferring torque from the engine to the transmission.

An automatic transmission selects gears itself according to road conditions and the amount of
load on the engine. Instead of a manual clutch, automatic transmissions use a hydraulic
torque converter to transfer engine power to the transmission.

Instead of making distinct changes from one gear to the next, a continuously variable
transmission uses belts and pulleys to smoothly slide the gear ratio up or down. Continuously
variable transmissions appeared on machinery during the 19th century and on a few small-
engine automobiles as early as 1900. The transmission keeps the engine running at its most
efficient speed by more precisely matching the gear ratio to the situation. Commercial
applications have been limited to small engines.

B Front- and Rear-Wheel Drive

Depending on the vehicle’s design, engine power is transmitted by the transmission to the
front wheels, the rear wheels, or to all four wheels. The wheels receiving power are called
drive wheels: They propel the vehicle forward or backward. Most automobiles are either front-
wheel or rear-wheel drive. In some vehicles, four-wheel drive is an option the driver selects
for certain road conditions; others feature full-time, all-wheel drive.

The differential is a gear assembly in an axle that enables each powered wheel to turn at
different speeds when the vehicle makes a turn. The driveshaft connects the transmission’s
output shaft to a differential gear in the axle. Universal joints at both ends of the driveshaft
allow it to rotate as the axles move up and down over the road surface.
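A short calculation shows why the differential is necessary: in a turn, each wheel follows a circle of a different radius around the turn's center, so the outer wheel must spin faster than the inner one. The speed, turn radius, and track width below are illustrative assumptions.

```python
def wheel_speeds(vehicle_speed, turn_radius, track_width):
    """Speeds of the inner and outer wheels of one axle during a turn."""
    inner = vehicle_speed * (turn_radius - track_width / 2) / turn_radius
    outer = vehicle_speed * (turn_radius + track_width / 2) / turn_radius
    return inner, outer

inner, outer = wheel_speeds(vehicle_speed=10.0, turn_radius=20.0, track_width=1.5)
assert outer > inner                          # the outer wheel must turn faster
assert (inner + outer) / 2 == 10.0            # their average matches the vehicle speed
```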

In rear-wheel drive, the driveshaft runs under the car to a differential gear at the rear axle. In
front-wheel drive, the differential is on the front axle and the connections to the transmission
are much shorter. Four-wheel-drive vehicles have drive shafts and differentials for both axles.


Automobiles would deliver jolting rides, especially on unpaved roads, without a system of
shock absorbers and other devices to protect the auto body and passenger compartment from
severe bumps and bounces.

A Suspension System

The suspension system, part of the undercarriage of an automobile, contains springs that
move up and down to absorb bumps and vibrations. In one type of suspension system, a long
tube, or strut, has a shock absorber built into its center section. Shock absorbers control, or
dampen, the sudden loading and unloading of suspension springs to reduce wheel bounce and
the shock transferred from the road wheels to the body. One shock absorber is installed at
each wheel. Modern shock absorbers have a telescoping design and use oil, gas, and air, or a
combination to absorb energy.

Luxury sedans generally have a soft suspension for comfortable riding. Sports cars and sport-
utility vehicles have firmer suspensions to improve cornering ability and control over rough
terrain.

Older automobiles were equipped with one-piece front axles attached to the frame with
semielliptic leaf springs, much like the arrangement on horse-drawn buggies. Front wheels on
modern cars roll independently of each other on half-shafts instead of on a common axle. Each
wheel has its own axle and suspension supports, so the shock of one wheel hitting a bump is
not transferred across a common axle to the other wheel or the rest of the car. Many rear-axle
suspensions for automobiles and heavier vehicles use rigid axles with coil or leaf springs.
However, advanced passenger cars, luxury sedans, and sports cars feature independent rear-
wheel suspension systems.

An active suspension makes computer-controlled adjustments to the downward force on each
wheel as the vehicle corners or rides over uneven terrain. Sensors, a pump, and hydraulic cylinders,
all monitored and controlled by computer, enable the vehicle to lean into corners and
compensate for the dips and dives that accompany emergency stops and rapid acceleration.

B Wheels and Tires

Wheels support the vehicle’s weight and transfer torque to the tires from the drivetrain and
braking systems. Automobile wheels generally are made of steel or aluminum. Aluminum
wheels are lighter, more impact absorbent, and more expensive.

Pneumatic (air-filled) rubber tires, first patented in 1845, fit on the outside rims of the wheels.
Tires help smooth out the ride and provide the automobile’s only contact with the road, so
traction and strength are primary requirements. Tire treads come in several varieties to match
driving conditions.


A driver controls the automobile’s motion by keeping the wheels pointed in the desired
direction, and by stopping or slowing the speed at which the wheels rotate. These controls are
made possible by the steering and braking systems. In addition, the driver controls the
vehicle’s speed with the transmission and the gas pedal, which adjusts the amount of fuel fed
to the engine.
A Steering

Automobiles are steered by turning the front wheels, although a few automobile types have
all-wheel steering. Most steering systems link the front wheels together by means of a tie-rod.
The tie-rod ensures that the turning of one wheel is matched by a corresponding turn in the
other.

When a driver turns the steering wheel, the mechanical action rotates a steering shaft inside
the steering column. Depending on the steering mechanism, gears or other devices convert
the rotating motion of the steering wheel into a horizontal force that turns the wheels.

Manual steering relies only on the force exerted by the driver to turn the wheels. Conventional
power steering uses hydraulic pressure, operated by the pressure or movement of a liquid, to
augment that force, requiring less effort by the driver. Electric power steering uses an electric
motor instead of hydraulic pressure.

B Brakes

Brakes enable the driver to slow or stop the moving vehicle. The first automobile brakes were
much like those on horse-drawn wagons. By pulling a lever, the driver pressed a block of
wood, leather, or metal, known as the shoe, against the wheel rims. With sufficient pressure,
friction between the wheel and the brake shoe caused the vehicle to slow down or stop.
Another method was to use a lever to clamp a strap or brake shoes tightly around the
driveshaft.

A brake system with shoes that pressed against the inside of a drum fitted to the wheel, called
drum brakes, appeared in 1903. Since the drum and wheel rotate together, friction applied by
the shoes inside the drum slowed or stopped the wheel. Cotton and leather shoe coverings, or
linings, were replaced by asbestos after 1908, greatly extending the life of the brake
mechanism. Hydraulically assisted braking was introduced in the 1920s. Disk brakes, in which
friction pads clamp down on both sides of a disk attached to the axle, were in use by the
mid-20th century.

An antilock braking system (ABS) uses a computer, sensors, and a hydraulic pump to stop the
automobile’s forward motion without locking the wheels and putting the vehicle into a skid.
Introduced in the 1980s, ABS helps the driver maintain better control over the car during
emergency stops and while braking on slippery surfaces.
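The decision at the heart of an antilock system can be sketched as follows. This is an illustrative simplification, not a real ABS algorithm: actual controllers cycle brake pressure many times per second and estimate vehicle speed from all four wheels.

```python
def abs_command(vehicle_speed, wheel_speed, slip_threshold=0.2):
    """Release brake pressure on a wheel that is slipping badly, else apply it."""
    if vehicle_speed <= 0:
        return "hold"
    # Slip compares how fast the wheel turns with how fast the vehicle moves.
    slip = (vehicle_speed - wheel_speed) / vehicle_speed
    return "release" if slip > slip_threshold else "apply"

assert abs_command(vehicle_speed=30.0, wheel_speed=29.0) == "apply"    # normal braking
assert abs_command(vehicle_speed=30.0, wheel_speed=10.0) == "release"  # wheel locking up
```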

Automobiles are also equipped with a hand-operated brake used for emergencies and to
securely park the car, especially on uneven terrain. Pulling on a lever or pushing down on a
foot pedal sets the brake.

The automobile depends on electricity for fuel ignition, headlights, turn signals, horn, radio,
windshield wipers, and other accessories. A battery and an alternator supply electricity. The
battery stores electricity for starting the car. The alternator generates electric current while
the engine is running, recharging the battery and powering the rest of the car’s electrical
system.

Early automotive electrical systems ran on 6 volts, but 12 volts became standard after World
War II (1939-1945) to operate the growing number of electrical accessories. Eventually, 24-
or 48-volt systems may become the standard as more computers and electronics are built into
automobiles.

A Ignition System

The ignition system supplies high-voltage current to spark plugs to ignite fuel vapor in the
cylinders. There are many variations, but all gasoline-engine ignition systems draw electric
current from the battery, significantly increase the current’s voltage, then deliver it to spark
plugs that project into the combustion chambers. An electric arc between two electrodes at
the bottom of the spark plug ignites the fuel vapor.

In older vehicles, a distributor, which is an electrical switching device, routes high-voltage

current to the spark plugs. The distributor’s housing contains a switch called the breaker
points. A rotating shaft in the distributor causes the switch to open and close, interrupting the
supply of low-voltage current to a transformer called a coil. The coil uses electromagnetic
induction (see Electricity: Electromagnetism) to convert interruptions of the 12-volt current
into surges of 20,000 volts or more. This high-voltage current passes back to the distributor,
which mechanically routes it through wires to spark plugs, producing a spark that ignites the
gas vapor in the cylinders. A condenser absorbs excess current and protects the breaker
points from damage by the high-voltage surge. The distributor and other devices control the
timing of the spark-plug discharges.
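The coil's step-up can be sketched as transformer arithmetic: interrupting the primary current induces a brief surge, which the ratio of secondary to primary turns multiplies into the tens of thousands of volts a spark plug needs. The surge voltage and turns counts below are illustrative assumptions.

```python
def secondary_voltage(primary_surge, primary_turns, secondary_turns):
    """Ideal transformer: output voltage scales with the turns ratio."""
    return primary_surge * secondary_turns / primary_turns

# A brief surge of a few hundred volts, stepped up by a 100:1 turns ratio.
v = secondary_voltage(primary_surge=250.0, primary_turns=200, secondary_turns=20000)
assert v == 25000.0  # enough to arc across the spark plug gap
```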

In modern ignition systems, the distributor, coil, points, and condenser have been replaced by
solid-state electronics controlled by a computer. A computer controls the ignition system and
adjusts it to provide maximum efficiency in a variety of driving conditions.


Manufacturers continue to build lighter vehicles with improved structural rigidity and ability to
protect the driver and passengers during collisions.

Bumpers evolved as rails or bars to protect the front and rear of the car’s body from damage
in minor collisions. Over the years, bumpers became stylish and, in some cases, not strong
enough to survive minor collisions without expensive repairs. Eventually, government
regulations required bumpers designed to withstand low-speed collisions with less damage.
Some bumpers can withstand 4-km/h (2.5-mph) collisions with no damage, while others can
withstand 8-km/h (5-mph) collisions with no damage.

Modern vehicles feature crumple zones, portions of the automobile designed to absorb forces
that otherwise would be transmitted to the passenger compartment. Passenger compartments
on many vehicles also have reinforced roll bar structures in the roof, in case the vehicle
overturns, and protective beams in the doors to help protect passengers from side impacts.

Seat belt and upper-body restraints that relax to permit comfort but tighten automatically
during an impact are now common. Some car models are equipped with shoulder-restraint
belts that slide into position automatically when the car’s doors close.

An air bag is a high-speed inflation device hidden in the hub of the steering wheel or in the
dash on the passenger’s side. Some automobiles have side-impact air bags, located in doors
or seats. At impact, the bag inflates almost instantaneously. The inflated bag creates a
cushion between the occupant and the vehicle’s interior. Air bags first appeared in the mid-
1970s, available as an optional accessory. Today they are installed on all new passenger cars
sold in the United States.

Air bags inflate with great force, which occasionally endangers a child or infant passenger.
Some newer automobile models are equipped with switches to disable the passenger-side air
bags when a child or infant is traveling in the passenger seat. Automakers continue to
research ways to make air-bag systems less dangerous for frail and small passengers, yet
effective in collisions.


The history of the automobile actually began about 4,000 years ago when the first wheel was
used for transportation in India. In the early 16th century the Portuguese arrived in China and
the interaction of the two cultures led to a variety of new technologies, including the creation
of a wheel that turned under its own power. By the 1600s small steam-powered engine
models had been developed, but it was another century before a full-sized engine-powered
vehicle was created.

In 1769 French Army officer Captain Nicolas-Joseph Cugnot built what has been called the first
automobile. Cugnot’s three-wheeled, steam-powered vehicle carried four persons. Designed to
move artillery pieces, it had a top speed of a little more than 3.2 km/h (2 mph) and had to
stop every 20 minutes to build up a fresh head of steam.

As early as 1801 successful but very heavy steam automobiles were introduced in England.
Laws barred them from public roads and forced their owners to run them like trains on private
tracks. In 1802 a steam-powered coach designed by British engineer Richard Trevithick
journeyed more than 160 km (100 mi) from Cornwall to London. Steam power caught the
attention of other vehicle builders. In 1805 American inventor Oliver Evans built a steam-
powered vehicle in Philadelphia, Pennsylvania. French engineer Onésiphore Pecqueur built one in 1828.

British inventor Walter Hancock built a series of steam carriages in the mid-1830s that were
used for the first omnibus service in London. By the mid-1800s England had an extensive
network of steam coach lines. Horse-drawn stagecoach companies and the new railroad
companies pressured the British Parliament to approve heavy tolls on steam-powered road
vehicles. The tolls quickly drove the steam coach operators out of business.

During the early 20th century steam cars were popular in the United States. Most famous was
the Stanley Steamer, built by American twin brothers Freelan and Francis Stanley. A Stanley
Steamer established a world land speed record in 1906 of 205.44 km/h (127.66 mph).
Manufacturers produced about 125 models of steam-powered automobiles, including the
Stanley, until 1932.

A Internal-Combustion Engine
Development of lighter steam cars during the 19th century coincided with major developments
in engines that ran on gasoline or other fuels. Because the newer engines burned fuel in
cylinders inside the engine, they were called internal-combustion engines.

In 1860 French inventor Jean-Joseph-Étienne Lenoir patented a one-cylinder engine that used
kerosene for fuel. Two years later, a vehicle powered by Lenoir’s engine reached a top speed
of about 6.4 km/h (about 4 mph). In 1864 Austrian inventor Siegfried Marcus built and drove
a carriage propelled by a two-cylinder gasoline engine. American George Brayton patented an
internal-combustion engine that was displayed at the 1876 Centennial Exhibition in
Philadelphia, Pennsylvania.

In 1876 German engineer Nikolaus August Otto built a four-stroke gas engine, the most direct
ancestor to today’s automobile engines. In a four-stroke engine the pistons move down to
draw fuel vapor into the cylinders during stroke one; in stroke two, the pistons move up to
compress the vapor; in stroke three a spark ignites the vapor and the hot expanding gases
push the pistons down the cylinders; and in stroke four the pistons move up to push exhaust
gases out of the cylinders. Engines with two or more cylinders are designed so combustion
occurs in one cylinder after another instead of in all cylinders at once. Two-stroke engines
accomplish the same steps, but less efficiently and with more exhaust emissions.
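The staggering of strokes across cylinders can be sketched as follows. The four-cylinder layout and one-stroke offsets are illustrative assumptions, not the firing order of any real engine.

```python
# Minimal model of the four-stroke cycle: each cylinder steps through
# intake, compression, power, and exhaust, offset so that combustion
# occurs in one cylinder after another rather than all at once.

STROKES = ["intake", "compression", "power", "exhaust"]

def stroke_of(cylinder_offset, step):
    """Which stroke a cylinder is on at a given crank step (0-3)."""
    return STROKES[(step + cylinder_offset) % 4]

# A hypothetical four-cylinder engine, each cylinder one stroke apart:
# at every step, exactly one cylinder is on its power stroke.
for step in range(4):
    print(step, [stroke_of(offset, step) for offset in range(4)])
```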

Automobile manufacturing began in earnest in Europe by the late 1880s. German engineer
Gottlieb Daimler and German inventor Wilhelm Maybach mounted a gasoline-powered engine
onto a bicycle, creating a motorcycle, in 1885. In 1887 they manufactured their first car,
which included a steering tiller and a four-speed gearbox. Another German engineer, Karl
Benz, produced his first gasoline car in 1886. In 1890 Daimler and Maybach started a
successful car manufacturing company, Daimler-Motoren-Gesellschaft, which eventually merged
with Benz’s manufacturing firm in 1926 to create Daimler-Benz. The joint company makes cars
today under the Mercedes-Benz nameplate (see DaimlerChrysler AG).

In France, a company called Panhard-Levassor began making cars in 1894 using Daimler’s
patents. Instead of installing the engine under the seats, as other car designers had done, the
company introduced the design of a front-mounted engine under the hood. Panhard-Levassor
also introduced a clutch and gears, and separate construction of the chassis, or underlying
structure of the car, and the car body. The company’s first model was a gasoline-powered
buggy steered by a tiller.

French bicycle manufacturer Armand Peugeot saw the Panhard-Levassor car and designed an
automobile using a similar Daimler engine. In 1891 this first Peugeot automobile paced a
1,046-km (650-mi) professional bicycle race between Paris and Brest. Other French
automobile manufacturers opened shop in the late 1800s, including Renault. In Italy, Fiat
(Fabbrica Italiana Automobili Torino) began building cars in 1899.

American automobile builders were not far behind. Brothers Charles Edgar Duryea and James
Frank Duryea built several gas-powered vehicles between 1893 and 1895. The first Duryea, a
one-cylinder, four-horsepower model, looked much like a Panhard-Levassor model. In 1893
American industrialist Henry Ford built an internal-combustion engine from plans he saw in a
magazine. In 1896 he used an engine to power a vehicle mounted on bicycle wheels and
steered by a tiller.

B Early Electric Cars

For a few decades beginning in the late 1800s, electric cars enjoyed great popularity because they were
quiet and ran at slow speeds that were less likely to scare horses and people. By 1899 an
electric car designed and driven by Belgian inventor Camille Jenatzy set a land speed record
of 105.88 km/h (65.79 mph).

Early electric cars featured a large bank of storage batteries under the hood. Heavy cables
connected the batteries to a motor between the front and rear axles. Most electric cars had
top speeds of 48 km/h (30 mph), but could go only 80 km (50 mi) before their batteries
needed recharging. Electric automobiles were manufactured in quantity in the United States
until 1930.


For many years after the introduction of automobiles, three kinds of power sources were in
common use: steam engines, gasoline engines, and electric motors. In 1900 more than 2,300
automobiles were registered in New York City; Boston, Massachusetts; and Chicago, Illinois.
Of these, 1,170 were steam cars, 800 were electric cars, and only 400 were gasoline cars.
Gasoline-powered engines eventually became the nearly universal choice for automobiles
because they allowed longer trips and faster speeds than engines powered by steam or
electricity.

But development of gasoline cars in the early 1900s was hindered in the United States by legal
battles over a patent obtained by New York lawyer George B. Selden. Selden saw a gasoline
engine at the Philadelphia Centennial Exposition in 1876. He then designed a similar one and
obtained a broad patent that for many years was interpreted to apply to all gasoline engines
for automobiles. Although Selden did not manufacture engines or automobiles, he collected
royalties from those who did.

Henry Ford believed Selden’s patent was invalid. Selden sued when Ford refused to pay
royalties for Ford-manufactured engines. After eight years of court battles, the courts ruled in
1911 that Selden’s patent applied only to two-stroke engines. Ford and most other
manufacturers were using four-stroke engines, so Selden could not charge them royalties.

Improvements in the operating and riding qualities of gasoline automobiles developed quickly
after 1900. The 1902 Locomobile was the first American car with a four-cylinder, water-
cooled, front-mounted gasoline engine, very similar in design to most cars today. Built-in
baggage compartments appeared in 1906, along with weather-resistant tops and side curtains.
An electric self-starter was introduced in 1911 to replace the hand crank used to start the
engine turning. Electric headlights were introduced at about the same time.

Most automobiles at the turn of the 20th century appeared more or less like horseless
carriages. In 1906 gasoline-powered cars were produced that had a style all their own. In
these new models, a hood covered the front-mounted engine. Two kerosene or acetylene
lamps mounted to the front served as headlights. Cars had fenders that covered the wheels
and step-up platforms called running boards, which helped passengers get in and out of the
vehicle. The passenger compartment was behind the engine. Although drivers of horse-drawn
vehicles usually sat on the right, automotive steering wheels were on the left in the United
States.

In 1903 Henry Ford incorporated the Ford Motor Company, which introduced its first
automobile, the Model A, in that same year. It closely resembled the 1903 Cadillac, which was
hardly surprising since Ford had designed cars the previous year for the Cadillac Motor Car
Company. Ford’s company rolled out new car models each year, and each model was named
with a letter of the alphabet. By 1907, when models R and S appeared, Ford’s share of the
domestic automobile market had soared to 35 percent.

Ford’s famous Model T debuted in 1908 but was called a 1909 Ford. Ford built 17,771 Model
T’s and offered nine body styles. Popularly known as the Tin Lizzy, the Model T became one of
the biggest-selling automobiles of all time. Ford sold more than 15 million before stopping
production of the model in 1927. The company’s innovative assembly-line method of building
the cars was widely adopted in the automobile industry.

By 1920 more than 8 million Americans owned cars. Major reasons for the surge in automobile
ownership were Ford’s Model T, the assembly-line method of building it, and the affordability
of cars for the ordinary wage earner.

Improvements in engine-powered cars during the 1920s contributed to their popularity:
synchromesh transmissions for easier gear shifting; four-wheel hydraulic brake systems;
improved carburetors; shatterproof glass; balloon tires; heaters; and mechanically operated
windshield wipers.

From 1930 to 1937, automobile engines and bodies became large and luxurious. Many 12-
and 16-cylinder cars were built. Independent front suspension, which made the big cars more
comfortable, appeared in 1933. Also introduced during the 1930s were stronger, more reliable
braking systems, and higher-compression engines, which developed more horsepower.
Mercedes introduced the world’s first diesel car in 1936. Automobiles on both sides of the
Atlantic were styled with graceful proportions, long hoods, and pontoon-shaped fenders.
Creative artistry merged with industrial design to produce appealing, aerodynamic designs.

Some of the first vehicles to fully incorporate the fender into the bodywork came along just
after World War II, but the majority of designs still had separate fenders with pontoon shapes
holding headlight assemblies. Three companies, Ford, Nash, and Hudson Motor Car Company,
offered postwar designs that merged fenders into the bodywork. The 1949 Ford was a
landmark in this respect, and its new styling was so well accepted the car continued in
production virtually unchanged for three years, selling more than 3 million. During the 1940s,
sealed-beam headlights, tubeless tires, and the automatic transmission were introduced.

Two schools of styling emerged in the 1950s, one on each side of the Atlantic. The Europeans
continued to produce small, light cars weighing less than 1,300 kg (2,800 lb). European sports
cars of that era featured hand-fashioned aluminum bodies over a steel chassis and framework.

In America, automobile designers borrowed features for their cars that were normally found
on aircraft and ships, including tailfins and portholes. Automobiles were produced that had
more space, more power, and smoother riding capability. Introduction of power steering and
power brakes made bigger cars easier to handle. The Buick Motor Car Company, Olds Motor
Vehicle Company (Oldsmobile), Cadillac Automobile Company, and Ford all built enormous
cars, some weighing as much as 2,495 kg (5,500 lb).

The first import by German manufacturer Volkswagen AG, advertised as the Beetle, arrived in
the United States in 1949. Only two were sold that year, but American consumers soon began
buying the Beetle and other small imports by the thousands. That prompted a downsizing of
some American-made vehicles. The first American car called a compact was the Nash
Rambler. Introduced in 1950, it did not attract buyers on a large scale until 1958. More
compacts, smaller in overall size than a standard car but with virtually the same interior body
dimensions, emerged from the factories of many major manufacturers. The first Japanese
imports, 16 compact trucks, arrived in the United States in 1956.

In the 1950s new automotive features were introduced, including air conditioning and
electrically operated car windows and seat adjusters. Manufacturers changed from the 6-volt
to the 12-volt ignition system, which gave better engine performance and more reliable
operation of the growing number of electrical accessories.

By 1960 sales of foreign and domestic compacts accounted for about one-third of all
passenger cars sold in the United States. American cars were built smaller, but with increased
engine size and horsepower. Heating and ventilating systems became standard equipment on
even the least expensive models. Automatic transmissions, power brakes, and power steering
became widespread. Styling sometimes prevailed over practicality—some cars were built in
which the engines had to be lifted to allow simple service operations, like changing the spark
plugs. Back seats were designed with no legroom.

In the 1970s American manufacturers continued to offer smaller, lighter models in addition to
the bigger sedans that led their product lines, but Japanese and European compacts continued
to sell well. Catalytic converters were introduced to help reduce exhaust emissions.

During this period, the auto industry was hurt by the energy crisis, created when the
Organization of Petroleum Exporting Countries (OPEC), a cartel of oil-producing countries, cut
back on sales to other countries. The price of crude oil skyrocketed, driving up the price of
gasoline. Large cars were getting as little as 8 miles per gallon (mpg), while imported
compacts were getting as much as 35 mpg. More buyers chose the smaller, more fuel-efficient
imports.

Digital speedometers and electronic prompts to service parts of the vehicle appeared in the
1980s. Japanese manufacturers opened plants in the United States. At the same time, sporty
cars and family minivans surged in popularity.

Advances in automobile technology in the 1980s included better engine control and the use of
innovative types of fuel. In 1981 Bayerische Motoren Werke AG (BMW) introduced an on-
board computer to monitor engine performance. A solar-powered vehicle, SunRaycer, traveled
3,000 km (1,864 mi) in Australia in six days.


Pollution-control laws adopted at the beginning of the 1990s in some US states and in Europe
called for automobiles that delivered better gas mileage with lower emissions. The
California Air Resources Board required companies with the largest market shares to begin
selling vehicles that were pollution free—in other words, electric. In 1996 General Motors
became the first to begin selling an all-electric car, the EV1, to California buyers. The all-
electric cars introduced so far have been limited by low range, long recharges, and weak
consumer interest.

Engines that run on hydrogen have been tested. Hydrogen combustion produces only a trace
of harmful emissions, no carbon dioxide, and a water-vapor by-product. However, technical
problems related to the gas’s density and flammability remain to be solved.

Diesel engines burn fuel more efficiently and produce fewer pollutants, but they are noisy.
Popular in trucks and heavy vehicles, diesel engines are only a small portion of the automobile
market. A redesigned, quieter diesel engine introduced by Volkswagen in 1996 may pave the
way for more diesels, and less pollution, in passenger cars.

While some developers searched for additional alternatives, others investigated ways to
combine electricity with liquid fuels to produce low-emissions power systems. Two automobiles
with such hybrid engines, the Toyota Prius and the Honda Insight, became available in the late
1990s. Prius hit automobile showrooms in Japan in 1997, selling 30,000 models in its first two
years of production. The Prius became available for sale in North America in 2000. The Honda
Insight debuted in North America in late 1999. Both vehicles, known as hybrid electric vehicles
(HEVs), promised to double the fuel efficiency of conventional gasoline-powered cars while
significantly reducing toxic emissions.

Computer control of automobile systems increased dramatically during the 1990s. The central
processing unit (CPU) in modern engines manages overall engine performance.
Microprocessors regulating other systems share data with the CPU. Computers manage fuel
and air mixture ratios, ignition timing, and exhaust-emission levels. They adjust the antilock
braking and traction control systems. In many models, computers also control the air
conditioning and heating, the sound system, and the information displayed in the vehicle’s
dashboard.

Expanded use of computer technology, development of stronger and lighter materials, and
research on pollution control will produce better, “smarter” automobiles. In the 1980s the
notion that a car would “talk” to its driver was science fiction; by the 1990s it had become
reality.

Onboard navigation was one of the new automotive technologies in the 1990s. By using the
satellite-aided global positioning system (GPS), a computer in the automobile can pinpoint the
vehicle’s location within a few meters. The onboard navigation system uses an electronic
compass, digitized maps, and a display screen showing where the vehicle is relative to the
destination the driver wants to reach. After being told the destination, the computer locates it
and directs the driver to it, offering alternative routes if needed.

Some cars now come equipped with GPS locator beacons, enabling a GPS system operator to
locate the vehicle, map its location, and if necessary, direct repair or emergency workers to
the scene.

Cars equipped with computers and cellular telephones can link to the Internet to obtain
constantly updated traffic reports, weather information, route directions, and other data.
Future built-in computer systems may be used to automatically obtain business information
over the Internet and manage personal affairs while the vehicle’s owner is driving.

During the 1980s and 1990s, manufacturers trimmed 450 kg (1,000 lb) from the weight of the
typical car by making cars smaller. Less weight, coupled with more efficient engines, doubled
the gas mileage obtained by the average new car between 1974 and 1995. Further reductions
in vehicle size are not practical, so the emphasis has shifted to using lighter materials, such as
plastics, aluminum alloys, and carbon composites, in the engine and the rest of the vehicle.

Looking ahead, engineers are devising ways to reduce driver errors and poor driving habits.
Systems already exist in some locales to prevent intoxicated drivers from starting their
vehicles. The technology may be expanded to new vehicles. Anticollision systems with sensors
and warning signals are being developed. In some, the car’s brakes automatically slow the
vehicle if it is following another vehicle too closely. New infrared sensors or radar systems may
warn drivers when another vehicle is in their “blind spot.”

Catalytic converters work only when they are warm, so most of a car’s exhaust pollution is
emitted in the first few minutes of operation. Engineers are working on ways to keep the
converters warm for longer periods between drives, or to heat them more rapidly.


Gear, toothed wheel or cylinder used to transmit rotary or reciprocating motion from one part
of a machine to another. Two or more gears, transmitting motion from one shaft to another,
constitute a gear train. At one time various mechanisms were collectively called gearing. Now,
however, the word gearing is used only to describe systems of wheels or cylinders with
meshing teeth. Gearing is chiefly used to transmit rotating motion, but can, with suitably
designed gears and flat-toothed sectors, be employed to transform reciprocating motion into
rotating motion, and vice versa.


The simplest gear is the spur gear, a wheel with teeth cut across its edge parallel to the axis.
Spur gears transmit rotating motion between two shafts or other parts with parallel axes. In
simple spur gearing, the driven shaft revolves in the opposite direction to the driving shaft. If
rotation in the same direction is desired, an idler gear is placed between the driving gear and
the driven gear. The idler revolves in the opposite direction to the driving gear and therefore
turns the driven gear in the same direction as the driving gear. In any form of gearing the
speed of the driven shaft depends on the number of teeth in each gear. A gear with 10 teeth
driving a gear with 20 teeth will revolve twice as fast as the gear it is driving, and a 20-tooth
gear driving a 10-tooth gear will revolve at half the speed. By using a train of several gears,
the ratio of driving to driven speed may be varied within wide limits.
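The tooth-count rule above amounts to a simple formula: driven speed = driving speed × (driving teeth ÷ driven teeth), with ratios multiplying through a train. A short Python sketch:

```python
# Speed relations in gearing: the driven gear's speed equals the
# driving gear's speed times the ratio of driving to driven teeth.

def driven_speed(driving_rpm, driving_teeth, driven_teeth):
    return driving_rpm * driving_teeth / driven_teeth

# A 10-tooth gear driving a 20-tooth gear: the driver turns twice
# as fast as the gear it drives.
print(driven_speed(1000, 10, 20))  # 500.0

# A 20-tooth gear driving a 10-tooth gear: the driver turns at half
# the speed of the driven gear.
print(driven_speed(1000, 20, 10))  # 2000.0

# In a gear train the ratios of successive pairs multiply, so the
# overall ratio can be varied within wide limits.
def train_speed(input_rpm, pairs):
    rpm = input_rpm
    for driving_teeth, driven_teeth in pairs:
        rpm = driven_speed(rpm, driving_teeth, driven_teeth)
    return rpm

print(train_speed(1000, [(10, 20), (10, 20)]))  # 250.0
```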

Internal, or annular, gears are variations of the spur gear in which the teeth are cut on the
inside of a ring or flanged wheel rather than on the outside. Internal gears usually drive or are
driven by a pinion, a small gear with few teeth. A rack, a flat, toothed bar that moves in a
straight line, operates like a gear wheel with an infinite radius and can be used to transform
the rotation of a pinion to reciprocating motion, or vice versa.

Bevel gears are employed to transmit rotation between shafts that do not have parallel axes.
These gears have cone-shaped bodies and straight teeth. When the angle between the
rotating shafts is 90°, the bevel gears used are called miter gears.


Helical gears have teeth that are not parallel to the axis of the shaft but are spiraled around the shaft
in the form of a helix. Such gears are suitable for heavy loads because the gear teeth come
together at an acute angle rather than at 90° as in spur gearing. Simple helical gearing has
the disadvantage of producing a thrust that tends to move the gears along their respective
shafts. This thrust can be avoided by using double helical, or herringbone, gears, which have
V-shaped teeth composed of half a right-handed helical tooth and half a left-handed helical
tooth. Hypoid gears are helical bevel gears employed when the axes of the two shafts are
perpendicular but do not intersect. One of the most common uses of hypoid gearing is to
connect the drive shaft and the rear axle in automobiles. Helical gearing used to transmit
rotation between shafts that are not parallel is often incorrectly called spiral gearing.

Another variation of helical gearing is provided by the worm gear, also called the screw gear.
A worm gear is a long, thin cylinder that has one or more continuous helical teeth that mesh
with a helical gear. Worm gears differ from helical gears in that the teeth of the worm slide
across the teeth of the driven gear instead of exerting a direct rolling pressure. Worm gears
are used chiefly to transmit rotation, with a large reduction in speed, from one shaft to
another at a 90° angle.
Clocks and Watches

Clocks and Watches, devices used to measure or indicate the passage of time. A clock, which
is larger than a watch, is usually intended to be kept in one place; a watch is designed to be
carried or worn. Both types of timepieces require a source of power and a means of
transmitting and controlling it, as well as indicators to register the lapse of time units.

In a clock, the source of power may be produced by weight, a mainspring, or an electric
current. Except in electric or electronic clocks, periodic adjustments, such as lifting the weight
or tightening the spring, are needed. The motive force generated by the power source in a
mechanical clock is transmitted by a gear train and regulated by a pendulum or a balance
wheel. In such a clock, the time may be reported audibly by the striking of a gong or chime
and is registered visually by the rotation of wheels bearing numerals or by the position of
hands on a dial. In electric or electronic clocks, time may be shown by a display of numbers.

A mechanical watch uses a coiled spring as its power source. As in spring-powered clocks, the
watch conserves energy by means of a gear train, with a balance wheel regulating the motive
force. In self-winding watches, the mainspring is tightened automatically by means of a weight
on a rotor that responds to the arm movements of the wearer.


In the electric clocks used in homes today, a small motor runs in unison with the power-
station generator, which is regulated to deliver an alternating current of precisely 60 cycles
per second. Electric currents may also be used to keep the movements of several “slave”
clocks synchronized with the pendulum in a master clock.

The quartz-crystal clock developed in 1929 for precision timekeeping employs a ring of quartz
that is connected to an electrical circuit and made to oscillate at a frequency between 10,000
and 100,000 hertz (cycles per second). The high-frequency oscillation is converted to an alternating
current, reduced to a frequency more convenient for time measurement, and thus made to
drive the motor of a synchronous clock or a digital display. The maximum error of the most
accurate quartz-crystal clocks is plus or minus one second in ten years.
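The reduction to a convenient frequency is done by a divider chain that repeatedly halves the crystal's oscillation. A minimal sketch, assuming the common 32,768 Hz watch-crystal frequency (2^15, within the 10,000–100,000 hertz range given above):

```python
# Frequency division in a quartz timepiece: the crystal's oscillation
# is halved repeatedly until one pulse per second remains.
# 32,768 Hz is a common watch-crystal frequency assumed for illustration.

crystal_hz = 32768  # 2**15, so 15 halvings give exactly 1 Hz

freq = crystal_hz
stages = 0
while freq > 1:
    freq //= 2   # one flip-flop divider stage halves the frequency
    stages += 1

print(stages, freq)  # 15 1 -- fifteen stages down to 1 pulse per second
```

Choosing a power-of-two crystal frequency is what makes the chain come out to exactly 1 Hz with simple halving stages.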

The electric or electronic watch is powered by a small battery that functions for about one year
without replacement. The battery may drive the balance wheel of an otherwise mechanical
watch, or it may be used to drive the oscillations of either a small tuning fork or a quartz
crystal.


Carefully constructed mechanical timepieces known as chronometers are precision devices
used by navigators in the determination of their longitude at sea and by astronomers and
jewelers for calibrating measuring devices. The first successful chronometer was constructed
in 1761 by English horologist John Harrison. These portable instruments are mounted on a box
on gimbals so as to maintain the delicate movements in a level position. The modern wrist
chronometer is a precision watch regulated in different positions and at various temperatures
and certified by testing bureaus in Switzerland.

Another precision timekeeper is the chronograph, which not only provides accurate time but
also registers elapsed time in fractions of a second. Various forms of chronographs exist,
including the telemeter, which measures the distance of an object from the observer; the
tachometer, which measures speed of rotation; the pulsometer, which determines pulse rate;
and the production counter, which indicates the number of products made in a given time. The
timer, or stopwatch, a form of chronograph used in athletic contests, shows elapsed time
without providing the time of day.

The most precise timekeeping devices are atomic clocks. Their uses include measuring the
rotation of the earth, which may vary by 4 to 5 milliseconds per day, and aiding navigational
systems such as the global positioning system in computing distances. Atomic clocks are
tuned to the frequency of the electromagnetic waves that are emitted or absorbed when
certain atoms or molecules make the transition between two closely spaced, or hyperfine,
energy states. Because the frequency of these waves is unaffected by external forces, the
corresponding period of the waves can be used as a standard to define time intervals.

The cesium-atom clock is used to define the second, the basic unit of time of the International
System of Units. In this clock, cesium-133 atoms in one hyperfine energy state are subjected
to microwave radiation that is near the resonant frequency of the transition to another
hyperfine energy state. The microwave frequency is adjusted, and when the correct frequency
is reached, many atoms make the transition to the new energy state. The frequency of the
microwave radiation is then used to determine the period of the microwave, or the time
interval between wave crests. The second is defined as the duration of 9,192,631,770 periods
of radiation. The cesium-atom clock is very accurate and remains stable over long periods of
time. The most stable cesium-atom clocks have an error of about plus or minus one second in
one million years.
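The arithmetic behind these figures is easy to check. This sketch derives the period of one cesium oscillation from the definition of the second, and the fractional stability implied by an error of one second in one million years:

```python
# The second is defined as 9,192,631,770 periods of the radiation from
# the cesium-133 hyperfine transition; each period is the reciprocal
# of that frequency.

CESIUM_HZ = 9_192_631_770  # periods of the transition radiation per second

period_s = 1 / CESIUM_HZ   # duration of one microwave period
print(period_s)            # about 1.09e-10 seconds

# "One second in one million years" as a fractional (dimensionless) error:
seconds_per_myr = 1_000_000 * 365.25 * 24 * 3600
fractional_error = 1 / seconds_per_myr
print(fractional_error)    # about 3.2e-14
```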

The rubidium clock uses the transition of the rubidium-87 atom between two hyperfine energy
states. It employs the same basic principle as the cesium-atom clock. The rubidium atoms,
however, are first forced to change their hyperfine energy state and are then subjected to
microwave radiation to return them to their original state. When many atoms return to their
original state, the correct transition frequency has been reached and the period of the wave
can be used to measure time. Rubidium clocks are not as stable or as accurate as cesium-
atom clocks, but they are more compact and less expensive.

The hydrogen clock and the ammonia clock rely on the maser principle. In a hydrogen clock, a
focused magnetic field selects hydrogen atoms in a specific hyperfine energy state. These
atoms are forced to change to a lower energy state. When many atoms make the transition,
they begin to oscillate between the two states, emitting energy in the form of an
electromagnetic wave. The period of this emitted wave is used to measure time. The hydrogen
clock is very stable for several hours at a time.


Throughout history, time has been measured by the movement of the earth relative to the sun
and stars. The earliest type of timekeeper, dating from as far back as 3500 BC, was the
shadow clock, or gnomon, a vertical stick or obelisk that casts a shadow. An Egyptian shadow
clock of the 8th century BC is still in existence. The first hemispherical sundial was described
about the 3rd century BC by Chaldean astronomer Berossus. Ancient methods of measuring
hours in the absence of sunlight included the notched candle and the Chinese practice of
burning a knotted rope and noting the length of time required for the fire to travel from one
knot to the next. Devices almost as old as the shadow clock and sundial include the hourglass,
in which the flow of sand is used to measure time intervals, and the water clock, or clepsydra,
in which the flow of water indicates passage of time. Clepsydras became more complicated,
even to the inclusion of gearing in about 270 BC by Greek inventor Ctesibius of Alexandria.
Eventually, a weight falling under the force of gravity was substituted for the flow of water in
time devices, anticipating the mechanical clock.

A The Mechanical Clock

The historical origin of the mechanical clock is obscure. The first recorded examples are found
in the 14th century. Until that time, a time-measuring instrument was known as a horologium,
or hour teller. The name clock, which originally meant “bell,” was first applied in the present
sense to the huge, mechanical time indicators installed in bell towers in the late Middle Ages.

Clockworks were initially heavy, cumbersome devices. A clock built in the 14th century by
Henry De Vick of Württemberg for the royal palace (now the Palais de Justice) in Paris was
powered by a 227-kg (500-lb) weight that descended a distance of 9.8 m (32 ft). The
apparatus for controlling its rate of fall was crude and the clock inaccurate. Clocks of that
period had dials with only one hand, which indicated the nearest quarter hour.

B The Pendulum

A series of inventions in the 17th and 18th centuries increased the accuracy of clockworks and
reduced the weight and bulk of the mechanisms. Late in the 16th century, Galileo had described
the property of a pendulum known as isochronism: the period of its swing is constant. In 1657
Dutch physicist Christiaan Huygens showed how a pendulum could be used to regulate a clock.
Ten years later English physicist Robert Hooke invented an escapement, which permitted the
use in clocks of a pendulum with a small arc of oscillation. British clockmaker George Graham
improved the escapement, and John Harrison developed a means of compensating for
variations in the length of a pendulum resulting from changes in temperature.
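Isochronism can be stated quantitatively: for small swings the period depends only on the pendulum's length and the local pull of gravity, T = 2π√(L/g). The sketch below uses illustrative lengths (not from the source) and shows why a change in length, such as thermal expansion, disturbs the period:

```python
import math

def pendulum_period(length_m, g=9.81):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# A "seconds pendulum" (full period of 2 s) is roughly a meter long:
print(round(pendulum_period(0.994), 3))  # ~2.0

# A 1-mm thermal expansion lengthens the period slightly, the kind of
# drift that length-compensation mechanisms were designed to cancel:
print(round(pendulum_period(0.995), 3))  # ~2.001
```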

C Watches

Watchworks were developed when coiled springs were introduced as a source of power. This
type of spring was used in Italy about 1450. About 1500 Peter Henlein, a locksmith in
Nürnberg, Germany, began producing portable timepieces known popularly as Nürnberg eggs.
In 1525 another artisan, Jacob Zech of Prague, invented a fusee, or spiral pulley, to equalize
the uneven pull of the spring. Other improvements that increased the accuracy of watches
included a spiral hairspring, invented about 1660 by Robert Hooke, for the balance wheel, and
a lever escapement devised by British inventor Thomas Mudge about 1765.

Minute and second hands, and crystals to protect both the dial and hands, first appeared on
17th-century watches. Jeweled bearings to reduce friction and prolong the life of watchworks
were introduced in the 18th century.

In the centuries that preceded the introduction of machine-made parts, craftsmanship of a
high order was required to manufacture accurate, durable clocks and watches. Such local craft
organizations as the Paris Guild of Clockmakers (1544) were organized to control the art of
clockmaking and its apprenticeship. A guild known as the Clockmakers Company, founded in
London in 1630, is still in existence. The Netherlands, Germany, and Switzerland also
produced many fine artisans whose work was noted for beauty and a high degree of
mechanical perfection.

D Decorative Clocks

The clock was often a decorative as well as a useful instrument. Early clocks were highly
ornamented. Many bore sculptured figures, and clockworks were used in the towers of late
medieval Europe to set in motion huge statues of saints or allegorical figures. Cuckoo clocks,
containing carved wooden birds, which emerge and “sing” to tell the time, were made in the
Black Forest of Germany as early as 1730 and are still popular. Some early English clocks
were made in the form of lanterns or birdcages. The grandfather, or case, clock, which has the
pendulum and weight exposed beneath a gear housing at the top of a tall cabinet, was
designed before machine-cut gears were introduced, and it continues to be a popular
ornamental clock.

Watches were originally shaped like drums or balls and were worn suspended from a belt or
kept in a pocket. Wristwatches became popular as watchworks became smaller. Beginning in
the 18th century, Switzerland became the center of a watchmaking industry, particularly in
the villages of the Jura Mountains. At first a cottage industry, with families manufacturing
watch parts at home to be assembled and sold by a master watchmaker, Swiss watchmaking
by the 1850s had led to the development of a number of small factories and the foundation of
a major industry. Some modern Swiss watchworks are tiny enough to fit into pencil ends.


European clockmakers and watchmakers brought their skills and mechanical ingenuity to
colonial America. In 1650, before the introduction of the pendulum clock, a clock could be
found in a Boston, Massachusetts, church tower. The first public clock in New York City was
built in 1716 for the City Hall at Nassau and Wall streets, and a clock was installed in
Independence Hall in Philadelphia, Pennsylvania, by 1753.

Mass production of clocks with interchangeable parts began in the United States after the
American Revolution (1775-1783). Because of the scarcity of metals, well-seasoned wood was
used for the movements. In the early 1800s, Simon Willard of Roxbury, Massachusetts,
patented the popular banjo clock, and Eli Terry of Connecticut evolved a shelf clock called the
pillar-and-scroll clock, which required winding only once a day. About the same time in
Plymouth Hollow (now Thomaston), Connecticut, Seth Thomas founded the Seth Thomas Clock
Company, which was, in the mid-20th century, one of the largest clock factories in the world.

Watches were not produced in significant volume in the United States until about 1800, when
Thomas Harland of Norwich, Connecticut, established a factory with a capacity of 200 units a
year. In 1836 the Pitkin brothers of East Hartford, Connecticut, produced the first American-
designed watch and the first containing a machine-made part. Despite a reputation for
accuracy and durability, the manufacture of this watch was discontinued as a result of the
depression of 1837, which temporarily crippled American industry.

During this period, however, Chauncey Jerome of Bristol, Connecticut, devised a rolled-brass
clock movement that could be sold at a low price. Such innovations, together with the
economies of mass production, soon made the United States the leading clock-making country
of the world. As production increased, competition reduced the price of a clock to $1 or less,
and for the first time most families could afford a clock.

Watches also became cheaper as production rose. American horologists Aaron Dennison and
Edward Howard, working in Massachusetts, invented and perfected automatic production
machinery in the 1850s. New designs reduced the number of parts required. Watches wound
with keys were replaced after 1875 by stem-wound types. The first Waterbury, a famous
American pocket watch, could be sold for only $4 because it used a stamped-out mechanism
without jewels. Later watches were even less expensive. The Ingersoll and the Ingraham, for
example, became known as the dollar watches.


The electric clock was an American innovation of the early 1900s, invented by Henry E.
Warren, who induced producers of electric power to time the alternating-current cycles
carefully so that synchronous motors could be used for clocks. The invention by W. H. Shortt
in 1921 of the Shortt Free Pendulum, first installed in the Edinburgh Observatory, made
possible the most accurate timekeeper until the introduction of the quartz clock in the United
States in 1929. The first improvement over the quartz clock was the cesium atomic clock,
developed in England in 1955.

Electric wristwatches appeared on the market in 1957, followed in 1959 by an electronic watch
that substituted a small tuning fork for the usual escapement, with a battery to power the
transistorized oscillating circuit. More recent developments have been the LED (light-emitting
diode) and LCD (liquid crystal display) watches. The LED, developed in the 1960s, uses the
light-producing characteristics of certain semiconductors to illuminate its digital time display; a
quartz crystal provides the oscillations that are reduced to compute time. The LCD, produced
in the 1970s, uses liquid crystals, materials having optical properties similar to liquids and
solid crystals.

Scientific advances in metallurgy and other fields have led to many improvements in
timekeeping devices of all types. The mainsprings of present-day mechanical watches are
made from metals that resist breakage and rust, synthetics have replaced precious stones in
jeweled bearings, and cases have been perfected that seal out both dust and moisture. Other
special-purpose watches include the Braille watch for the blind, which has sturdy hands not
covered with a crystal, and raised dots on the dial to mark the hours; the alarm watch for the
pocket or wrist, which functions as a tiny, portable alarm clock; and the calendar watch, which
shows the day of month and the week. New sources of power, such as sunlight, body heat,
and atomic energy, are being investigated in current horological research.

Airplane, engine-driven vehicle that can fly through the air supported by the action of air
against its wings. Airplanes are heavier than air, in contrast to vehicles such as balloons and
airships, which are lighter than air. Airplanes also differ from other heavier-than-air craft, such
as helicopters, because they have rigid wings; control surfaces, such as movable parts of the
wings and tail, which make it possible to guide their flight; and power plants, or special
engines that permit level or climbing flight.

Modern airplanes range from ultralight aircraft weighing no more than 46 kg (100 lb) and
meant to carry a single pilot, to great jumbo jets, capable of carrying several hundred people,
several hundred tons of cargo, and weighing nearly 454 metric tons.

Airplanes are adapted to specialized uses. Today there are land planes (aircraft that take off
from and land on the ground), seaplanes (aircraft that take off from and land on water),
amphibians (aircraft that can operate on both land and sea), and airplanes that can leave the
ground using the jet thrust of their engines or rotors (rotating wings) and then switch to wing-
borne flight.


An airplane flies because its wings create lift, the upward force on the plane, as they interact
with the flow of air around them. The wings alter the direction of the flow of air as it passes.
The exact shape of the surface of a wing is critical to its ability to generate lift. The speed of
the airflow and the angle at which the wing meets the oncoming airstream also contribute to
the amount of lift generated.

An airplane’s wings push down on the air flowing past them, and in reaction, the air pushes up
on the wings. When an airplane is level or rising, the front edges of its wings ride higher than
the rear edges. The angle the wings make with the horizontal is called the angle of attack. As
the wings move through the air, this angle causes them to push air flowing under them
downward. Air flowing over the top of the wing is also deflected downward as it follows the
specially designed shape of the wing. A steeper angle of attack will cause the wings to push
more air downward. The third law of motion formulated by English physicist Isaac Newton
states that every action produces an equal and opposite reaction (see Mechanics: The Third
Law). In this case, the wings pushing air downward is the action, and the air pushing the
wings upward is the reaction. This causes lift, the upward force on the plane.

Lift is also often explained using Bernoulli’s principle, which states that, under certain
circumstances, a faster moving fluid (such as air) will have a lower pressure than a slower
moving fluid. The air on the top of an airplane wing moves faster and is at a lower pressure
than the air underneath the wing, and the lift generated by the wing can be modeled using
equations derived from Bernoulli’s principle.
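Those Bernoulli-derived equations are commonly condensed into the standard lift equation, L = ½ρv²SC_L, where ρ is air density, v is airspeed, S is wing area, and C_L is a lift coefficient measured for the particular wing shape. A sketch with illustrative values (none of the numbers come from the text):

```python
def lift_newtons(air_density, speed, wing_area, lift_coefficient):
    """Standard lift equation: L = 0.5 * rho * v^2 * S * C_L."""
    return 0.5 * air_density * speed**2 * wing_area * lift_coefficient

# Illustrative values: sea-level air, a light airplane at 50 m/s,
# 16 m^2 of wing, and an assumed lift coefficient of 0.5.
print(lift_newtons(1.225, 50.0, 16.0, 0.5))  # 12250.0 N
```

Doubling the airspeed quadruples the lift in this model, which is why the wing's angle of attack and flap settings must change so much between takeoff speed and cruise.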

Lift is one of the four primary forces acting upon an airplane. The others are weight, thrust,
and drag. Weight is the force that offsets lift, because it acts in the opposite direction. The
weight of the airplane must be overcome by the lift produced by the wings. If an airplane
weighs 4.5 metric tons, then the lift produced by its wings must be greater than 4.5 metric
tons in order for the airplane to leave the ground. Designing a wing that is powerful enough to
lift an airplane off the ground, and yet efficient enough to fly at high speeds over extremely
long distances, is one of the marvels of modern aircraft technology.

Thrust is the force that propels an airplane forward through the air. Thrust is provided by the
airplane’s propulsion system: either a propeller, a jet engine, or a combination of the two.

A fourth force acting on all airplanes is drag. Drag is created because any object moving
through a fluid, such as an airplane through air, produces friction as it interacts with that fluid
and because it must move the fluid out of its way to do its work. A high-lift wing surface, for
example, may create a great deal of lift for an airplane, but because of its large size, it is also
creating a significant amount of drag. That is why high-speed fighters and missiles have such
thin wings—they need to minimize drag created by lift. Conversely, a crop duster, which flies
at relatively slow speeds, may have a big, thick wing because high lift is more important than
the amount of drag associated with it. Drag is also minimized by designing sleek, aerodynamic
airplanes, with shapes that slip easily through the air.

Managing the balance between these four forces is the challenge of flight. When thrust is
greater than drag, an airplane will accelerate. When lift is greater than weight, it will climb.
Using various control surfaces and propulsion systems, a pilot can manipulate the balance of
the four forces to change the direction or speed. A pilot can reduce thrust in order to slow
down or descend. The pilot can lower the landing gear into the airstream and deploy the
landing flaps on the wings to increase drag, which has the same effect as reducing thrust. The
pilot can add thrust either to speed up or climb. Or, by retracting the landing gear and flaps,
and thereby reducing drag, the pilot can accelerate or climb.
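The balance described above reduces to two comparisons, sketched here as a hypothetical helper (the force values are illustrative, not from the source):

```python
def flight_tendency(lift, weight, thrust, drag):
    """Qualitative result of the four-force balance (all forces in newtons)."""
    vertical = "climb" if lift > weight else "descend" if lift < weight else "hold altitude"
    horizontal = "accelerate" if thrust > drag else "decelerate" if thrust < drag else "hold speed"
    return vertical, horizontal

# Gear and flaps deployed: drag now exceeds thrust, lift slightly below weight.
print(flight_tendency(lift=98_000, weight=100_000, thrust=40_000, drag=55_000))
# prints ('descend', 'decelerate')
```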


In addition to balancing lift, weight, thrust, and drag, modern airplanes have to contend with
another phenomenon. The sound barrier is not a physical barrier but a speed at which the
behavior of the airflow around an airplane changes dramatically. Fighter pilots in World War II
(1939-1945) first ran up against this so-called barrier in high-speed dives during air combat.
In some cases, pilots lost control of the aircraft as shock waves built up on control surfaces,
effectively locking the controls and leaving the crews helpless. After World War II, designers
tackled the realm of supersonic flight, primarily for military airplanes, but with commercial
applications as well.

Supersonic flight is defined as flight at a speed greater than that of the local speed of sound.
At sea level, sound travels through air at approximately 1,220 km/h (760 mph). At the speed
of sound, a shock wave consisting of highly compressed air forms at the nose of the plane.
This shock wave moves back at a sharp angle as the speed increases.

Supersonic flight was first achieved in 1947 by the Bell X-1 rocket plane, flown by
Air Force test pilot Chuck Yeager. Speeds at or near supersonic flight are measured in units
called Mach numbers, which represent the ratio of the speed of the airplane to the local speed
of sound. An airplane traveling at less than Mach 1 is traveling below the speed of
sound (subsonic); at Mach 1, an airplane is traveling at the speed of sound (transonic); at
Mach 2, an airplane is traveling at twice the speed of sound (supersonic flight). Speeds of
Mach 1 to 5 are referred to as supersonic; speeds of Mach 5 and above are called hypersonic.

Designers in Europe and the United States developed succeeding generations of military
aircraft, culminating in the 1960s and 1970s with Mach 3+ speedsters such as the Soviet MiG-
25 Foxbat interceptor, the XB-70 Valkyrie bomber, and the SR-71 spy plane.
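The Mach bands above translate directly into a small classifier. A sketch assuming the sea-level speed of sound quoted earlier (the exactly-Mach-1 transonic case is folded into the supersonic branch for brevity):

```python
def mach_regime(speed_kmh, speed_of_sound_kmh=1220.0):
    """Mach number = airplane speed / local speed of sound."""
    mach = speed_kmh / speed_of_sound_kmh
    if mach < 1:
        regime = "subsonic"
    elif mach < 5:
        regime = "supersonic"  # Mach 1 up to (but not including) Mach 5
    else:
        regime = "hypersonic"  # Mach 5 and above
    return round(mach, 2), regime

print(mach_regime(2440.0))  # prints (2.0, 'supersonic')
```

Note that the speed of sound falls with altitude, so the same ground speed corresponds to a higher Mach number high in the atmosphere than at sea level.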

The shock wave created by an airplane moving at supersonic and hypersonic speeds
represents a rather abrupt change in air pressure and is perceived on the ground as a sonic
boom, the exact nature of which varies depending upon how far away the aircraft is and the
distance of the observer from the flight path. Sonic booms at low altitudes over populated
areas are generally considered a significant problem and have prevented most supersonic
airplanes from efficiently utilizing overland routes. For example, the Anglo-French Concorde, a
commercial supersonic aircraft, was generally limited to over-water routes, or to those over
sparsely populated regions of the world. This limitation impacted the commercial viability of
the Concorde, which ended its regular passenger service in October 2003. Designers today
believe they can help lessen the impact of sonic booms created by supersonic airliners but
probably cannot eliminate them.

One of the most difficult practical barriers to supersonic flight is the fact that high-speed flight
produces heat through friction. At such high speeds, enormous temperatures are reached at
the surface of the craft. For example, the Concorde was forced to fly a flight profile dictated by
temperature requirements; if the aircraft moved too fast, then the temperature rose above
safe limits for the aluminum structure of the airplane. Titanium and other relatively exotic, and
expensive, metals are more heat-resistant, but harder to manufacture and maintain. Airplane
designers have concluded that a speed of Mach 2.7 is about the limit for conventional,
relatively inexpensive materials and fuels. Above that speed, an airplane would need to be
constructed of more temperature-resistant materials, and would most likely have to find a way
to cool its fuel.
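The temperature limit can be estimated from the ideal-gas stagnation-temperature relation, T₀ = T(1 + (γ−1)/2 · M²), which gives the air temperature at the skin where the oncoming flow is brought to rest. A sketch with assumed stratospheric ambient conditions (the values are illustrative):

```python
def stagnation_temperature_k(ambient_k, mach, gamma=1.4):
    """Ideal-gas stagnation temperature: T0 = T * (1 + (gamma - 1) / 2 * M**2)."""
    return ambient_k * (1 + (gamma - 1) / 2 * mach**2)

ambient = 216.65  # K, assumed ambient temperature in the lower stratosphere
for m in (2.0, 2.7, 3.0):
    celsius = stagnation_temperature_k(ambient, m) - 273.15
    print(f"Mach {m}: about {celsius:.0f} deg C at the skin")
```

Around Mach 2.7 the skin temperature in this model passes roughly 260°C, which is consistent with the text's point that conventional aluminum structures run out of margin near that speed.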

Airplanes generally share the same basic configuration—each usually has a fuselage, wings,
tail, landing gear, and a set of specialized control surfaces mounted on the wings and tail.

A Fuselage

The fuselage is the main cabin, or body of the airplane. Generally the fuselage has a cockpit
section at the front end, where the pilot controls the airplane, and a cabin section. The cabin
section may be designed to carry passengers, cargo, or both. In a military fighter plane, the
fuselage may house the engines, fuel, electronics, and some weapons. In some of the sleekest
of gliders and ultralight airplanes, the fuselage may be nothing more than a minimal structure
connecting the wings, tail, cockpit, and engines.

B Wings

All airplanes, by definition, have wings. Some are nearly all wing with a very small cockpit.
Others have minimal wings, or wings that seem to be merely extensions of a blended,
aerodynamic fuselage, such as the space shuttle.

Before the 20th century, wings were made of wooden ribs and spars (or beams), covered with
fabric that was sewn tightly and varnished to be extremely stiff. A conventional wing has one
or more spars that run from one end of the wing to the other. Perpendicular to the spar are a
series of ribs, which run from the front, or leading edge, to the rear, or trailing edge, of the
wing. These are carefully constructed to shape the wing in a manner that determines its lifting
properties. Wood and fabric wings often used spruce for the structure, because of that
material’s relatively light weight and high strength, and linen for the cloth covering.

Early airplanes were usually biplanes—craft with two wings on each side of the fuselage,
usually one mounted about 1.5 to 1.8 m (about 5 to 6 ft) above the other. Aircraft pioneers found
they could build such wings relatively easily and brace them together using wires to connect
the upper and lower wing to create a strong structure with substantial lift. In pushing the
many cables, wood, and fabric through the air, these designs created a great deal of drag, so
aircraft engineers eventually pursued the monoplane, or single-wing airplane. A monoplane’s
single wing gives it great advantages in speed, simplicity, and visibility for the pilot.

After World War I (1914-1918), designers began moving toward wings made of steel and
aluminum, and, combined with new construction techniques, these materials enabled the
development of modern all-metal wings capable not only of developing lift but of housing
landing gear, weapons, and fuel.

Over the years, many airplane designers have postulated that the ideal airplane would, in fact,
be nothing but wing. Flying wings, as they are called, were first developed in the 1930s and
1940s. American aerospace manufacturer Northrop Grumman Corporation’s flying wing, the B-
2 bomber, or stealth bomber, developed in the 1980s, has been a great success as a flying
machine, benefiting from modern computer-aided design (CAD), advanced materials, and
computerized flight controls. Popular magazines routinely show artists’ concepts of flying-wing
airliners, but airline and airport managers have been unable to integrate these unusual shapes
into conventional airline and airport facilities.

C Tail Assembly

Most airplanes, except for flying wings, have a tail assembly attached to the rear of the
fuselage, consisting of vertical and horizontal stabilizers, which look like small wings; a
rudder; and elevators. The components of the tail assembly are collectively referred to as the
empennage.

The stabilizers serve to help keep the airplane stable while in flight. The rudder is at the
trailing edge of the vertical stabilizer and is used by the airplane to help control turns. An
airplane actually turns by banking, or moving, its wings laterally, but the rudder helps keep
the turn coordinated by serving much like a boat’s rudder to move the nose of the airplane left
or right. Moving an airplane’s nose left or right is known as a yaw motion. Rudder motion is
usually controlled by two pedals on the floor of the cockpit, which are pushed by the pilot.

Elevators are control surfaces at the trailing edge of horizontal stabilizers. The elevators
control the up-and-down motion, or pitch, of the airplane’s nose. Moving the elevators up into
the airstream will cause the tail to go down and the nose to pitch up. A pilot controls pitch by
moving a control column or stick.

D Landing Gear

All airplanes must have some type of landing gear. Modern aircraft employ brakes, wheels,
and tires designed specifically for the demands of flight. Tires must be capable of going from a
standstill to nearly 322 km/h (200 mph) at landing, as well as carrying nearly 454 metric tons.
Brakes, often incorporating special heat-resistant materials, must be able to handle
emergencies, such as a 400-metric-ton airliner aborting a takeoff at the last possible moment.
Antiskid braking systems, common on automobiles today, were originally developed for
aircraft and are used to gain maximum possible braking power on wet or icy runways.

Larger and more complex aircraft typically have retractable landing gear—so called because
they can be pulled up into the wing or fuselage after takeoff. Having retractable gear greatly
reduces the drag generated by the wheel structures that would otherwise hang out in the
airstream.

E Control Components

An airplane is capable of three types of motion that revolve around three separate axes. The
plane may fly steadily in one direction and at one altitude—or it may turn, climb, or descend.
An airplane may roll, banking its wings either left or right, about the longitudinal axis, which
runs the length of the craft. The airplane may yaw its nose either left or right about the
vertical axis, which runs straight down through the middle of the airplane. Finally, a plane may
pitch its nose up or down, moving about its lateral axis, which may be thought of as a straight
line running from wingtip to wingtip.

An airplane relies on the movement of air across its wings for lift, and it makes use of this
same airflow to move in any way about the three axes. To do so, the pilot will manipulate
controls in the cockpit that direct control surfaces on the wings and tail to move into the
airstream. The airplane will yaw, pitch, or roll, depending on which control surfaces or
combination of surfaces are moved, or deflected, by the pilot.

In order to bank and begin a turn, a conventional airplane will deflect control surfaces on the
trailing edge of the wings known as ailerons. In order to bank left, the left aileron is lifted up
into the airstream over the left wing, creating a small amount of drag and decreasing the lift
produced by that wing. At the same time, the right aileron is pushed down into the airstream,
thereby increasing slightly the lift produced by the right wing. The right wing then comes up,
the left wing goes down, and the airplane banks to the left. To bank to the right, the ailerons
are moved in exactly the opposite fashion.

In order to yaw, or turn the airplane’s nose left or right, the pilot must press upon rudder
pedals on the floor of the cockpit. Push down on the left pedal, and the rudder at the trailing
edge of the vertical stabilizer moves to the left. As in a boat, the left rudder moves the nose of
the plane to the left. A push on the right pedal causes the airplane to yaw to the right.

In order to pitch the nose up or down, the pilot usually pulls or pushes on a control wheel or
stick, thereby moving the elevators at the trailing edge of the horizontal stabilizer. Pulling
back on the wheel deflects the elevators upward into the airstream, pushing the tail down and
the nose up. Pushing forward on the wheel causes the elevators to drop down, lifting the tail
and forcing the nose down.

Airplanes that are more complex also have a set of secondary control surfaces that may
include devices such as flaps, slats, trim tabs, spoilers, and speed brakes. Flaps and slats are
generally used during takeoff and landing to increase the amount of lift produced by the wing
at low speeds. Flaps usually droop down from the trailing edge of the wing, although some
jets have leading-edge flaps as well. On some airplanes, they also can be extended back
beyond the normal trailing edge of the wing to increase the surface area of the wing as well as
change its shape. Leading-edge slats usually extend from the front of the wing at low speeds
to change the way the air flows over the wing, thereby increasing lift. Flaps also often serve to
increase drag and slow the approach of a landing airplane.

Trim tabs are miniature control surfaces incorporated into larger control surfaces. For
example, an aileron tab acts like a miniature aileron within the larger aileron. These kinds of
controls are used to adjust more precisely the flight path of an airplane that may be slightly
out of balance or alignment. Elevator trim tabs are usually used to help set the pitch attitude
(the angle of the airplane in relation to the Earth) for a given speed through the air. On some
airplanes, the entire horizontal stabilizer moves in small increments to serve the same
function as a trim tab.

F Instruments

Airplane pilots rely on a set of instruments in the cockpit to monitor airplane systems, to
control the flight of the aircraft, and to navigate.

Systems instruments will tell a pilot about the condition of the airplane’s engines and
electrical, hydraulic, and fuel systems. Piston-engine instruments monitor engine and exhaust-
gas temperatures, and oil pressures and temperatures. Jet-engine instruments measure the
rotational speeds of the rotating blades in the turbines, as well as gas temperatures and fuel
flow.

Flight instruments are those used to tell a pilot the course, speed, altitude, and attitude of the
airplane. They may include an airspeed indicator, an artificial horizon, an altimeter, and a
compass. These instruments have many variations, depending on the complexity and
performance of the airplane. For example, high-speed jet aircraft have airspeed indicators that
may show speeds both in knots (nautical miles per hour, slightly faster than the statute miles
per hour used with ground vehicles) and in Mach number. The artificial horizon indicates whether the
airplane is banking, climbing, or diving, in relation to the Earth. An airplane with its nose up
may or may not be climbing, depending on its airspeed and momentum.

General-aviation (private aircraft), military, and commercial airplanes also have instruments
that aid in navigation. The compass is the simplest of these, but many airplanes now employ
satellite navigation systems and computers to navigate from any point on the globe to another
without any help from the ground. The Global Positioning System (GPS), developed for the
United States military but now used by many civilian pilots, provides an airplane with its
position to within a few meters. Many airplanes still employ radio receivers that tune to a
ground-based radio-beacon system in order to navigate cross-country. Specially equipped
airplanes can use ultraprecise radio beacons and receivers, known as Instrument Landing
Systems (ILS) and Microwave Landing Systems (MLS), combined with special cockpit displays,
to land during conditions of poor visibility.


Airplanes use either piston or turbine (rotating blades) engines to provide propulsion. In
smaller airplanes, a conventional gas-powered piston engine turns a propeller, which either
pulls or pushes an airplane through the air. In larger airplanes, a turbine engine either turns a
propeller through a gearbox, or uses its jet thrust directly to move an airplane through the air.
In either case, the engine must provide enough power to move the weight of the airplane
forward through the airstream.

The earliest powered airplanes relied on crude steam or gas engines. These piston engines are
examples of internal-combustion engines. Aircraft designers throughout the 20th century
pushed their engineering colleagues constantly for engines with more power, lighter weight,
and greater reliability. Piston engines, however, are still relatively complicated pieces of
machinery, with many precision-machined parts moving through large ranges and in complex
motions. Although enormously improved over the past 90 years of flight and still suitable for
many smaller general aviation aircraft, they fall short of the higher performance possible with
modern jet propulsion and required for commercial and military aviation.

The turbine or jet engine operates on the principle of Newton’s third law of motion, which
states that for every action, there is an opposite but equal reaction. A jet sucks air into the
front, squeezes the air by pulling it through a series of spinning compressors, mixes it with
fuel and ignites the mixture, which then expands and rushes rearward with great force through
the exhaust nozzle. The rearward force is balanced by an equal force that pushes forward the jet
engine and the airplane attached to it. A rocket engine operates on the same principle, except
that, in order to operate in the airless vacuum of space, the rocket must carry along its own
oxygen, in the form of solid propellant or liquid oxidizer, for combustion.
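The reaction principle described above can be sketched numerically. In its simplest idealized form, net momentum thrust is the mass flow of air through the engine times how much the engine speeds that air up; the figures below are illustrative assumptions, not data for any real engine.

```python
# Idealized momentum-thrust estimate (illustrative numbers only).
def net_thrust(mass_flow_kg_s, exhaust_v_m_s, flight_v_m_s=0.0):
    """Newton's third law in numbers: accelerating air rearward
    produces an equal and opposite forward force on the engine."""
    return mass_flow_kg_s * (exhaust_v_m_s - flight_v_m_s)

# An engine ingesting 100 kg of air per second at a flight speed of
# 250 m/s and expelling it at 600 m/s produces:
print(net_thrust(100.0, 600.0, 250.0))  # 35000.0 newtons (about 35 kN)
```

The same relation explains why engines need large mass flow or high exhaust velocity (or both) to generate useful thrust.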

There are several different types of jet engines. The simplest is the ramjet, which takes
advantage of high speed to ram or force the air into the engine, eliminating the need for the
spinning compressor section. This elegant simplicity is offset by the need to boost a ramjet to
several hundred miles an hour before ram-air compression is sufficient to operate the engine.

The turbojet is based on the jet-propulsion system of the ramjet, but with the addition of a
compressor section, a combustion chamber, a turbine to take some power out of the exhaust
and spin the compressor, and an exhaust nozzle. In a turbojet, all of the air taken into the
compressor at the front of the engine is sent through the core of the engine, burned, and
released. Thrust from the engine is derived purely from the acceleration of the released
exhaust gases out the rear.

A modern derivative known as the turbofan, or fan-jet, adds a large fan in front of the
compressor section. This fan pulls an enormous amount of air into the engine case, only a
relatively small fraction of which is sent through the core for combustion. The rest runs along
the outside of the core case and inside the engine casing. This fan flow is mixed with the hot
jet exhaust at the rear of the engine, where it cools and quiets the exhaust noise. In addition,
this high-volume mass of air, accelerated rearward by the fan, acts much like a propeller and
produces a great deal of thrust by itself, even though it is never burned.
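A rough sketch of why the fan stream matters, using the same momentum relation with purely hypothetical flow numbers: the large, gently accelerated fan flow can contribute as much thrust as the small, hot, fast core flow.

```python
# Hypothetical turbofan streams; each obeys thrust = mass flow x velocity change.
def stream_thrust(mass_flow_kg_s, exit_v_m_s, flight_v_m_s):
    return mass_flow_kg_s * (exit_v_m_s - flight_v_m_s)

flight_v = 250.0  # assumed cruise speed, m/s

core_thrust = stream_thrust(20.0, 900.0, flight_v)   # small, hot, fast core jet
fan_thrust = stream_thrust(120.0, 350.0, flight_v)   # large, cool, slow fan flow

print(core_thrust, fan_thrust)  # 13000.0 12000.0 -- the fan rivals the core
print(120.0 / 20.0)             # 6.0 -- the assumed bypass ratio
```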

In fact, some smaller jet engines are used to turn propellers. Known as turboprops, these
engines produce most of their thrust through the propeller, which is usually driven by the jet
engine through a set of gears. As a power source for a propeller, a turbine engine is extremely
efficient, and many smaller airliners in the 19- to 70-passenger-capacity range use
turboprops. They are particularly efficient at lower altitudes and medium speeds up to 640
km/h (400 mph).


Airplanes come in a wide variety of types. Land planes, carrier-based airplanes, seaplanes,
amphibians, vertical takeoff and landing (VTOL), short takeoff and landing (STOL), and space
shuttles all take advantage of the same basic technology, but their capabilities and uses make
them seem only distantly related.

A Land Planes

Land planes are designed to operate from a hard surface, typically a paved runway. Some land
planes are specially equipped to operate from grass or other unfinished surfaces. A land plane
usually has wheels to taxi, take off, and land, although some specialized aircraft operating in
the Arctic or Antarctic regions have skis in place of wheels. The wheels are sometimes referred
to as the undercarriage, although they are often called, together with the associated brakes,
the landing gear. Landing gear may be fixed, as in some general-aviation airplanes, or
retractable, usually into the fuselage or wings, as in more-sophisticated airplanes in general
and commercial aviation.

B Carrier-Based Aircraft
Carrier-based airplanes are a specially modified type of land plane designed for takeoff from
and landing aboard naval aircraft carriers. Carrier airplanes have a strengthened structure,
including their landing gear, to handle the stresses of catapult-assisted takeoff, in which the
craft is launched by a steam-driven catapult; and arrested landings, made by using a hook
attached to the underside of the aircraft’s tail to catch one of four wires strung across the
flight deck of the carrier.

C Seaplanes
Seaplanes, sometimes called floatplanes or pontoon planes, are often ordinary land planes
modified with floats instead of wheels so they can operate from water. A number of seaplanes
have been designed from scratch to operate only from water bases. Such seaplanes have
fuselages that resemble and perform like ship hulls. Known as flying boats, they may have
small floats attached to their outer wing panels to help steady them at low speeds on the
water, but the weight of the airplane is borne by the floating hull.

D Amphibians

Amphibians, like their animal namesakes, operate from both water and land bases. In many
cases, an amphibian is a true seaplane, with a boat hull and the addition of specially designed
landing gear that can be extended to allow the airplane to taxi right out of the water onto
land. Historically, some flying boats were fitted with so-called beaching gear, a system of
cradles on wheels positioned under the floating aircraft, which then allowed the aircraft to be
rolled onto land.

E Vertical Takeoff and Landing Airplanes

Vertical Takeoff and Landing (VTOL) airplanes typically use the jet thrust from their engines,
pointed down at the Earth, to take off and land straight up and down. After taking off, a VTOL
airplane usually transitions to wing-borne flight in order to cover a longer distance or carry a
significant load. A helicopter is a type of VTOL aircraft, but there are very few VTOL airplanes.
One unique type of VTOL aircraft is the tilt-rotor, which has large, propeller-like rotating wings
or rotors driven by jet engines at the wingtips. For takeoff and landing, the engines and rotors
are positioned vertically, much like a helicopter. After takeoff, however, the engine/rotor
combination tilts forward, and the wing takes on the load of the craft.

The most prominent example of a true VTOL airplane flying today is the AV-8B Harrier II, a
military attack plane that uses rotating nozzles attached to its jet engine to direct the engine
exhaust in the appropriate direction. Flown in the United States by the Marine Corps, as well
as in Spain, Italy, India, and the United Kingdom, where it was originally developed, the Harrier
can take off vertically from smaller ships, or it can be flown to operating areas near the
ground troops it supports in its ground-attack role.

F Short Takeoff and Landing Airplanes

Short Takeoff and Landing (STOL) airplanes are designed to be able to function on relatively
short runways. Their designs usually employ wings and high-lift devices on the wings
optimized for best performance during takeoff and landing, as distinguished from an airplane
that has a wing optimized for high-speed cruise at high altitude. STOL airplanes are usually
cargo airplanes, although some serve in a passenger-carrying capacity as well.

G Space Shuttle

The space shuttle, flown by the National Aeronautics and Space Administration (NASA), is an
aircraft unlike any other because it flies as a fixed-wing airplane within the atmosphere and as
a spacecraft outside Earth’s atmosphere. When the space shuttle takes off, it flies like a rocket
with wings, relying on the 3,175 metric tons of thrust generated by its solid-fuel rocket
boosters and liquid-fueled main engines to power its way up, through, and out of the
atmosphere. During landing, the shuttle becomes the world’s most sophisticated glider,
landing without propulsion.


Airplanes can be grouped into a handful of major classes, such as commercial, military, and
general-aviation airplanes, all of which fall under different government-mandated certification
and operating rules.

A Commercial Airplanes

Commercial aircraft are those used for profit making, usually by carrying cargo or passengers
for hire (see Air Transport Industry). They are strictly regulated—in the United States, by the
Federal Aviation Administration (FAA); in Canada, by Transport Canada; and in other
countries, by other national aviation authorities.

Modern large commercial-airplane manufacturers—such as The Boeing Company in the United
States and Airbus in Europe—offer a wide variety of aircraft with different capabilities. Today’s
jet airliners carry anywhere from 100 passengers to more than 500 over both short and long
ranges.

Beginning in 1976 the British-French Concorde supersonic transport (SST) carried passengers
at twice the speed of sound. The Concorde flew for British Airways and Air France, flag carriers
of the two nations that funded its development during the late 1960s and 1970s. The United
States had an SST program, but it was ended because of budget and environmental concerns
in 1971. The Concorde ended its regular passenger service in October 2003 due to its lack of
profitability. Declining ticket sales for the high-priced service, with round-trip fares starting at
about $9,000, combined with rising operating costs to seal the Concorde’s fate. A fatal air
crash in 2000 grounded the Concorde for a full year. It returned to service only to witness a
sharp decline in airline travel following the September 11 terrorist attacks.

B Military Airplanes

Military aircraft are usually grouped into four categories: combat, cargo, training, and
observation (see Military Aviation). Combat airplanes are generally either fighters or bombers,
although some airplanes have both capabilities. Fighters are designed to engage in air combat
with other airplanes, in either defensive or offensive situations. Since the 1950s many fighters
have been capable of Mach 2+ flight (a Mach number is the ratio of an airplane’s speed to the
speed of sound in the surrounding air). Some fighters have a ground-attack
role as well and are designed to carry both air-to-air weapons, such as missiles, and air-to-
ground weapons, such as bombs. Fighters include aircraft such as the Panavia Tornado, the
Boeing F-15 Eagle, the Lockheed-Martin F-16 Falcon, the MiG-29 Fulcrum, and the Su-27 Flanker.

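The Mach-number ratio is straightforward to compute. The 340 m/s figure below is the approximate speed of sound in sea-level standard air; the actual value falls with the colder temperatures at altitude.

```python
# Mach number: airplane speed divided by the local speed of sound.
def mach_number(speed_m_s, speed_of_sound_m_s=340.0):
    return speed_m_s / speed_of_sound_m_s

print(mach_number(680.0))  # 2.0 -- "Mach 2", typical of post-1950s fighters
```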
Bombers are designed to carry large air-to-ground-weapons loads and either penetrate or
avoid enemy air defenses in order to deliver those weapons. Some well-known bombers
include the Boeing B-52, the Boeing B-1, and the Northrop-Grumman B-2 stealth bomber.
Bombers such as the B-1 are designed to fly fast at low altitudes, following the terrain, in
order to fly under enemy radar defenses, while others, such as the B-2, may use sophisticated
radar-defeating technologies to fly virtually unobserved.

Today’s military cargo airplanes are capable of carrying enormous tanks, armored personnel
carriers, artillery pieces, and even smaller aircraft. Cargo planes such as the giant Lockheed C-
5B and Boeing C-17 were designed expressly for such roles. Some cargo planes can serve a
dual role as aerial gas stations, refueling different types of military airplanes while in flight.
Such tankers include the Boeing KC-135 and KC-10.

All military pilots go through rigorous training and education programs using military training
airplanes to prepare them to fly the high-performance aircraft of the armed forces. They
typically begin flight training in relatively simple propeller airplanes and move into basic
jets before specializing in a career path involving fighters, bombers, or transports. Some
military trainers include the T-34 Mentor, the T-37 and T-38, and the Boeing T-45 Goshawk.

A final category of military airplane is the observation, or reconnaissance, aircraft. With the
advent of the Lockheed U-2 spy plane in the 1950s, observation airplanes were developed
solely for highly specialized missions. Lockheed’s SR-71, a two-seat airplane, uses specialized
engines and fuel to reach altitudes greater than 25,000 m (80,000 ft) and speeds well over
Mach 3.

Unmanned aerial vehicles (UAVs) also were developed for reconnaissance in situations
considered too dangerous for piloted aircraft or in instances where pilot fatigue would be a
factor. UAVs include the Predator drone, made by General Atomics Aeronautical Systems, Inc.,
based in San Diego, California. These unpiloted aircraft are flown by software programs
containing navigational instructions and operated from the ground. They relay video and
infrared images in real time to military commanders, providing instantaneous views of
battlegrounds during the day or at night. Some UAVs, known as Unmanned Combat Aerial
Vehicles (UCAVs), also carry weapons that can be fired by ground operators using the
aircraft’s video and infrared cameras to locate their targets.

C General-Aviation Aircraft

General-aviation aircraft are certified for and intended primarily for noncommercial or private
use.

Pleasure aircraft range from simple single-seat ultralight airplanes to sleek twin turboprops
capable of carrying eight people. Business aircraft transport executives to appointments;
compared with pleasure aircraft, most require greater reliability, longer range, and
all-weather capability.

Another class of general-aviation airplanes is used in agriculture. Large farms require efficient
ways to spread fertilizer and insecticides over a large area. A very specialized type of airplane,
crop dusters are rugged, highly maneuverable, and capable of hauling several hundred pounds
of chemicals. They can be seen swooping low over farm fields. Not intended for serious cross-
country navigation, crop dusters lack sophisticated navigation aids and complex systems.

Before the end of the 18th century, few people had applied themselves to the study of flight.
One who did was the Italian Renaissance artist Leonardo da Vinci, in the 15th century. Leonardo
was preoccupied chiefly with bird flight and with flapping-wing machines, called ornithopters.
His aeronautical work lay unknown until late in the 19th century, when it could furnish little of
technical value to experimenters but was a source of inspiration to aspiring engineers. Apart
from Leonardo’s efforts, three devices important to aviation had been invented in Europe in
the Middle Ages and had reached a high stage of development by Leonardo’s time—the
windmill, an early propeller; the kite, an early airplane wing; and the model helicopter.

A The First Airplanes

Between 1799 and 1809 English baronet Sir George Cayley created the concept of the modern
airplane. Cayley abandoned the ornithopter tradition, in which both lift and thrust are provided
by the wings, and designed airplanes with rigid wings to provide lift, and with separate
propelling devices to provide thrust. Through his published works, Cayley laid the foundations
of aerodynamics. He demonstrated, both with models and with full-size gliders, the use of the
inclined plane to provide lift, pitch, and roll stability; flight control by means of a single
rudder-elevator unit mounted on a universal joint; streamlining; and other devices and
practices. In 1853, in his third full-size machine, Cayley sent his unwilling coachman on the
first gliding flight in history.

In 1843 British inventor William Samuel Henson published his patented design for an Aerial
Steam Carriage. Henson’s design did more than any other to establish the form of the modern
airplane—a fixed-wing monoplane with propellers, fuselage, and wheeled landing gear, and
with flight control by means of rear elevator and rudder. Steam-powered models made by
Henson in 1847 were promising but unsuccessful.

In 1890 French engineer Clément Ader built a steam-powered airplane and made the first
actual flight of a piloted, heavier-than-air craft. However, the flight was not sustained, and the
airplane brushed the ground over a distance of 50 m (160 ft). Inventors continued to pursue
the dream of sustained flight. Between 1891 and 1896 German aeronautical engineer Otto
Lilienthal made thousands of successful flights in hang gliders of his own design. Lilienthal
hung in a frame between the wings and controlled his gliders entirely by swinging his torso
and legs in the direction he wished to go. While successful as gliders, his designs lacked a
control system and a reliable method for powering the craft. He was killed in a gliding accident
in 1896.

American inventor Samuel Pierpont Langley had been working for several years on flying
machines. Langley began experimenting in 1892 with a steam-powered, unpiloted aircraft, and
in 1896 made the first sustained flight of any mechanically propelled heavier-than-air craft.
Launched by catapult from a houseboat on the Potomac River near Quantico, Virginia, the
unpiloted Aerodrome, as Langley called it, suffered from design faults. The Aerodrome never
successfully carried a person, and its failure cost Langley the place in history that the
Wright brothers would claim.

B The First Airplane Flight

American aviators Orville Wright and Wilbur Wright of Dayton, Ohio, are considered the
fathers of the first successful piloted heavier-than-air flying machine. Through the disciplines
of sound scientific research and engineering, the Wright brothers put together the combination
of critical characteristics that other designs of the day lacked—a relatively lightweight (337
kg/750 lb), powerful engine; a reliable transmission and efficient propellers; an effective
system for controlling the aircraft; and a wing and structure that were both strong and
lightweight.

At Kitty Hawk, North Carolina, on December 17, 1903, Orville Wright made the first successful
flight of a piloted, heavier-than-air, self-propelled craft, called the Flyer. That first flight
traveled a distance of about 37 m (120 ft). The distance was less than the wingspan of many
modern airliners, but it represented the beginning of a new age in technology and human
achievement. Their fourth and final flight of the day lasted 59 seconds and covered 260
m (852 ft). The third Flyer, which the Wrights constructed in 1905, was the world’s first fully
practical airplane. It could bank, turn, circle, make figure eights, and remain in the air for as
long as the fuel lasted, up to half an hour on occasion.

C Early Military and Public Interest

The airplane, like many other milestone inventions throughout history, was not immediately
recognized for its potential. During the very early 1900s, prior to World War I (1914-1918),
the airplane was relegated mostly to the county-fair circuit, where daredevil pilots drew large
crowds but few investors. One exception was the United States War Department, which had
long been using balloons to observe the battlefield and expressed an interest in heavier-than-
air craft as early as 1898. In 1908 the Wrights demonstrated their airplane to the U.S. Army’s
Signal Corps at Fort Myer, Virginia. In September of that year, while circling the field at Fort
Myer, Orville crashed while carrying an army observer, Lieutenant Thomas Selfridge. Selfridge
died from his injuries and became the first fatality from the crash of a powered airplane.

On July 25, 1909, French engineer Louis Blériot crossed the English Channel in a Blériot XI, a
monoplane of his own design. Blériot’s channel crossing made clear to the world the airplane’s
wartime potential, and this potential was further demonstrated in 1910 and 1911, when
American pilot Eugene Ely took off from and landed on warships. In 1911 the U.S. Army used
a Wright brothers’ biplane to make the first live bomb test from an airplane. That same year,
the airplane was used in its first wartime operation when an Italian captain flew over and
observed Turkish positions during the Italo-Turkish War of 1911 to 1912. Also in 1911,
American inventor and aviator Glenn Curtiss introduced the first practical seaplane. This was a
biplane with a large float beneath the center of the lower wing and two smaller floats beneath
the tips of the lower wing.

The year 1913 became known as the “glorious year of flying.” Aerobatics, or acrobatic flying,
was introduced, and upside-down flying, loops, and other stunts proved the maneuverability of
airplanes. Long-distance flights made in 1913 included a 4,000-km (2,500-mi) flight from
France to Egypt, with many stops, and the first nonstop flight across the Mediterranean Sea,
from France to Tunisia. In Britain, a modified Farnborough B.E. 2 proved itself to be the first
naturally stable airplane in the world. The B.E. 2c version of this airplane was so successful
that nearly 2,000 were subsequently built.

D Planes of World War I

During World War I, the development of the airplane accelerated dramatically. European
designers such as Louis Blériot and Dutch-American engineer Anthony Herman Fokker
exploited basic concepts created by the Wrights and developed ever faster, more capable, and
deadlier combat airplanes. Fokker’s biplanes, such as the D-VII and D-VIII flown by German
pilots, were considered superior to their Allied competition. In 1915 Fokker mounted a
machine gun with a timing gear so that the gun could fire between the blades of the spinning propeller. The
resulting Fokker Eindecker monoplane fighter was, for a time, the most successful fighter in
the skies.

The concentrated research and development made necessary by wartime pressures produced
great progress in airplane design and construction. During World War I, outstanding early
British fighters included the Sopwith Pup (1916) and the Sopwith Camel (1917), which flew as
high as 5,800 m (19,000 ft) and had a top speed of 190 km/h (120 mph). Notable French
fighters included the Spad (1916) and the Nieuport 28 (1918). By the end of World War I in
1918, both warring sides had fighters that could fly at altitudes of 7,600 m (25,000 ft) and
speeds up to 250 km/h (155 mph).

E Development of Commercial Aviation

Commercial aviation began in January 1914, just 10 years after the Wrights pioneered the
skies. The first regularly scheduled passenger line in the world operated between Saint
Petersburg, Florida, and nearby Tampa. Commercial aviation developed slowly during the next 30
years, driven by the two world wars and service demands of the U.S. Post Office for airmail.

In the early 1920s the air-cooled engine was perfected, along with its streamlined cowling, or
engine casing. Light and powerful, these engines gave strong competition to the older, liquid-
cooled engines. In the mid-1920s light airplanes were produced in great numbers, and club
and private pleasure flying became popular. The inexpensive DeHavilland Moth biplane,
introduced in 1925, put flying within the financial reach of many enthusiasts. The Moth could
travel at 145 km/h (90 mph) and was light, strong, and easy to handle.

Instrument flying became practical in 1929, when the American inventor Elmer Sperry
perfected the artificial horizon and directional gyro. On September 24, 1929, James Doolittle,
an American pilot and army officer, proved the value of Sperry’s instruments by taking off,
flying over a predetermined course, and landing, all without visual reference to the Earth.

Introduced in 1933, Boeing’s Model 247 was considered the first truly modern airliner. It was
an all-metal, low-wing monoplane, with retractable landing gear, an insulated cabin, and room
for ten passengers. An order from United Air Lines for 60 planes of this type tied up Boeing’s
production line and led indirectly to the development of perhaps the most successful propeller
airliner in history, the Douglas DC-3. Trans World Airlines, not willing to wait for Boeing to
finish the order from United, approached airplane manufacturer Donald Douglas in Long
Beach, California, for an alternative, which became, in quick succession, the DC-1, the DC-2,
and the DC-3.

The DC-3 carried 21 passengers, used powerful, 1,000-horsepower engines, and could travel
across the country in less than 24 hours of travel time, although it had to stop many times for
fuel. The DC-3 quickly came to dominate commercial aviation in the late 1930s, and some DC-
3s are still in service today.

Boeing provided the next major breakthrough with its Model 307 Stratoliner, a pressurized
derivative of the famous B-17 bomber, entering service in 1940. With its regulated cabin air
pressure, the Stratoliner could carry 33 passengers at altitudes up to 6,000 m (20,000 ft) and
at speeds of 320 km/h (200 mph).

F Aircraft Developments of World War II

It was not until after World War II (1939-1945), when comfortable, pressurized air transports
became available in large numbers, that the airline industry really prospered. When the United
States entered World War II in 1941, there were fewer than 300 planes in airline service.
Airplane production concentrated mainly on fighters and bombers, and reached a rate of
nearly 50,000 a year by the end of the war. A large number of sophisticated new transports,
used in wartime for troop and cargo carriage, became available to commercial operators after
the war ended. Pressurized propeller planes such as the Douglas DC-6 and Lockheed
Constellation, early versions of which carried troops and VIPs during the war, now carried
paying passengers on transcontinental and transatlantic flights.

Wartime technology efforts also brought to aviation critical new developments, such as the jet
engine. Jet transportation in the commercial-aviation arena arrived in 1952 with Britain’s
DeHavilland Comet, an 885-km/h (550-mph), four-engine jet. The Comet quickly suffered two
fatal crashes due to structural problems and was grounded. This setback gave American
manufacturers Boeing and Douglas time to bring the 707 and DC-8 to the market. Pan
American World Airways inaugurated Boeing 707 jet service in October of 1958, and air travel
changed dramatically almost overnight. Transatlantic jet service enabled travelers to fly from
New York City to London, England, in less than eight hours, half the propeller-airplane time.
Boeing’s new 707 carried 112 passengers at high speed and quickly brought an end to the
propeller era for large commercial airplanes.

After the big, four-engine 707s and DC-8s had established themselves, airlines clamored for
smaller, shorter-range jets, and Boeing and Douglas delivered. Douglas produced the DC-9
and Boeing both the 737 and the trijet 727.

G The Jumbo Jet Era

The next frontier, pioneered in the late 1960s, was the age of the jumbo jet. Boeing,
McDonnell Douglas, and Lockheed all produced wide-body airliners, sometimes called jumbo
jets. Boeing developed and still builds the 747. McDonnell Douglas built a somewhat smaller,
three-engine jet called the DC-10, produced later in an updated version known as the MD-11.
Lockheed built the L-1011 Tristar, a trijet that competed with the DC-10. The L-1011 is no
longer in production, and Lockheed-Martin no longer builds commercial airliners.

In the 1980s McDonnell Douglas introduced the twin-engine MD-80 family, and Boeing
brought online the narrow-body 757 and wide-body 767 twin jets. Airbus had developed the
A300 wide-body twin during the 1970s. During the 1980s and 1990s Airbus expanded its
family of aircraft by introducing the slightly smaller A310 twin jet and the narrow-body A320
twin, a unique, so-called fly-by-wire aircraft with sidestick controllers for the pilots rather than
conventional control columns and wheels. Airbus also introduced the larger A330 twin and the
A340, a four-engine airplane for longer routes, on which passenger loads are somewhat
lighter. In 2000 the company launched production of the A380, a superjumbo jet that will seat
555 passengers on two decks, both of which extend the entire length of the fuselage.
Scheduled to enter service in 2006, the jet will be the world’s largest passenger airliner.

Boeing introduced the 777, a wide-body jumbo jet that can hold up to 400 passengers, in
1995. In 1997 Boeing acquired longtime rival McDonnell Douglas, and a year later the
company announced its intention to halt production of the passenger workhorses MD-11, MD-
80, and MD-90. The company ceded the superjumbo jet market to Airbus and instead focused
its efforts on developing a midsize passenger airplane.

Computer, machine that performs tasks, such as calculations or electronic communication,
under the control of a set of instructions called a program. Programs usually reside within the
computer and are retrieved and processed by the computer’s electronics. The program results
are stored or routed to output devices, such as video display monitors or printers. Computers
perform a wide variety of activities reliably, accurately, and quickly.

People use computers in many ways. In business, computers track inventories with bar codes
and scanners, check the credit status of customers, and transfer funds electronically. In
homes, tiny computers embedded in the electronic circuitry of most appliances control the
indoor temperature, operate home security systems, tell the time, and turn videocassette
recorders (VCRs) on and off. Computers in automobiles regulate the flow of fuel, thereby
increasing gas mileage. Computers also entertain, creating digitized sound on stereo systems
or computer-animated features from a digitally encoded laser disc. Computer programs, or
applications, exist to aid every level of education, from programs that teach simple addition or
sentence construction to programs that teach advanced calculus. Educators use computers to
track grades and communicate with students; with computer-controlled projection units, they
can add graphics, sound, and animation to their communications (see Computer-Aided
Instruction). Computers are used extensively in scientific research to solve mathematical
problems, investigate complicated data, or model systems that are too costly or impractical to
build, such as testing the air flow around the next generation of aircraft. The military employs
computers in sophisticated communications to encode and unscramble messages, and to keep
track of personnel and supplies.


The physical computer and its components are known as hardware. Computer hardware
includes the memory that stores data and program instructions; the central processing unit
(CPU) that carries out program instructions; the input devices, such as a keyboard or mouse,
that allow the user to communicate with the computer; the output devices, such as printers
and video display monitors, that enable the computer to present information to the user; and
buses (hardware lines or wires) that connect these and other computer components. The
programs that run the computer are called software. Software generally is designed to
perform a particular type of task—for example, to control the arm of a robot to weld a car’s
body, to write a letter, to display and modify a photograph, or to direct the general operation
of the computer.

A The Operating System

When a computer is turned on it searches for instructions in its memory. These instructions
tell the computer how to start up. Usually, one of the first sets of these instructions is a
special program called the operating system, which is the software that makes the computer
work. It prompts the user (or other machines) for input and commands, reports the results of
these commands and other operations, stores and manages data, and controls the sequence
of the software and hardware actions. When the user requests that a program run, the
operating system loads the program in the computer’s memory and runs the program. Popular
operating systems, such as Microsoft Windows and the Macintosh system (Mac OS), have
graphical user interfaces (GUIs) that use tiny pictures, or icons, to represent various files and
commands. To access these files or commands, the user clicks the mouse on the icon or
presses a combination of keys on the keyboard. Some operating systems allow the user to
carry out these tasks via voice, touch, or other input methods.

B Computer Memory
To process information electronically, data are stored in a computer in the form of binary
digits, or bits, each having two possible representations (0 or 1). If a second bit is added to a
single bit of information, the number of representations is doubled, resulting in four possible
combinations: 00, 01, 10, or 11. A third bit added to this two-bit representation again doubles
the number of combinations, resulting in eight possibilities: 000, 001, 010, 011, 100, 101,
110, or 111. Each time a bit is added, the number of possible patterns is doubled. Eight bits is
called a byte; a byte has 256 possible combinations of 0s and 1s. See also Expanded Memory;
Extended Memory.

A byte is a useful quantity in which to store information because it provides enough possible
patterns to represent the entire alphabet, in lower and upper cases, as well as numeric digits,
punctuation marks, and several character-sized graphics symbols, as well as accented and
other non-English characters. A byte also can be interpreted as a pattern that represents a number
between 0 and 255. A kilobyte—1,024 bytes—can store about 1,000 characters; a megabyte
can store about 1 million characters; a gigabyte can store about 1 billion characters; and a
terabyte can store about 1 trillion characters. Computer programmers usually decide how a
given byte should be interpreted—that is, as a single character, a character within a string of
text, a single number, or part of a larger number. Numbers can represent anything from
chemical bonds to dollar figures to colors to sounds.

The physical memory of a computer is either random access memory (RAM), which can be
read or changed by the user or computer, or read-only memory (ROM), which can be read by
the computer but not altered in any way. One way to store memory is within the circuitry of
the computer, usually in tiny computer chips that hold millions of bytes of information. The
memory within these computer chips is RAM. Memory also can be stored outside the circuitry
of the computer on external storage devices, such as magnetic floppy disks, which can store
about 2 megabytes of information; hard drives, which can store gigabytes of information;
compact discs (CDs), which can store up to 680 megabytes of information; and digital video
discs (DVDs), which can store 8.5 gigabytes of information. A single CD can store nearly as
much information as several hundred floppy disks, and some DVDs can hold more than 12
times as much data as a CD.

C The Bus

The bus enables the components in a computer, such as the CPU and the memory circuits, to
communicate as program instructions are being carried out. The bus is usually a flat cable with
numerous parallel wires. Each wire can carry one bit, so the bus can transmit many bits along
the cable at the same time. For example, a 16-bit bus, with 16 parallel wires, allows the
simultaneous transmission of 16 bits (2 bytes) of information from one component to another.
Early computer designs utilized a single or very few buses. Modern designs typically use many
buses, some of them specialized to carry particular forms of data, such as graphics.

D Input Devices

Input devices, such as a keyboard or mouse, permit the computer user to communicate with
the computer. Other input devices include a joystick, a rodlike device often used by people
who play computer games; a scanner, which converts images such as photographs into digital
images that the computer can manipulate; a touch panel, which senses the placement of a
user’s finger and can be used to execute commands or access files; and a microphone, used to
input sounds such as the human voice which can activate computer commands in conjunction
with voice recognition software. “Tablet” computers are being developed that will allow users
to interact with their screens using a penlike device.

E The Central Processing Unit

Information from an input device or from the computer’s memory is communicated via the bus
to the central processing unit (CPU), which is the part of the computer that translates
commands and runs programs. The CPU is a microprocessor chip—that is, a single piece of
silicon containing millions of tiny, microscopically wired electrical components. Information is
stored in a CPU memory location called a register. Registers can be thought of as the CPU’s
tiny scratchpad, temporarily storing instructions or data. When a program is running, one
special register called the program counter keeps track of which program instruction comes
next by maintaining the memory location of the next program instruction to be executed. The
CPU’s control unit coordinates and times the CPU’s functions, and it uses the program counter
to locate and retrieve the next instruction from memory.

In a typical sequence, the CPU locates the next instruction in the appropriate memory device.
The instruction then travels along the bus from the computer’s memory to the CPU, where it is
stored in a special instruction register. Meanwhile, the program counter changes—usually
increasing a small amount—so that it contains the location of the instruction that will be
executed next. The current instruction is analyzed by a decoder, which determines what the
instruction will do. Any data the instruction needs are retrieved via the bus and placed in the
CPU’s registers. The CPU executes the instruction, and the results are stored in another
register or copied to specific memory locations via a bus. This entire sequence of steps is
called an instruction cycle. Frequently, several instructions may be in process simultaneously,
each at a different stage in its instruction cycle. This is called pipeline processing.

F Output Devices

Once the CPU has executed the program instruction, the program may request that the
information be communicated to an output device, such as a video display monitor or a flat
liquid crystal display. Other output devices are printers, overhead projectors, videocassette
recorders (VCRs), and speakers. See also Input/Output Devices.


Programming languages contain the series of commands that create software. A CPU has a
limited set of instructions, known as machine code, that it is capable of understanding; this is
the only language the CPU can understand, so all other programming languages must be
converted to machine code before their instructions can be carried out. Computer
programmers, however, prefer to use languages built from words and other readable
commands because they are easier to work with. Programs in these languages must be
translated first, and the translation can lead to code that runs less efficiently than code
written directly in the machine’s language.

A Machine Language

Computer programs that can be run by a computer’s operating system are called executables.
An executable program is a sequence of extremely simple instructions known as machine
code. These instructions are specific to the individual computer’s CPU and associated
hardware; for example, Intel Pentium and Power PC microprocessor chips each have different
machine languages and require different sets of codes to perform the same task. Machine
code instructions are few in number (roughly 20 to 200, depending on the computer and the
CPU). Typical instructions are for copying data from a memory location or for adding the
contents of two memory locations (usually registers in the CPU). Complex tasks require a
sequence of these simple instructions. Machine code instructions are binary—that is,
sequences of bits (0s and 1s). Because these sequences are long strings of 0s and 1s and are
usually not easy to understand, computer instructions usually are not written in machine code.
Instead, computer programmers write code in languages known as an assembly language or a
high-level language.

B Assembly Language

Assembly language uses easy-to-remember commands that are more understandable to
programmers than machine-language commands. Each machine language instruction has an
equivalent command in assembly language. For example, in one Intel assembly language, the
statement “MOV A, B” instructs the computer to copy data from location A to location B. The
same instruction in machine code is a string of 16 0s and 1s. Once an assembly-language
program is written, it is converted to a machine-language program by another program called
an assembler.

Assembly language is fast and powerful because of its correspondence with machine language.
It is still difficult to use, however, because assembly-language instructions are a series of
abstract codes and each instruction carries out a relatively simple task. In addition, different
CPUs use different machine languages and therefore require different programs and different
assembly languages. Assembly language is sometimes inserted into a high-level language
program to carry out specific hardware tasks or to speed up parts of the high-level program
that are executed frequently.

C High-Level Languages

High-level languages were developed because of the difficulty of programming using assembly
languages. High-level languages are easier to use than machine and assembly languages
because their commands are closer to natural human language. In addition, these languages
are not CPU-specific. Instead, they contain general commands that work on different CPUs.
For example, a programmer writing in the high-level C++ programming language who wants
to display a greeting need include only the following command:

cout << "Hello, Encarta User!" << endl;

This command directs the computer’s CPU to display the greeting, and it will work no matter
what type of CPU the computer uses. When this statement is executed, the text that appears
between the quotes will be displayed. Although the “cout” and “endl” parts of the above
statement appear cryptic, programmers quickly become accustomed to their meanings. For
example, “cout” sends the greeting message to the “standard output” (usually the computer
user’s screen) and “endl” is how to tell the computer (when using the C++ language) to go to
a new line after it outputs the message. Like assembly-language instructions, high-level
languages also must be translated. This is the task of a special program called a compiler. A
compiler turns a high-level program into a CPU-specific machine language. For example, a
programmer may write a program in a high-level language such as C++ or Java and then
prepare it for different machines, such as a Sun Microsystems work station or a personal
computer (PC), using compilers designed for those machines. This simplifies the programmer’s
task and makes the software more portable to different users and machines.


American naval officer and mathematician Grace Murray Hopper helped develop the first
commercially available high-level software language, FLOW-MATIC, in 1957. Hopper is
credited for inventing the term bug, which indicates a computer malfunction; in 1945 she
discovered a hardware failure in the Mark II computer caused by a moth trapped between its
mechanical relays. She documented the event in her laboratory notebook, and the term
eventually came to represent any computer error, including one based strictly on incorrect
instructions in software. Hopper taped the moth into her notebook and wrote, “First actual
case of a bug being found.”


From 1954 to 1958 American computer scientist John Backus of International Business
Machines, Inc. (IBM) developed Fortran, an acronym for Formula Translation. It became a
standard programming language because it could process mathematical formulas. Fortran and
its variations are still in use today, especially in physics.


Hungarian-American mathematician John Kemeny and American mathematician Thomas Kurtz
at Dartmouth College in Hanover, New Hampshire, developed BASIC (Beginner’s All-purpose
Symbolic Instruction Code) in 1964. The language was easier to learn than its
predecessors and became popular due to its friendly, interactive nature and its inclusion on
early personal computers. Unlike languages that require all their instructions to be translated
into machine code first, BASIC is turned into machine language line by line as the program
runs. BASIC commands typify high-level languages because of their simplicity and their
closeness to natural human language. For example, a program that divides a number in half
can be written as


10 INPUT "ENTER A NUMBER"; X
20 Y=X/2
30 PRINT Y

The numbers that precede each line are chosen by the programmer to indicate the sequence
of the commands. The first line prints “ENTER A NUMBER” on the computer screen followed by
a question mark to prompt the user to type in the number labeled “X.” In the next line, that
number is divided by two and stored as “Y.” In the third line, the result of the operation is
displayed on the computer screen. Even though BASIC is rarely used today, this simple
program demonstrates how data are stored and manipulated in most high-level programming
languages.


Other high-level languages in use today include C, C++, Ada, Pascal, LISP, Prolog, COBOL,
Visual Basic, and Java. Some languages, such as the “markup languages” known as HTML,
XML, and their variants, are intended to display data, graphics, and media selections,
especially for users of the World Wide Web. Markup languages are often not considered
programming languages, but they have become increasingly sophisticated.

A Object-Oriented Programming Languages

Object-oriented programming (OOP) languages, such as C++ and Java, are based on
traditional high-level languages, but they enable a programmer to think in terms of collections
of cooperating objects instead of lists of commands. Objects, such as a circle, have properties
such as the radius of the circle and the command that draws it on the computer screen.
Classes of objects can inherit features from other classes of objects. For example, a class
defining squares can inherit features such as right angles from a class defining rectangles. This
set of programming classes simplifies the programmer’s task, resulting in more “reusable”
computer code. Reusable code allows a programmer to use code that has already been
designed, written, and tested. This makes the programmer’s task easier, and it results in more
reliable and efficient programs.


A Digital and Analog

Computers can be either digital or analog. Virtually all modern computers are digital. Digital
refers to the processes in computers that manipulate binary numbers (0s or 1s), which
represent switches that are turned on or off by electrical current. A bit can have the value 0 or
the value 1, but nothing in between 0 and 1. Analog refers to circuits or numerical values that
have a continuous range. Both 0 and 1 can be represented by analog computers, but so can
0.5, 1.5, or a number like π (approximately 3.14).

A desk lamp can serve as an example of the difference between analog and digital. If the lamp
has a simple on/off switch, then the lamp system is digital, because the lamp either produces
light at a given moment or it does not. If a dimmer replaces the on/off switch, then the lamp
is analog, because the amount of light can vary continuously from on to off and all intensities
in between.

Analog computer systems were the first type to be produced. A popular analog computer used
in the 20th century was the slide rule. To perform calculations with a slide rule, the user slides
a narrow, gauged wooden strip inside a rulerlike holder. Because the sliding is continuous and
there is no mechanism to stop at any exact values, the slide rule is analog. New interest has
been shown recently in analog computers, particularly in areas such as neural networks. These
are specialized computer designs that attempt to mimic neurons of the brain. They can be
built to respond to continuous electrical signals. Most modern computers, however, are digital
machines whose components have a finite number of states—for example, the 0 or 1, or on or
off bits. These bits can be combined to denote information such as numbers, letters, graphics,
sound, and program instructions.

B Range of Computer Ability

Computers exist in a wide range of sizes and power. The smallest are embedded within the
circuitry of appliances, such as televisions and wristwatches. These computers are typically
preprogrammed for a specific task, such as tuning to a particular television frequency,
delivering doses of medicine, or keeping accurate time. They generally are “hard-wired”—that
is, their programs are represented as circuits that cannot be reprogrammed.

Programmable computers vary enormously in their computational power, speed, memory, and
physical size. Some small computers can be held in one hand and are called personal digital
assistants (PDAs). They are used as notepads, scheduling systems, and address books; if
equipped with a cellular phone, they can connect to worldwide computer networks to
exchange information regardless of location. Hand-held game devices are also examples of
small computers.

Portable laptop and notebook computers and desktop PCs are typically used in businesses and
at home to communicate on computer networks, for word processing, to track finances, and
for entertainment. They have large amounts of internal memory to store hundreds of
programs and documents. They are equipped with a keyboard; a mouse, trackball, or other
pointing device; and a video display monitor or liquid crystal display (LCD) to display
information. Laptop and notebook computers usually have hardware and software similar to
PCs, but they are more compact and have flat, lightweight LCDs instead of television-like
video display monitors. Most sources consider the terms “laptop” and “notebook” synonymous.
Workstations are similar to personal computers but have greater memory and more extensive
mathematical abilities, and they are connected to other workstations or personal computers to
exchange data. They are typically found in scientific, industrial, and business environments—
especially financial ones, such as stock exchanges—that require complex and fast calculations.

Mainframe computers have more memory, speed, and capabilities than workstations and are
usually shared by multiple users through a series of interconnected computers. They control
businesses and industrial facilities and are used for scientific research. The most powerful
mainframe computers, called supercomputers, process complex and time-consuming
calculations, such as those used to create weather predictions. Large businesses, scientific
institutions, and the military use them. Some supercomputers have many sets of CPUs. These
computers break a task into small pieces, and each CPU processes a portion of the task to
increase overall speed and efficiency. Such computers are called parallel processors. As
computers have increased in sophistication, the boundaries between the various types have
become less rigid. The performance of various tasks and types of computing have also moved
from one type of computer to another. For example, networked PCs can work together on a
given task in a version of parallel processing known as distributed computing.


Computers can communicate with other computers through a series of connections and
associated hardware called a network. The advantage of a network is that data can be
exchanged rapidly, and software and hardware resources, such as hard-disk space or printers,
can be shared. Networks also allow remote use of a computer by a user who cannot physically
access the computer.

One type of network, a local area network (LAN), consists of several PCs or workstations
connected to a special computer called a server, often within the same building or office
complex. The server stores and manages programs and data. A server often contains all of a
networked group’s data and enables LAN workstations or PCs to be set up without large
storage capabilities. In this scenario, each PC may have “local” memory (for example, a hard
drive) specific to itself, but the bulk of storage resides on the server. This reduces the cost of
the workstation or PC because less expensive computers can be purchased, and it simplifies
the maintenance of software because the software resides only on the server rather than on
each individual workstation or PC.

Mainframe computers and supercomputers commonly are networked. They may be connected
to PCs, workstations, or terminals that have no computational abilities of their own. These
“dumb” terminals are used only to enter data into, or receive output from, the central
computer.

Wide area networks (WANs) are networks that span large geographical areas. Computers can
connect to these networks to use facilities in another city or country. For example, a person in
Los Angeles can browse through the computerized archives of the Library of Congress in
Washington, D.C. The largest WAN is the Internet, a global consortium of networks linked by
common communication programs and protocols (a set of established standards that enable
computers to communicate with each other). The Internet is a mammoth resource of data,
programs, and utilities. American computer scientist Vinton Cerf was largely responsible for
creating the Internet in 1973 as part of the United States Department of Defense Advanced
Research Projects Agency (DARPA). In 1984 the development of Internet technology was
turned over to private, government, and scientific agencies. The World Wide Web, developed
beginning in 1989 by British physicist Timothy Berners-Lee, is a system of information resources
accessed primarily through the Internet. Users can obtain a variety of information in the form
of text, graphics, sounds, or video. These data are extensively cross-indexed, enabling users
to browse (transfer their attention from one information site to another) via buttons,
highlighted text, or sophisticated searching software known as search engines.


A Beginnings

The history of computing began with an analog machine. In 1623 German scientist Wilhelm
Schikard invented a machine that used 11 complete and 6 incomplete sprocketed wheels that
could add, and with the aid of logarithm tables, multiply and divide.

French philosopher, mathematician, and physicist Blaise Pascal invented a machine in 1642
that added and subtracted, automatically carrying and borrowing digits from column to
column. Pascal built 50 copies of his machine, but most served as curiosities in parlors of the
wealthy. Seventeenth-century German mathematician Gottfried Leibniz designed a special
gearing system to enable multiplication on Pascal’s machine.

B First Punch Cards

In the early 19th century French inventor Joseph-Marie Jacquard devised a specialized type of
computer: a silk loom. Jacquard’s loom used punched cards to program patterns that helped
the loom create woven fabrics. Although Jacquard was rewarded and admired by French
emperor Napoleon I for his work, he fled for his life from the city of Lyon pursued by weavers
who feared their jobs were in jeopardy due to Jacquard’s invention. The loom prevailed,
however: When Jacquard died, more than 30,000 of his looms existed in Lyon. The looms are
still used today, especially in the manufacture of fine furniture fabrics.

C Precursor to Modern Computer

Another early mechanical computer was the Difference Engine, designed in the early 1820s by
British mathematician and scientist Charles Babbage. Although never completed by Babbage,
the Difference Engine was intended to be a machine with a 20-decimal capacity that could
solve mathematical problems. Babbage also made plans for another machine, the Analytical
Engine, considered the mechanical precursor of the modern computer. The Analytical Engine
was designed to perform all arithmetic operations efficiently; however, Babbage’s lack of
political skills kept him from obtaining the approval and funds to build it.

Augusta Ada Byron, countess of Lovelace, was a personal friend and student of Babbage. She
was the daughter of the famous poet Lord Byron and one of only a few woman
mathematicians of her time. She prepared extensive notes concerning Babbage’s ideas and
the Analytical Engine. Lovelace’s conceptual programs for the machine led to the naming of a
programming language (Ada) in her honor. Although the Analytical Engine was never built, its
key concepts, such as the capacity to store instructions, the use of punched cards as a
primitive memory, and the ability to print, can be found in many modern computers.


A Early Electronic Calculators

Herman Hollerith, an American inventor, used an idea similar to Jacquard’s loom when he
combined the use of punched cards with devices that created and electronically read the
cards. Hollerith’s tabulator was used for the 1890 U.S. census, and it made the computational
time three to four times shorter than the time previously needed for hand counts. Hollerith’s
Tabulating Machine Company eventually merged with two companies to form the Computing-
Tabulating-Recording Company. In 1924 the company changed its name to International
Business Machines (IBM).

In 1936 British mathematician Alan Turing proposed the idea of a machine that could process
equations without human direction. The machine (now known as a Turing machine) resembled
an automatic typewriter that used symbols for math and logic instead of letters. Turing
intended the device to be a “universal machine” that could be used to duplicate or represent
the function of any other existing machine. Turing’s machine was the theoretical precursor to
the modern digital computer. The Turing machine model is still used in modern computational
theory.

In the 1930s American mathematician Howard Aiken developed the Mark I calculating
machine, which was built by IBM. This electromechanical calculating machine used relays and
electromagnetic components to replace mechanical components. In later machines, Aiken used
vacuum tubes and solid state transistors (tiny electrical switches) to manipulate the binary
numbers. Aiken also introduced computers to universities by establishing the first computer
science program at Harvard University in Cambridge, Massachusetts. Aiken obsessively
mistrusted the concept of storing a program within the computer, insisting that the integrity of
the machine could be maintained only through a strict separation of program instructions from
data. His computer had to read instructions from punched cards, which could be stored away
from the computer. He also urged the National Bureau of Standards not to support the
development of computers, insisting that there would never be a need for more than five or
six of them nationwide.


At the Institute for Advanced Study in Princeton, New Jersey, Hungarian-American
mathematician John von Neumann developed one of the first computers used to solve
problems in mathematics, meteorology, economics, and hydrodynamics. Von Neumann's 1945
design for the Electronic Discrete Variable Automatic Computer (EDVAC)—in stark contrast to
the designs of Aiken, his contemporary—was the first electronic computer design to
incorporate a program stored entirely within its memory. This machine led to several others,
some with clever names like ILLIAC, JOHNNIAC, and MANIAC.

American physicist John Mauchly proposed the electronic digital computer called ENIAC, the
Electronic Numerical Integrator And Computer. He helped build it along with American
engineer John Presper Eckert, Jr., at the Moore School of Engineering at the University of
Pennsylvania in Philadelphia. ENIAC was operational in 1945 and introduced to the public in
1946. It is regarded as the first successful general-purpose electronic digital computer. It occupied 167 sq m
(1,800 sq ft), weighed more than 27,000 kg (60,000 lb), and contained more than 18,000
vacuum tubes. Roughly 2,000 of the computer’s vacuum tubes were replaced each month by a
team of six technicians. Many of ENIAC’s first tasks were for military purposes, such as
calculating ballistic firing tables and designing atomic weapons. Since ENIAC was initially not a
stored program machine, it had to be reprogrammed for each task.

Eckert and Mauchly eventually formed their own company, which was then bought by
Remington Rand. They produced the Universal Automatic Computer (UNIVAC), which was
used for a broader variety of commercial applications. The first UNIVAC was delivered to the
United States Census Bureau in 1951. By 1957, there were 46 UNIVACs in use.

Between 1937 and 1939, while teaching at Iowa State College, American physicist John
Vincent Atanasoff built a prototype computing device called the Atanasoff-Berry Computer, or
ABC, with the help of his assistant, Clifford Berry. Atanasoff developed the concepts that were
later used in the design of the ENIAC. Atanasoff’s device was the first computer to separate
data processing from memory, but it is not clear whether a functional version was ever built.
Atanasoff did not receive credit for his contributions until 1973, when a lawsuit regarding the
patent on ENIAC was settled.


In 1948, at Bell Telephone Laboratories, American physicists Walter Houser Brattain, John
Bardeen, and William Bradford Shockley developed the transistor, a device that can act as an
electric switch. The transistor had a tremendous impact on computer design, replacing costly,
energy-inefficient, and unreliable vacuum tubes.

In the late 1960s integrated circuits (tiny transistors and other electrical components arranged
on a single chip of silicon) replaced individual transistors in computers. Integrated circuits
resulted from the simultaneous, independent work of Jack Kilby at Texas Instruments and
Robert Noyce of the Fairchild Semiconductor Corporation in the late 1950s. As integrated
circuits became miniaturized, more components could be designed into a single computer
circuit. In the 1970s refinements in integrated circuit technology led to the development of the
modern microprocessor, integrated circuits that contained thousands of transistors. Modern
microprocessors can contain more than 40 million transistors.

Manufacturers used integrated circuit technology to build smaller and cheaper computers. The
first of these so-called personal computers (PCs)—the Altair 8800—appeared in 1975, sold by
Micro Instrumentation Telemetry Systems (MITS). The Altair used an 8-bit Intel 8080
microprocessor, had 256 bytes of RAM, received input through switches on the front panel,
and displayed output on rows of light-emitting diodes (LEDs). Refinements in the PC continued
with the inclusion of video displays, better storage devices, and CPUs with more computational
abilities. Graphical user interfaces were first designed by the Xerox Corporation, then later
used successfully by Apple Computer, Inc. Today the development of sophisticated operating
systems such as Windows, the Mac OS, and Linux enables computer users to run programs
and manipulate data in ways that were unimaginable in the mid-20th century.

Several researchers claim the “record” for the largest single calculation ever performed. One
large single calculation was accomplished by physicists at IBM in 1995. They solved one
million trillion mathematical subproblems by continuously running 448 computers for two
years. Their analysis demonstrated the existence of a previously hypothetical subatomic
particle called a glueball. Japan, Italy, and the United States are collaborating to develop new
supercomputers that will run these types of calculations 100 times faster.

In 1996 IBM challenged Garry Kasparov, the reigning world chess champion, to a chess match
with a supercomputer called Deep Blue. The computer had the ability to compute more than
100 million chess positions per second. In a 1997 rematch Deep Blue defeated Kasparov,
becoming the first computer to win a match against a reigning world chess champion with
regulation time controls. Many experts predict these types of parallel processing machines will
soon surpass human chess playing ability, and some speculate that massive calculating power
will one day replace intelligence. Deep Blue serves as a prototype for future computers that
will be required to solve complex problems. At issue, however, is whether a computer can be
developed with the ability to learn to solve problems on its own, rather than one programmed
to solve a specific set of tasks.


In 1965 semiconductor pioneer Gordon Moore predicted that the number of transistors
contained on a computer chip would double every year. This is now known as Moore’s Law,
and it has proven to be somewhat accurate. The number of transistors and the computational
speed of microprocessors currently double approximately every 18 months. Components
continue to shrink in size and are becoming faster, cheaper, and more versatile.
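The doubling rule can be expressed as a one-line formula. The sketch below assumes an illustrative starting count of one million transistors; the figure is not from the text, only the 18-month doubling period is.

```python
def transistors_after(initial_count, months, doubling_period_months=18):
    """Project a transistor count under the 18-month doubling rule (Moore's Law)."""
    return initial_count * 2 ** (months / doubling_period_months)

# Illustrative: 1 million transistors today, 72 months (four doublings) later
projected = transistors_after(1_000_000, 72)  # 16 million
```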

With their increasing power and versatility, computers simplify day-to-day life. Unfortunately,
as computer use becomes more widespread, so do the opportunities for misuse. Computer
hackers—people who illegally gain access to computer systems—often violate privacy and can
tamper with or destroy records. Programs called viruses or worms can replicate and spread
from computer to computer, erasing information or causing malfunctions. Other individuals
have used computers to electronically embezzle funds and alter credit histories (see Computer
Security). New ethical issues also have arisen, such as how to regulate material on the
Internet and the World Wide Web. Long-standing issues, such as privacy and freedom of
expression, are being reexamined in light of the digital revolution. Individuals, companies, and
governments are working to solve these problems through informed conversation,
compromise, better computer security, and regulatory legislation.

Computers will become more advanced, and they will also become easier to use. Improved
speech recognition will simplify the operation of a computer. Virtual reality, the
technology of interacting with a computer using all of the human senses, will also contribute to
better human and computer interfaces. Standards for virtual-reality program languages—for
example, Virtual Reality Modeling Language (VRML)—are currently in use or are being
developed for the World Wide Web.

Other, exotic models of computation are being developed, including biological computing that
uses living organisms, molecular computing that uses molecules with particular properties,
and computing that uses deoxyribonucleic acid (DNA), the basic unit of heredity, to store data
and carry out operations. These are examples of possible future computational platforms that,
so far, are limited in abilities or are strictly theoretical. Scientists investigate them because of
the physical limitations of miniaturizing circuits embedded in silicon. There are also limitations
related to heat generated by even the tiniest of transistors.

Intriguing breakthroughs occurred in the area of quantum computing in the late 1990s.
Quantum computers under development use components of a chloroform molecule (a
combination of carbon, hydrogen, and chlorine atoms) and a variation of a medical procedure called
magnetic resonance imaging (MRI) to compute at a molecular level. Scientists use a branch of
physics called quantum mechanics, which describes the behavior of subatomic particles
(particles that make up atoms), as the basis for quantum computing. Quantum computers
may one day be thousands to millions of times faster than current computers, because they
take advantage of the laws that govern the behavior of subatomic particles. These laws allow
quantum computers to examine all possible answers to a query simultaneously. Future uses of
quantum computers could include code breaking (see cryptography) and large database
queries. Theorists of chemistry, computer science, mathematics, and physics are now working
to determine the possibilities and limitations of quantum computing.

Communications between computer users and networks will benefit from new technologies
such as broadband communication systems that can carry significantly more data faster or
more conveniently to and from the vast interconnected databases that continue to grow in
number and type.

Laser, a device that produces and amplifies light. The word laser is an
acronym for Light Amplification by Stimulated Emission of Radiation. Laser light is very pure in
color, can be extremely intense, and can be directed with great accuracy. Lasers are used in
many modern technological devices including bar code readers, compact disc (CD) players,
and laser printers. Lasers can generate light beyond the range visible to the human eye, from
the infrared through the X-ray range. Masers are similar devices that produce and amplify
microwaves rather than visible light.

Lasers generate light by storing energy in particles called electrons inside atoms and then
inducing the electrons to emit the absorbed energy as light. Atoms are the building blocks of
all matter on Earth and are a thousand times smaller than viruses. Electrons are the
underlying source of almost all light.

Light is composed of tiny packets of energy called photons. Lasers produce coherent light:
light that is monochromatic (one color) and whose photons are “in step” with one another.

A Excited Atoms

At the heart of an atom is a tightly bound cluster of particles called the nucleus. This cluster is
made up of two types of particles: protons, which have a positive charge, and neutrons, which
have no charge. The nucleus makes up more than 99.9 percent of the atom’s mass but
occupies only a tiny part of the atom’s space. Enlarge an atom up to the size of Yankee
Stadium and the equally magnified nucleus is only the size of a baseball.
Electrons, tiny particles that have a negative charge, whirl through the rest of the space inside
atoms. Electrons travel in complex orbits and exist only in certain specific energy states or
levels (see Quantum Theory). Electrons can move from a low to a high energy level by
absorbing energy. An atom with at least one electron that occupies a higher energy level than
it normally would is said to be excited. An atom can become excited by absorbing a photon
whose energy equals the difference between the two energy levels. A photon’s energy, color,
frequency, and wavelength are directly related: All photons of a given energy are the same
color and have the same frequency and wavelength.
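That relationship is E = hf = hc/λ: a photon's energy is Planck's constant times its frequency, and its frequency is the speed of light divided by its wavelength. The sketch below uses the standard constants; the 633 nm example wavelength is the familiar red of a helium-neon laser.

```python
H = 6.626e-34  # Planck's constant, joule-seconds
C = 2.998e8    # speed of light in vacuum, meters per second

def photon_frequency(wavelength_m):
    """Frequency of light of a given wavelength: f = c / wavelength."""
    return C / wavelength_m

def photon_energy(wavelength_m):
    """Energy of a single photon: E = h * f = h * c / wavelength."""
    return H * photon_frequency(wavelength_m)

# Red light at 633 nm: every photon of this color carries the same energy
energy_joules = photon_energy(633e-9)  # roughly 3.1e-19 J
```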

Usually, electrons quickly jump back to the low energy level, giving off the extra energy as
light (see Photoelectric Effect). Neon signs and fluorescent lamps glow with this kind of light as
many electrons independently emit photons of different colors in all directions.

B Stimulated Emission

Lasers are different from more familiar sources of light. Excited atoms in lasers collectively
emit photons of a single color, all traveling in the same direction and all in step with one
another. When two photons are in step, the peaks and troughs of their waves line up. The
electrons in the atoms of a laser are first pumped, or energized, to an excited state by an
energy source. An excited atom can then be “stimulated” by a photon of exactly the same
color (or, equivalently, the same wavelength) as the photon this atom is about to emit
spontaneously. If the photon approaches closely enough, the photon can stimulate the excited
atom to immediately emit light that has the same wavelength and is in step with the photon
that interacted with it. This stimulated emission is the key to laser operation. The new light
adds to the existing light, and the two photons go on to stimulate other excited atoms to give
up their extra energy, again in step. The phenomenon snowballs into an amplified, coherent
beam of light: laser light.

In a gas laser, for example, the photons usually zip back and forth in a gas-filled tube with
highly reflective mirrors facing inward at each end. As the photons bounce between the two
parallel mirrors, they trigger further stimulated emissions and the light gets brighter and
brighter with each pass through the excited atoms. One of the mirrors is only partially
silvered, allowing a small amount of light to pass through rather than reflecting it all. The
intense, directional, and single-colored laser light finally escapes through this slightly
transparent mirror. The escaped light forms the laser beam.
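The buildup between the mirrors can be mimicked with a deliberately simplified toy model. Real laser gain saturates as excited atoms are depleted, and the gain and reflectivity figures here are illustrative assumptions, not data from the text.

```python
def cavity_buildup(initial, gain_per_pass, output_reflectivity, passes):
    """Toy model of light amplification between two cavity mirrors.

    On each pass, stimulated emission multiplies the circulating intensity by
    gain_per_pass; the partially silvered output mirror then transmits a
    fraction (1 - output_reflectivity) as the emerging laser beam.
    Returns (circulating_intensity, total_emitted_beam)."""
    circulating, emitted = initial, 0.0
    for _ in range(passes):
        circulating *= gain_per_pass                 # amplified by excited atoms
        out = circulating * (1 - output_reflectivity)
        emitted += out                               # escapes through the mirror
        circulating -= out                           # the rest reflects back
    return circulating, emitted

# 30% gain per pass against a 95%-reflective output mirror: net growth per
# pass is 1.30 * 0.95 = 1.235, so the light gets brighter on every trip.
inside, beam = cavity_buildup(1.0, 1.30, 0.95, passes=20)
```

As long as the net per-pass factor exceeds 1, the intensity snowballs, which is exactly the condition for lasing in this simplified picture.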

Albert Einstein first proposed stimulated emission, the underlying process for laser action, in
1917. Translating the idea of stimulated emission into a working model, however, required
more than four decades. The working principles of lasers were outlined by the American
physicists Charles Hard Townes and Arthur Leonard Schawlow in a 1958 patent application.
(Both men won Nobel Prizes in physics for their work, Townes in 1964 and Schawlow in 1981).
The patent for the laser was granted to Townes and Schawlow, but it was later challenged by
the American physicist and engineer Gordon Gould, who had written down some ideas and
coined the word laser in 1957. Gould eventually won a partial patent covering several types of
laser. In 1960 American physicist Theodore Maiman of Hughes Research Laboratories
constructed the first working laser from a ruby rod.


Lasers are generally classified according to the material, called the medium, they use to
produce the laser light. Solid-state, gas, liquid, semiconductor, and free electron are all
common types of lasers.

A Solid-State Lasers

Solid-state lasers produce light by means of a solid medium. The most common solid laser
media are rods of ruby crystals and neodymium-doped glasses and crystals. The ends of the
rods are fashioned into two parallel surfaces coated with a highly reflecting nonmetallic film.
Solid-state lasers offer the highest power output. They are usually pulsed to generate a very
brief burst of light. Bursts as short as 12 × 10⁻¹⁵ sec have been achieved. These short bursts
are useful for studying physical phenomena of very brief duration.

One method of exciting the atoms in lasers is to illuminate the solid laser material with higher-
energy light than the laser produces. This procedure, called pumping, is achieved with brilliant
strobe light from xenon flash tubes, arc lamps, or metal-vapor lamps.
B Gas Lasers

The lasing medium of a gas laser can be a pure gas, a mixture of gases, or even metal vapor.
The medium is usually contained in a cylindrical glass or quartz tube. Two mirrors are located
outside the ends of the tube to form the laser cavity. Gas lasers can be pumped by ultraviolet
light, electron beams, electric current, or chemical reactions. The helium-neon laser is known
for its color purity and minimal beam spread. Carbon dioxide lasers are very efficient at
turning the energy used to excite their atoms into laser light. Consequently, they are the most
powerful continuous wave (CW) lasers—that is, lasers that emit light continuously rather than
in pulses.

C Liquid Lasers

The most common liquid laser media are organic dyes contained in glass vessels. They are
pumped by intense flash lamps in a pulse mode or by a separate gas laser in the continuous
wave mode. Some dye lasers are tunable, meaning that the color of the laser light they emit
can be adjusted with the help of a prism located inside the laser cavity.

D Semiconductor Lasers

Semiconductor lasers are the most compact lasers. Gallium arsenide is the most common
semiconductor used. A typical semiconductor laser consists of a junction between two flat
layers of gallium arsenide. One layer is treated with an impurity whose atoms provide an extra
electron, and the other with an impurity whose atoms are one electron short. Semiconductor
lasers are pumped by the direct application of electric current across the junction. They can be
operated in the continuous wave mode with better than 50 percent efficiency. Only a small
percentage of the energy used to excite most other lasers is converted into light.

Scientists have developed extremely tiny semiconductor lasers, called quantum-dot vertical-
cavity surface-emitting lasers. These lasers are so tiny that more than a million of them can fit
on a chip the size of a fingernail.

Common uses for semiconductor lasers include compact disc (CD) players and laser printers.
Semiconductor lasers also form the heart of fiber-optics communication systems (see Fiber
Optics).

E Free Electron Lasers

Free electron lasers employ an array of magnets to excite free electrons (electrons not bound
to atoms). First developed in 1977, they are now becoming important research instruments.
Free electron lasers are tunable over a broader range of energies than dye lasers. The devices
become more difficult to operate at higher energies but generally work successfully from
infrared through ultraviolet wavelengths. Theoretically, free electron lasers can function even in the
X-ray range.

The free electron laser facility at the University of California at Santa Barbara uses intense far-
infrared light to investigate mutations in DNA molecules and to study the properties of
semiconductor materials. Free electron lasers should also eventually become capable of
producing very high-power radiation that is currently too expensive to generate by other
means. At high power, near-infrared beams from a free electron laser could defend against a missile attack.


The use of lasers is restricted only by imagination. Lasers have become valuable tools in
industry, scientific research, communications, medicine, the military, and the arts.

A Industry

Powerful laser beams can be focused on a small spot to generate enormous temperatures.
Consequently, the focused beams can readily and precisely heat, melt, or vaporize material.
Lasers have been used, for example, to drill holes in diamonds, to shape machine tools, to
trim microelectronics, to cut fashion patterns, to synthesize new material, and to attempt to
induce controlled nuclear fusion (see Nuclear Energy).
Highly directional laser beams are used for alignment in construction. Perfectly straight and
uniformly sized tunnels, for example, may be dug using lasers for guidance. Powerful, short
laser pulses also make high-speed photography with exposure times of only several trillionths
of a second possible.

B Scientific Research

Because laser light is highly directional and monochromatic, extremely small amounts of light
scattering and small shifts in color caused by the interaction between laser light and matter
can easily be detected. By measuring the scattering and color shifts, scientists can study
molecular structures of matter. Chemical reactions can be selectively induced, and the
existence of trace substances in samples can be detected. Lasers are also the most effective
detectors of certain types of air pollution (see Chemical Analysis; Photochemistry).

Scientists use lasers to make extremely accurate measurements. Lasers are used in this way
for monitoring small movements associated with plate tectonics and for geographic surveys.
Lasers have been used for precise determination (to within one inch) of the distance between
Earth and the Moon, and in precise tests to confirm Einstein’s theory of relativity. Scientists
also have used lasers to determine the speed of light to an unprecedented accuracy.
Very fast laser-activated switches are being developed for use in particle accelerators.
Scientists also use lasers to trap single atoms and subatomic particles in order to study these
tiny bits of matter (see Particle Trap).
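The lunar distance measurement above is time-of-flight arithmetic: a pulse is fired at a reflector on the Moon, the round-trip time is clocked, and the distance is the speed of light times half that time. A minimal sketch, in which the 2.56-second round trip is an approximate illustrative value:

```python
C = 2.998e8  # speed of light, meters per second

def range_from_round_trip(round_trip_seconds):
    """Distance to a reflecting target: the pulse travels out and back,
    so the one-way distance is half the round-trip path."""
    return C * round_trip_seconds / 2

# A lunar round trip of about 2.56 s corresponds to roughly 384,000 km
distance_km = range_from_round_trip(2.56) / 1000
```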

C Communications

Laser light can travel a large distance in outer space with little reduction in signal strength. In
addition, high-energy laser light can carry 1,000 times as many television channels as the
microwave signals used today. Lasers are therefore ideal for space communications. Low-loss optical
fibers have been developed to transmit laser light for earthbound communication in telephone
and computer systems. Laser techniques have also been used for high-density information
recording. For instance, laser light simplifies the recording of a hologram, from which a three-
dimensional image can be reconstructed with a laser beam. Lasers are also used to play audio
CDs and videodiscs (see Sound Recording and Reproduction).

D Medicine

Lasers have a wide range of medical uses. Intense, narrow beams of laser light can cut and
cauterize certain body tissues in a small fraction of a second without damaging surrounding
healthy tissues. Lasers have been used to “weld” the retina, bore holes in the skull, vaporize
lesions, and cauterize blood vessels. Laser surgery has virtually replaced older surgical
procedures for eye disorders. Laser techniques have also been developed for lab tests of small
biological samples.

E Military Applications
Laser guidance systems for missiles, aircraft, and satellites have been constructed. Guns can
be fitted with laser sights and range finders. The use of laser beams to destroy hostile ballistic
missiles has been proposed, as in the Strategic Defense Initiative urged by U.S. president
Ronald Reagan and the Ballistic Missile Defense program supported by President George W.
Bush. The ability of tunable dye lasers to selectively excite an atom or molecule may open up
more efficient ways to separate isotopes for construction of nuclear weapons.


Because the eye focuses laser light just as it does other light, the chief danger in working with
lasers is eye damage. Therefore, laser light should not be viewed either directly or reflected.

Lasers sold and used commercially in the United States must comply with a strict set of laws
enforced by the Center for Devices and Radiological Health (CDRH), a department of the Food
and Drug Administration. The CDRH has divided lasers into six groups, depending on their
power output, their emission duration, and the energy of the photons they emit. The
classification is then attached to the laser as a sticker. The higher the laser’s energy, the
higher its potential to injure. High-powered lasers of the Class IV type (the highest
classification) generate a beam of energy that can start fires, burn flesh, and cause permanent
eye damage whether the light is direct, reflected, or diffused. Canada uses the same
classification system, and laser use in Canada is overseen by Health Canada’s Radiation
Protection Bureau.
Goggles blocking the specific color of photons that a laser produces are mandatory for the safe
use of lasers. Even with goggles, direct exposure to laser light should be avoided.

Materials Science and Technology


Materials Science and Technology, the study of materials, nonmetallic as well as metallic, and
how they can be adapted and fabricated to meet the needs of modern technology. Using the
laboratory techniques and research tools of physics, chemistry, and metallurgy, scientists are
finding new ways of using plastics, ceramics, and other nonmetals in applications formerly
reserved for metals.


The rapid development of semiconductors (see Semiconductor) for the electronics industry,
beginning in the early 1960s, gave materials science its first major impetus. Having
discovered that nonmetallic materials such as silicon could be made to conduct electricity in
ways that metals could not, scientists and engineers devised ways of fashioning thousands of
tiny integrated circuits (see Integrated Circuit) on a small chip of silicon. This then made it
possible to miniaturize the components of electronic devices such as computers.

In the late 1980s, materials science research was given renewed emphasis with the discovery
of ceramics that display superconductivity at higher temperatures than metals do. If the
temperature at which these new materials become superconductive can be raised high
enough, new applications, including levitating trains and superfast computers, are possible.
Although the latest developments in materials science have tended to focus on electrical
properties, mechanical properties are also of major, continuing importance. For the aircraft
industry, for instance, scientists have been developing, and engineers testing, nonmetallic
composite materials that are lighter, stronger, and easier to fabricate than the aluminum and
other metals currently used to form the outer skin of aircraft.


Engineers must know how solid materials respond to external forces, such as tension,
compression, torsion, bending, and shear. Solid materials respond to these forces by elastic
deformation (that is, the material returns to its original size and form when the external force
is lifted), permanent deformation, or fracture. Time-dependent effects of external forces are
creep and fatigue, which are defined below.

Tension is a pulling force that acts in one direction; an example is the force in a cable holding
a weight. Under tension, a material usually stretches, returning to its original length if the
force does not exceed the material's elastic limit (see Elasticity). Under larger tensions, the
material does not return completely to its original condition, and under even greater forces the
material ruptures.
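Within the elastic limit this behavior follows Hooke's law: stress (force per unit area) is proportional to strain (fractional stretch). The sketch below assumes typical textbook figures for a steel rod; the modulus and elastic limit are illustrative, not from this article.

```python
def tensile_response(force_n, area_m2, youngs_modulus_pa, elastic_limit_pa):
    """Stress and elastic strain in a rod under tension.

    stress = F / A; strain = stress / E while the material stays elastic.
    Returns (stress_pa, strain, still_elastic)."""
    stress = force_n / area_m2
    strain = stress / youngs_modulus_pa  # Hooke's law, valid below the limit
    return stress, strain, stress <= elastic_limit_pa

# 10 kN pulling on a 1 cm^2 steel rod (E ~ 200 GPa, elastic limit ~ 250 MPa)
stress, strain, elastic = tensile_response(10_000, 1e-4, 200e9, 250e6)
# 100 MPa of stress, 0.05% strain: the rod springs back when released
```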

Compression is the decrease in volume that results from the application of pressure. When a
material is subjected to a bending, shearing, or torsional (twisting) force, both tensile and
compressive forces are simultaneously at work. When a rod is bent, for example, one side of it
is stretched and subjected to a tensional force, and the other side is compressed.

Creep is a slowly progressing, permanent deformation that results from a steady force acting
on a material. Materials subjected to high temperatures are especially susceptible to this
deformation. The gradual loosening of bolts, the sagging of long-span cables, and the
deformation of components of machines and engines are all noticeable examples of creep. In
many cases the slow deformation stops because the force causing the creep is eliminated by
the deformation itself. Creep extended over a long time eventually leads to the rupture of the

Fatigue can be defined as progressive fracture. It occurs when a mechanical part is subjected
to a repeated or cyclic stress, such as vibration. Even when the maximum stress never
exceeds the elastic limit, the material can fail after a relatively short time. With some
metals, such as titanium alloys, fatigue can be avoided by keeping the cyclic force below a
certain level. No deformation is apparent during fatigue, but small localized cracks develop
and propagate through the material until the remaining cross-sectional area cannot support
the maximum stress of the cyclic force. Knowledge of tensile stress, elastic limits, and the
resistance of materials to creep and fatigue is of basic importance in engineering.

Refrigeration, process of lowering the temperature and maintaining it in a given space for the
purpose of chilling foods, preserving certain substances, or providing an atmosphere
conducive to bodily comfort. Storing perishable foods, furs, pharmaceuticals, or other items
under refrigeration is commonly known as cold storage. Such refrigeration checks both
bacterial growth and adverse chemical reactions that occur in the normal atmosphere.

The use of natural or manufactured ice for refrigeration was widespread until shortly before
World War I, when mechanical or electric refrigerators became available. Ice owes its
effectiveness as a cooling agent to the fact that it has a constant fusion temperature of 0° C
(32° F). In order to melt, ice must absorb heat amounting to 333.1 kJ/kg (143.3 Btu/lb).
Melting ice in the presence of a dissolving salt lowers its melting point by several degrees.
Foodstuffs maintained at this temperature or slightly above have an increased storage life.
Solid carbon dioxide, known as dry ice, is used also as a refrigerant. Having no liquid phase at
normal atmospheric pressure, it sublimes directly from the solid to vapor phase at a
temperature of -78.5° C (-109.3° F). Dry ice is effective for maintaining products at low
temperatures during the period of sublimation.
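The figure of 333.1 kJ/kg makes the cooling effect of ice easy to quantify: multiply the mass melted by the heat of fusion. A small sketch:

```python
HEAT_OF_FUSION_ICE_KJ_PER_KG = 333.1  # heat absorbed per kilogram of ice melted

def cooling_from_melting(ice_mass_kg):
    """Heat drawn from the surroundings as the ice melts, in kilojoules."""
    return ice_mass_kg * HEAT_OF_FUSION_ICE_KJ_PER_KG

# A 5 kg block of ice absorbs about 1,665 kJ while melting at 0 deg C
q_kj = cooling_from_melting(5)
```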

In mechanical refrigeration, constant cooling is achieved by the circulation of a refrigerant in a
closed system, in which it evaporates to a gas and then condenses back again to a liquid in a
continuous cycle. If no leakage occurs, the refrigerant lasts indefinitely throughout the entire
life of the system. All that is required to maintain cooling is a constant supply of energy, or
power, and a method of dissipating waste heat. The two main types of mechanical
refrigeration systems used are the compression system, used in domestic units, for large cold-
storage applications, and for most air conditioning; and the absorption system, now employed
largely for heat-operated air-conditioning units but formerly also used for heat-operated
domestic units.


Compression systems employ four elements in the refrigeration cycle: compressor, condenser,
expansion valve, and evaporator. In the evaporator the refrigerant is vaporized and heat is
absorbed from the material contents or the space being cooled. The vapor next is drawn into a
motor-driven compressor and elevated to high pressure, which raises its temperature. The
resulting superheated, high-pressure gas is then condensed to liquid in an air- or water-cooled
condenser. From the condenser the liquid flows through an expansion valve, in which its
pressure and temperature are reduced to the conditions that are maintained in the evaporator.


For every refrigerant there is a specific boiling, or vaporization, temperature associated with
each pressure, so that it is only necessary to control the pressure in the evaporator to obtain a
desired temperature. A similar pressure-temperature relationship holds in the condenser. One
of the most widely used refrigerants for many years has been dichlorodifluoromethane, known
popularly as Refrigerant-12. This synthetic chlorofluorocarbon (CFC) when used as a
refrigerant would, for example, vaporize at -6.7° C (20° F) in its evaporator under a pressure
of 246.2 kPa (35.7 psi), and after compression to 909.2 kPa (131.9 psi) would condense at
37.8° C (100° F) in the condenser. The resulting condensed liquid would then enter the
expansion valve to drop to evaporator pressure and repeat the cycle of absorbing heat at low
temperature and low pressure and dissipating heat at the much higher condenser pressure
and temperature. In small domestic refrigerators used for food storage, the condenser heat is
dissipated into the kitchen or other room housing the refrigerator. With air-conditioning units
the condenser heat must be dissipated out of doors or directly into cooling water.
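The pressure-temperature pairs quoted for Refrigerant-12 can be collected into a small lookup, which is all a controller conceptually needs: fixing the evaporator pressure fixes the evaporator temperature. Only the three points given in this article are tabulated; a real design would interpolate from full refrigerant property tables.

```python
# Saturation points for Refrigerant-12 quoted in the text: kPa -> deg C
R12_SATURATION_C = {
    132.3: -23.3,  # frozen-food evaporator pressure
    246.2: -6.7,   # domestic evaporator pressure
    909.2: 37.8,   # condenser pressure
}

def saturation_temperature_c(pressure_kpa):
    """Boiling/condensing temperature of R-12 at a tabulated pressure."""
    return R12_SATURATION_C[pressure_kpa]

# Holding the evaporator at 246.2 kPa keeps the refrigerant boiling at -6.7 deg C
```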

In a domestic refrigeration system the evaporator, called the freezer, is always placed in an
insulated space. In some cases this space constitutes the whole refrigerator cabinet. The
compressor is usually oversized, so that if it ran continuously it would produce progressively
lower temperatures. In order to maintain the interior of the box within the desired
temperature range, the motor driving the compressor is controlled by a thermostatic switch.
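The thermostatic switch described above is simple on/off (bang-bang) control with a dead band, so the motor does not rapidly cycle. A sketch, with an assumed setpoint and band, since the text gives no specific values:

```python
def thermostat(temp_c, compressor_on, setpoint_c=4.0, band_c=1.0):
    """On/off control for the compressor motor with a dead band.

    Above the band the oversized compressor runs; below it the motor stops
    (otherwise it would drive the temperature ever lower); inside the band
    the current state is kept, so the motor does not chatter on and off."""
    if temp_c > setpoint_c + band_c:
        return True
    if temp_c < setpoint_c - band_c:
        return False
    return compressor_on

# A warm interior (6 deg C) starts the motor; once chilled to 2 deg C it stops
```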

A frozen-food refrigerator resembles the household refrigerator except that its compressor and
motor must be of sufficient size to handle the larger gas volume of the refrigerant at its lower
evaporator pressure. For example, to maintain a temperature of -23.3° C (-10° F) an
evaporator pressure of 132.3 kPa (19.2 psi) is required with Refrigerant-12.


A few household units, called gas refrigerators, operate on the absorption principle. In such
gas refrigerators a strong solution of ammonia in water is heated by a gas flame in a container
called a generator, and the ammonia is driven off as a vapor, which passes into a condenser.
Changed to a liquid state in the condenser, the ammonia flows to the evaporator as in the
compression system. Instead of the gas being inducted into a compressor on exit from the
evaporator, however, the ammonia gas is reabsorbed in the partially cooled, weak solution
returning from the generator, to form the strong ammonia solution. This process of
reabsorption occurs in a container called the absorber, from which the enriched liquid flows
back to the generator to complete the cycle.

Absorption refrigeration is now increasingly used in refrigeration units for comfort space
cooling, for which purpose refrigerant temperatures of 7.2° to 10° C (45° to 50° F) are
suitable. In this temperature range, water can be used as a refrigerant with an aqueous salt
solution, usually lithium bromide, as the absorbent material. The water vapor, boiling off at a
very low temperature in the evaporator, is absorbed in the concentrated salt solution. This solution is then pumped into the
generator, where, at elevated temperature, the surplus water is boiled off to increase the salt
concentration of the solution; this solution, after cooling, recirculates back to the absorber to
complete the cycle. The system operates at high vacuum at an evaporator pressure of about
1.0 kPa (0.145 psi); the generator and condenser operate at about 10.0 kPa (1.45 psi). The
units are usually direct-fired or use steam generated in a boiler.


Refrigerant-12 and related CFCs, Refrigerant-11 and Refrigerant-22, are currently the major
compounds used in the cooling and insulation systems of home refrigeration units. It has been
found, however, that CFCs are posing a major threat to the global environment through their
role in the destruction of the ozone layer. A search has therefore begun for replacements, and
some manufacturers of CFCs have already pledged to phase out these products by the end of
the century.
Heating, Ventilating, and Air
Conditioning (HVAC)

Heating, Ventilating, and Air Conditioning (HVAC), related processes designed to regulate
ambient conditions within buildings for comfort or for industrial purposes. Heating raises the
temperature of a given space to a more satisfactory level than that of the surrounding atmosphere.
Ventilation, either separately or in combination with the heating or air-conditioning system,
controls both the supply and exhaust of air within given areas in order to provide sufficient
oxygen to the occupants and to eliminate odors. Air conditioning designates control of the
indoor environment year-round to create and maintain desirable temperature, humidity, air
circulation, and purity for the occupants of that space or for the industrial materials that are
handled or stored there.


The heating process may be direct, as from a fireplace or stove in an individual room, or
indirect, as in a central system in which steam, heated water, or heated air passing through
pipes or other ducts transports thermal energy to all the rooms of a building. The earliest
heating system was the open fire with which people warmed their dwellings. Stoves and
braziers of various types that were developed by the ancient Romans are still employed in
some parts of the world.
A Fireplaces

The fireplace was developed as a method of heating rooms by means of an open fire. The first
fireplaces were hearths, recessed into the walls of buildings, with short flues that
communicated with the open air. Fireplaces with chimneys sufficiently high above the roof of
the building to provide adequate draft for the fire were introduced during the 12th century.

Ordinary fireplaces consist of a hearth enclosed on three sides with brick and surmounted by a
completely enclosed chimney or flue that carries away the smoke and other combustion
products of the fire. On the hearth is either a metal grate, raised on legs, or a pair of metal
supports called firedogs or andirons. Grates are used for such fuels as coal, coke, and
charcoal, and andirons are used for wood. These devices promote combustion by permitting
the circulation of air under the fuel.

The useful heat given off by a fireplace consists of both direct radiation from the burning fuel
and indirect radiation from the hot sidewalls and back wall. From 85 to 90 percent of the heat
from the burning fuel is lost in the combustion gases that go up the chimney. Fireplaces are
included in modern houses mainly for aesthetic reasons rather than thermal efficiency. To
improve heating efficiency, however, some modern fireplaces are built with an arrangement of
interior ducts in which cold air from the room is warmed and then recirculated through the
room.

B Stoves

The stove, an enclosure of metal or ceramic materials in which fuel is burned, is an
improvement over the fireplace because its surfaces are in contact with the air of the room
and by convection deliver heat to the air passing over them. An efficient stove delivers about
75 percent of the energy of the burning fuel. The fuels used include wood, coal, coke, peat,
gas, and kerosene.
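The efficiency figures above can be compared directly. The sketch below uses 12.5 percent for the fireplace (the midpoint of the 10 to 15 percent retained after chimney losses) and the 75 percent figure quoted for an efficient stove; the 100 kWh fuel load is an illustrative number, not from the text.

```python
def useful_heat_kwh(fuel_energy_kwh, efficiency):
    """Heat delivered to the room for a given fuel energy and efficiency."""
    return fuel_energy_kwh * efficiency

# Burning fuel containing 100 kWh of energy:
fireplace = useful_heat_kwh(100, 0.125)  # 12.5 kWh kept in the room
stove = useful_heat_kwh(100, 0.75)       # 75.0 kWh from the same fuel
```

On these assumptions, the stove delivers six times the useful heat of the fireplace from identical fuel.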
C Central Heating

Central-heating systems, in which one centrally located heating unit is used to warm several
rooms or an entire house, were developed in the 1800s. A type of centralized heating, using
hot water, was used to a limited extent in Britain about 1816, but the first successful central
system, introduced in 1835, used warm air. This system subsequently came into extensive use
in the U.S. Steam heating was developed about 1850.

Present-day central-heating systems provide heat from a central furnace for a single building
or a group of buildings. In large systems steam or hot water is usually employed to distribute
the heat. Most dwellings are provided with central heat, as are office buildings, hotels, and
even groups of buildings, such as those in shopping malls. The term district heating is applied
to systems in which a large number of buildings are supplied with steam from central boiler
rooms operated by a public utility.

Furnaces for heating systems conventionally are fired with such fuels as oil, gas, or coal. As
the fuel burns, it heats metal surfaces that in turn transfer the heat to water, steam, or even
air in some residential furnaces.

Most furnaces, large and small, are automatically responsive to remote thermostats that
control their operation. Oil- or gas-fired furnaces need only burner controls to regulate the
heat. Furnaces that use solid fuels, however, require the admission of additional fuel to the
system. The removal of ashes from the stoker or grates is also essential. The combustion
firebox and the associated boiler are customarily enclosed in an insulated casing.
The devices generally employed to transfer heat from the heating system to the area to be
warmed are known commonly as radiators and convectors. Ordinary radiators consist of a
series of cast-iron grids or coils having a comparatively large total surface area. The convector
consists of a network of finned steel or nonferrous-metal tubes. These units are placed in
enclosures designed to permit air circulation; thus, heat is provided largely by convection,
rather than by radiation. Stores, warehouses, and factories are often equipped with so-called
unit heaters in which an electric fan or blower forces air through heating coils.

Although heat is provided in part by radiation in all forms of direct heating, the term radiant
heating is applied popularly to systems in which floors, walls, or ceilings are used as the
radiating units. Steam or hot-water pipes are placed in the walls or floors during construction
of the building. If electricity is used for heating, the panels containing heating elements are
mounted on a wall, baseboard, or the ceiling of the room. Radiant heating provides uniform
heat and has a comparatively low cost of operation. Efficiency is high because radiant heat
raises the inside-surface temperature, thereby providing comfort at a lower room-air
temperature than other systems.

C1 Warm-Air Systems

The simplest warm-air heating system consists of a firebox and waste-gas passage set within
a sheet-metal casing, and ducts leading to the various rooms. To ensure natural circulation of
the warm air, which tends to rise, the furnace usually is situated below the first floor of the
house. Cold air, either from within the house or from outdoors, is admitted between the
firebox and the casing and is heated by contact with the hot surfaces of the furnace. Often the
furnace is arranged so the warm air passes over a water pan in the furnace for humidification
before circulating through the house. As the air is heated, it passes through the ducts to
individual grills or registers in each room of the upper floors. The grills or registers can be
opened or closed to control the temperature of the rooms.

The chief problem in this type of system lies in obtaining adequate air circulation. Unless the
warm-air ducts are comparatively large in diameter, slanted upward from the furnace, and
properly insulated to prevent heat losses, the system may not heat a house adequately.

In a forced-circulation system a fan or blower is placed in the furnace casing; such a system
ensures the circulation of a large amount of air even under unfavorable conditions. Dust filters
may be included in the system to ensure the cleanliness of the air. When combined with
cooling, humidifying, and dehumidifying units, forced-circulation systems may be used
effectively for heating and cooling. Forced-circulation warm-air systems are popular for
residential installations, primarily because the same equipment can provide air conditioning
through the year.
C2 Hot-Water Systems

In the first hot-water heating systems the waters of natural hot springs reputedly were used
as a source of heat. Modern systems of this type employ a boiler, in which water is heated to a
temperature of 60° to 82° C (140° to 180° F). The water is then circulated by means of
pipes to radiators located in the various rooms. Circulation of the hot water can be
accomplished by pressure and gravity, but forced circulation using a pump is more efficient
because it provides flexibility and control.

Either one- or two-pipe systems may be used. In the one-pipe system, water is admitted to
each radiator from the supply side of the main pipe, circulates through the radiator, and flows
back into the same pipe. The disadvantage of this arrangement is that the water becomes
increasingly cool as it flows away from the furnace, and hence the radiators farthest from the
furnace must be larger than those nearer the furnace in order to deliver the same amount of
heat. In the two-pipe system all radiators are supplied with hot water at the same
temperature from a single supply pipe, and the water from all the radiators flows back to the
furnace through a common return pipe. The two-pipe system is thus more efficient and easier
to control than the one-pipe system. In both systems an expansion tank is required to
compensate for variations in the volume of water in the system. Closed expansion tanks
contain about 50 percent air, which compresses and expands to compensate for volume
changes in the water.

C3 Steam Systems

Steam-heating systems closely resemble hot-water systems except that steam rather than hot
water is circulated through the pipes to the radiators. The steam condenses in the radiators,
giving up its latent heat (see Heat: Latent Heat). Both one-pipe and two-pipe arrangements
are employed for circulating the steam and for returning to the boiler the water formed by
condensation. Three main types of steam systems are used: air-vent systems, vapor systems,
and vacuum, or mechanical-pump, systems; a subatmospheric type is less commonly used.

The one-pipe air-vent system is an arrangement in which the force of gravity causes the
condensate to flow from the radiator to the boiler in the same pipe through which steam
reaches the radiator. This is the least expensive system to install, but the pipes must be large
to accommodate both the steam and the condensate. Air vents on each radiator permit air to
be forced out of the radiator by the steam during the warm-up period and also during operation.

The vapor system is a two-pipe arrangement in which steam passes into the radiator through
an inlet valve, and air and condensate are delivered to the return pipe by means of a steam
trap on the radiator. The condensate is returned to the boiler, and the air is discharged either
through one central air vent in the basement or, in larger installations, through a vent for each
zone heated by the system. If the system is constructed with tight joints, the rate at which air
reenters the system is so reduced that minimal pressure is required to propel the steam. A
vapor system, although more expensive to install than the one-pipe system, is more
economical because it can be operated on the low-firing cycle of the furnace and thus requires
less fuel.

Vacuum systems resemble vapor systems in that each radiator is equipped with an inlet valve
and a steam trap, but they differ in having a vacuum pump installed in the return piping. With
the pump a partial vacuum is maintained in the system so that the steam, air, and condensate
circulate more readily. The condensate and air return to a central point from which the
condensate is pumped back into the boiler and the air is expelled into the atmosphere. With a
full vacuum system the condensate does not have to be returned by gravity, so radiators can
be situated either above or below the boiler.

C4 Electric Heating

The practice of using electric energy for heating is increasing not only in residences but in
public buildings as well. Electric heating generally costs more than energy obtained from
combustion of a fuel, but the convenience, cleanliness, and reduced space needs of electric
heat can often justify its use. The heat can be provided from electric coils or strips used in
varying patterns—for example, convectors in or on the walls, under windows, or as baseboard
radiation in part or all of a room. Heating elements or wires can even be incorporated in
ceilings or floors to radiate low-temperature heat into a space. The overall cost of electric
heating can be substantially reduced through the use of a heat-pump system.

C5 Heat Pump

A heat pump is a system designed to provide useful heating and cooling, and its actions are
essentially the same for either process. Instead of creating heat, as does a furnace, the heat
pump transfers heat from one place to another. In heating season, a liquid refrigerant, such as
Freon, is pumped through a coil that is outside the area to be heated. The refrigerant is cold,
so it absorbs heat from the outside air, the ground, well water, or some other source,
vaporizing as it does so. The vapor then flows to a compressor, which raises its temperature
and pressure, and on to an indoor coil. There the warmth is radiated or blown into the room or
other space to be heated. The refrigerant, having given up much of its heat, then flows
through a valve where its pressure and temperature are lowered further before it liquefies and
is pumped into the outdoor coil to continue the cycle. To air condition a space, valves reverse
the flow so that the refrigerant picks up heat from inside and discharges it outside. Like
furnaces, most heat pumps are controlled by thermostats.
Most heat pumps use atmospheric air as their heat source. This presents a problem in areas
where winter temperatures frequently drop below freezing, making it difficult to raise the
temperature and pressure of the refrigerant. For economical heating performance, the
delivered heat should amount to more than twice the heat purchased from the power source.
Heat-pump systems are now being used extensively not only in residences but also in
commercial buildings and schools.
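The rule of thumb above, that delivered heat should exceed twice the purchased energy, is usually expressed as a coefficient of performance (COP) greater than 2. A minimal sketch, using hypothetical figures for illustration:

```python
def coefficient_of_performance(heat_delivered_kw, electric_input_kw):
    """Units of heat delivered per unit of electricity purchased (COP)."""
    return heat_delivered_kw / electric_input_kw

# Hypothetical case: 7 kW of heat delivered for 2.5 kW of electricity.
cop = coefficient_of_performance(7.0, 2.5)  # 2.8
economical = cop > 2.0  # meets the rule of thumb stated above
```

A furnace, by contrast, can never exceed a COP of 1, since it creates heat from fuel rather than moving existing heat indoors.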

C6 Solar Heating

During each sunlight hour of the day approximately 0.9 kw per sq m (280 Btu per hour per sq
ft) of solar energy reaches the surface of the earth. The actual energy received varies with
time of day, time of year, latitude, clarity of the atmosphere, and the direction relative to the
sun that an absorbing surface faces at any given time. This energy can often be more than
enough to heat a well-designed building, provided enough solar absorbing surface can be
installed and enough heat storage is made available to carry the building during periods of
darkness and inclement weather. A common method employed uses roof panels with built-in
water circuits. The water, heated by the sun, then flows into insulated tanks or pools located
elsewhere in the house; this water becomes a source of heating for the house. In colder
climates, a supplementary heat source for the water is usually provided. A number of such
systems are in successful operation, particularly in areas where the weather is not severely
cold. Proper placement of the glazing in any house can also greatly reduce the heating need
from fuels or electric power in winter.
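The 0.9 kW per square meter figure gives a rough way to size a collecting surface. The sketch below assumes a 50 percent collector efficiency and a 9 kW heating demand; both are illustrative assumptions, not figures from the text.

```python
SOLAR_FLUX_KW_PER_M2 = 0.9  # peak clear-sky figure cited above

def collector_area_m2(heat_demand_kw, efficiency=0.5, flux=SOLAR_FLUX_KW_PER_M2):
    """Panel area needed to meet a heating demand while the sun shines.

    The 50 percent collector efficiency is an illustrative assumption,
    not a manufacturer's specification.
    """
    return heat_demand_kw / (flux * efficiency)

area = collector_area_m2(9.0)  # 20 square meters of panels for a 9 kW demand
```

In practice the actual flux varies with latitude, season, and weather, as the text notes, so real installations also need the storage capacity described above.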

D Portable Heating Units

Houses lacking central-heating systems are equipped with various types of portable and
semiportable heating devices, many of which can be moved from room to room as needed.
The most common types are kerosene stoves and electric heaters. The usual kerosene stove is
made of sheet metal and contains one or more wick burners that heat metal flues within the
stove. Kerosene stoves heat both by radiation and by convection, drawing in cool air through
vents in the bottom of the stove and emitting heated air from top vents. Known generally as
space heaters, large stoves of this general pattern can provide adequate heat for several
rooms. Kerosene stoves should be used with adequate outside ventilation because combustion
gases can be harmful. The simplest electric heaters are radiant heaters having a resistance-
heating unit in front of a reflector, which concentrates the radiant heat into a narrow beam.
Some radiant heaters include a fan, which circulates air around the heating unit, thus warming
by convection as well as by radiation. Another type consists of a plate or tube of heat-resistant
glass or quartz in which resistance wires are embedded. The entire plate or tube is warmed by
the wires and gives off radiant heat. Because the heater has no incandescent wires, it is safer
to use.
Electric-steam radiators are used to supplement other heating systems. These radiators are
miniature steam-heating systems in which an electrical-heating unit generates enough steam
to warm a small conventional radiator partially filled with water. No pipe connections are
necessary, and the units can be moved from place to place and plugged into any electrical
outlet. Radiators filled with oil that is heated electrically are also available.


Buildings in which people live and work must be ventilated to replenish oxygen, dilute the
concentration of carbon dioxide and water vapor, and minimize unpleasant odors. A certain
amount of air movement or ventilation ordinarily is provided by air leakage through small
crevices in the building's walls, especially around windows and doors. Such haphazard
ventilation may suffice for homes, but not for public buildings such as offices and theaters, or
for factories.

Factory ventilation systems must remove hazardous airborne contaminants from the
workplace. Nearly all chemical processes generate hazardous waste gases and vapors, and
these must be removed from the workplace environment in a cost-effective manner. Chemical
engineers, in particular, are involved in ventilation design for factories and refineries.

Engineers estimate that for adequate ventilation the air in a room should be changed
completely from one and a half to three times each hour, or that about 280 to 850 liters
(about 10 to 30 cu ft) of outside air per minute should be supplied for each occupant.
Providing this amount of ventilation usually requires mechanical devices to augment the
natural flow of air.
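The two sizing rules quoted above can be applied directly; a system must satisfy whichever requirement is larger. The room dimensions, occupant count, and the 570 L/min midpoint of the 280 to 850 L/min range below are illustrative assumptions.

```python
def airflow_by_changes(room_volume_m3, changes_per_hour):
    """Required airflow in litres per minute for a given air-change rate."""
    return room_volume_m3 * 1000 * changes_per_hour / 60

def airflow_by_occupancy(occupants, litres_per_min_each=570):
    """Required airflow using a per-occupant allowance (midpoint of 280-850 L/min)."""
    return occupants * litres_per_min_each

# A 4 m x 5 m x 2.5 m room (50 cubic meters) changed twice per hour,
# occupied by three people:
by_volume = airflow_by_changes(50, 2)  # about 1,667 L/min
by_people = airflow_by_occupancy(3)    # 1,710 L/min
required = max(by_volume, by_people)
```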

Simple ventilation devices include fans or blowers that are arranged either to exhaust the stale
air from the building or to force fresh air into the building, or both. Ventilating systems may be
combined with heaters, filters, humidity controls, or cooling devices. Many systems include
heat exchangers. These use outgoing air to heat or cool incoming air, thereby increasing the
efficiency of the system by reducing the amount of energy needed to operate it.


Theoretically, an air-conditioning system consists of centralized equipment that provides an
atmosphere with controlled temperature, humidity, and purity at all times, regardless of
weather conditions. In popular usage, however, the term air conditioning often is applied
improperly to air cooling. Many so-called air-conditioning units consist merely of blower-
equipped refrigerating units that provide only a flow of cool, filtered air.
A number of manufacturing processes, such as those used in the production of paper, textiles,
and printed matter, require air conditioning for the control of conditions during manufacture.
Air conditioning of this kind usually is based on adjusting the humidity of the circulated air.
When dry air is required, it is usually dehumidified by cooling or by dehydration. In the latter
process it is passed through chambers containing adsorptive chemicals such as silica gel. Air is
humidified by circulation through water baths or sprays. When air must be completely free of
dust, as is necessary in the manufacture of certain drugs and medical supplies, the air-
conditioning system is designed to include some type of filter. The air is passed through water
sprays or, in some filters, through a labyrinth of oil-covered plates; in others, dust is removed
electrostatically by means of precipitators (see Electrostatic Precipitator).

Centralized air-conditioning systems, providing fully controlled heating, cooling, and
ventilation, as required, are employed widely in theaters, stores, restaurants, and other public
buildings. Such systems, being complex, generally must be installed when the building is
constructed; in recent years, these systems have increasingly been automated by computer
technology for purposes of energy conservation. In older buildings, single apartments or suites
of offices may be equipped with a refrigerating unit, blowers, air ducts, and a plenum chamber
in which air from the interior of the building is mixed with outside air. Such installations are
used for cooling and dehumidifying during the summer months, and the regular heating
system is used during the winter. A smaller apparatus for cooling single rooms consists of a
refrigerating unit and blower in a compact cabinet that can be mounted in a window.

The design of an air-conditioning system depends on the type of structure in which the system
is to be placed, the amount of space to be cooled, the number of occupants, and the nature of
their activity. A room or building with large windows exposed to the sun, or an indoor office
space with many heat-producing lights, requires a system with a larger cooling capacity than
an almost windowless room in which cool fluorescent lighting is used. The circulation of air
must be greater in a space in which the occupants are allowed to smoke than in a space of
equal capacity in which smoking is prohibited. In homes or apartments, most of the cooled or
heated air can be recirculated without discomfort to the occupants; but in laboratories or
factories employing processes that generate noxious fumes, no air can be recirculated, and a
constant supply of cooled or heated fresh air must be provided.

Air-conditioning units are rated in terms of effective cooling capacity, which properly should be
expressed in kilowatt units. Usage still supports the term ton of refrigeration, which denotes
the amount of heat that must be absorbed to melt a ton of ice in 24 hours: 12,000 Btu per hour,
equal to 3.5 kw. A Btu is the amount of heat removed from 1 lb (0.45 kg) of water when its
temperature is lowered by 1° F (5/9° C).

Horsepower ratings were formerly used for small air conditioners, but the term is misleading
because a horsepower (or 0.746 kw) represents work power and not cooling. It came into use
because under usual summer conditions a motor of one horsepower could support 3.5 kw of
cooling, the equivalent of a ton of refrigeration.
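The conversions among these ratings follow directly from the definitions above. The sketch below uses the article's rounded figure of 3.5 kW per ton (the exact value is about 3.517 kW); the example capacities are illustrative.

```python
BTU_PER_HOUR_PER_TON = 12_000
KW_PER_TON = 3.5  # rounded figure used in the text

def tons_to_kw(tons):
    """Cooling capacity in kW for a rating in tons of refrigeration."""
    return tons * KW_PER_TON

def btu_per_hr_to_tons(btu_per_hr):
    """Tons of refrigeration for a capacity given in Btu per hour."""
    return btu_per_hr / BTU_PER_HOUR_PER_TON

kw = tons_to_kw(2)                 # 7.0 kW of cooling
tons = btu_per_hr_to_tons(18_000)  # 1.5 tons
```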

Microsoft ® Encarta ® Reference Library 2005. © 1993-2004 Microsoft Corporation. All rights reserved.

Photography, method of picture making developed in the early 19th century, based on
principles of light, optics, and chemistry. The word photography comes from Greek words and
means “drawing with light.” Photographs serve as scientific evidence, conveyers of news,
historical documents, works of art, and records of family life. Millions of people around the
world own cameras and enjoy taking pictures; every year more than 10 billion exposures are
made with still cameras.

This article discusses how photographs are produced using film, cameras, and lenses. It also
outlines techniques of modern photography, such as filtration and electronic flash, and surveys
how photographic technologies have evolved since the medium's invention. For information on
the history of photography and its artistic practice, see History of Photography. For
information on motion picture technology and history, see Motion Pictures; History of Motion Pictures.


Light is the most essential ingredient in photography. Nearly all forms of photography are
based on the fact that certain chemicals are photosensitive—that is, they change in some way
when exposed to light. Photosensitive materials abound in nature; plants that close their
blooms at night are one example. The films used in photography depend on a limited number
of chemical compounds that darken when exposed to light. The compounds most widely used
today are silver halide crystals, which are salts consisting of silver and chemicals called
halogens (usually bromine, chlorine, or iodine).

For the purpose of producing a photograph, these silver salts are distributed in gelatin to make
a mixture called an emulsion, which is applied to film or another supporting material in a thin
layer. When the emulsion is exposed to light, the silver halide crystals undergo chemical
changes and, after further processing, an image becomes visible. The stronger the light that
strikes the crystals, the denser or more opaque that part of the film becomes. Most types of
film produce a negative image, from which a positive final copy can be printed on sensitized
paper. The dense (or dark) areas of the negative translate into light areas on the final
photograph. Almost all modern photography relies on this negative-to-positive process.

In most cases the camera and its lens determine the appearance of the photographic image.
Cameras work on the basic principle of the camera obscura, a device that artists once used to
project a temporary image of something they wanted to draw. In both the camera obscura
and the modern camera, light passes through a lens fitted into an otherwise lightproof box.
Light passing through the lens casts an image of the camera’s subject—the object, person, or
scene in front of the camera—onto the inside of the box, which in a modern camera contains
film. The camera and lens control how much light strikes the film in what is called an exposure.

The purpose of the lens is refraction, the bending of light. The camera’s glass or plastic lens
bends the light rays reflected from the subject so that these rays cross and reappear upside-
down on the other side of the lens. The area where they re-form an image of the subject
inside the camera is called the plane of focus. The photographer, or an automatic mechanism
in some cameras, must adjust the distance between the lens and the film so that the plane of
focus falls exactly where the film lies, making the resulting image appear in focus.

Various types of lenses admit different amounts of light and permit different angles of view.
Lenses that take in a wide angle of view make the subject seem farther away; lenses that take
in a narrow angle make the subject seem magnified. The photographer can switch a modern
zoom lens from wide to narrow angles of view by turning a collar or pressing a button.

The amount of light that a lens allows to fall on the film is controlled by a lens diaphragm, a
mechanism built of overlapping metal blades. The diaphragm controls the size of the aperture,
or circular opening of the lens. A device called a shutter controls how long light strikes the
film; the shutter speed can range from a small fraction of a second (1/1000 or less) to
minutes or even hours.
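The aperture and shutter work together: the light reaching the film scales with the shutter time and inversely with the square of the f-number (the conventional measure of aperture size). A small sketch, with illustrative settings not taken from the text:

```python
def relative_exposure(f_number, shutter_seconds):
    """Relative amount of light reaching the film (arbitrary units)."""
    return shutter_seconds / (f_number ** 2)

# f/4 at 1/100 s admits the same light as f/8 at 1/25 s: closing the
# aperture by two stops is offset by a two-stop-longer shutter time.
a = relative_exposure(4, 1 / 100)
b = relative_exposure(8, 1 / 25)
```

This reciprocity is why photographers can trade a smaller aperture (greater depth of field) against a longer shutter time (more motion blur) while keeping the same overall exposure.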

The combination of choices that a photographer makes—film type, camera size, focus, angle
of view, lens aperture, shutter speed—influences the appearance of the photograph as much
as the choice of subject and the time of day. To take one example, thousands of people have
stood in the same spot to take photographs of the Grand Canyon over the years, but their
photographs look different because the photographers made different choices with these elements.


Modern film consists of a transparent material, usually acetate, which has been coated with
one or more light-sensitive emulsions. It is available in a variety of shapes and sizes
determined by the format of the camera. Typical formats are 35-millimeter and 6-centimeter
roll films, 4-by-5 and 8-by-10 inch sheet films, and most recently, Advanced Photo System
(APS), a type of roll film that incorporates various conveniences for amateur photographers.
Within each film format there is a range of film types (black and white, color print, or color
transparency) and sensitivity levels, called film speeds, that are appropriate for different
lighting conditions.

A A Brief History of Film

Scientists recognized the photosensitivity of certain silver compounds, particularly silver
nitrate and silver chloride, during the 18th century. In the early 19th century English scientists
Thomas Wedgwood and Sir Humphry Davy used silver nitrate in an attempt to transfer a
painted image onto leather or paper. While they succeeded in producing a negative image, it
was not permanent; the entire surface blackened after continued exposure to light.

A French inventor, Joseph Nicéphore Niépce, is credited with having made the first successful
photograph in 1826. He achieved this by placing a pewter plate coated with bitumen, another
light-sensitive material, in the back of a camera obscura. Niépce later switched from pewter to
copper plates and from bitumen to silver chloride. French painter Louis Jacques Mandé
Daguerre continued Niépce’s pioneering work and in 1839, after Niépce's death, announced an
improved version of the process, which he called the daguerreotype.

The daguerreotype process produced a detailed, positive image on a shiny copper plate small
enough to be held in the hand. Daguerreotypes remained popular through the 1850s, but were
eventually replaced by a negative/positive process. English inventor William Henry Fox Talbot
devised this process and perfected it in the 1840s. Talbot’s process produced a paper
negative, from which he could produce any number of paper positives. He exposed silver-
sensitized paper briefly to light and then treated it with other chemicals to produce a visible
image. Beginning in 1850 glass replaced paper as a support for the negative, and the silver
salts were suspended in collodion, a thick liquid. The smooth glass negatives could produce
sharper images than paper ones, because the details were no longer lost in the texture of the
paper. This refinement became known as the wet collodion process.

Because the wet collodion (or wet plate) process required photographers to coat the glass
support just before taking a picture, experimenters sought a dry version of the same process.
Dry plates, pieces of glass coated in advance with an emulsion of gelatin and silver bromide,
were invented in 1878. A few years later American inventor George Eastman devised a flexible
version of this system, a long paper strip that could replace the glass plate. In 1889 he
improved on this by using a type of plastic called celluloid instead of paper, producing the first
photographic film. Eastman's invention paved the way for all modern films, which are made of
acetate or polyester, plastics that are less flammable than celluloid.

Except for some isolated experiments, color films were not invented until the 20th century.
The first commercially successful material for making color photographs, called Autochrome,
became available in 1907 and was based on a process devised by French inventors Auguste
and Louis Lumière. But the era of color photography did not really begin until the advent of
Kodachrome color film in 1935 and Agfacolor in 1936. Both of these films produced positive
color transparencies, or slides. The Kodak company introduced Kodacolor film for color
negatives in 1942, which gave amateurs the same negative/positive process they had long
enjoyed in black and white.

B How Film Works

To understand how film works, it is first necessary to understand a few things about light.
Light is the visible portion of a broad range of energy called electromagnetic radiation, which
also includes invisible energy in the form of radio waves, gamma rays, X rays, and infrared
and ultraviolet radiation. The narrow band of electromagnetic waves that the human eye can
detect is called the visible spectrum, which we see as colors. Our eyes perceive the longest
wavelengths as red, the shortest as violet, with orange, yellow, green, and blue in between.
(A rainbow or a prism shows all the colors of the visible spectrum.)
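The ordering of colors by wavelength can be made concrete. The band boundaries below are approximate textbook values in nanometres, included here only as an illustration; they are not from the text.

```python
# Rough visible-spectrum bands in nanometres (approximate values).
BANDS = [
    (380, 450, "violet"),
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
    (590, 620, "orange"),
    (620, 750, "red"),
]

def color_name(wavelength_nm):
    """Name the perceived color for a wavelength, per the bands above."""
    for low, high, name in BANDS:
        if low <= wavelength_nm < high:
            return name
    return "outside the visible spectrum"

color_name(680)  # "red" -- among the longest visible wavelengths
color_name(400)  # "violet" -- among the shortest
```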

B1 Dyes and Emulsions

Photographic films vary in the way they react to different wavelengths of visible light. Early
black-and-white films were sensitive to only the shorter wavelengths of the visible spectrum,
primarily to light perceived as blue. So, for example, in a picture of blue, red, and orange
flowers, the blue flowers would appear too light, whereas the red and orange flowers would
look unrealistically dark. To correct this, specialized compounds called dye sensitizers were
incorporated into the emulsion.

Today, with a few specialized exceptions, films are sensitive to all colors of the visible
spectrum. Even black-and-white films record colors as different shades of gray. Most color
films are coated with three emulsions, typically cyan (a greenish blue), yellow, and magenta
(a purplish red). Each emulsion responds to only one color of light and is coupled with a dye
layer, which produces the actual color that resembles what the eye sees. Other layers act as
filters to screen the light these emulsions receive, and to prevent light from scattering within
the film.

Color transparency films produce a positive color image for viewing with the help of a slide
projector or an illuminated surface called a light table. These films are also known as reversal
films because the initial developed image is chemically reversed during processing, turning
what would otherwise be a negative image into a positive one. The dyes in some brands of
transparency film are added during development; in others, they are built into the film itself.

Color negative films, also known as print films, produce positive prints. As with some
transparency films, dyes built into the emulsion chemically react with the silver salts that form
the image. The colors on the processed negative are the complements of the colors in the
original scene. For instance, if you took a picture of the Ecuadorian flag, which is red, yellow,
and blue, the colored dyes on the negative would be a blue-green color called cyan (the
complement of red), blue (the complement of yellow), and yellow (the complement of blue).
When light shines through this negative onto color-sensitive print paper, the colors return to
positive form, and the print shows a flag properly striped with red, yellow, and blue.
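The complement relationship in the flag example can be modeled as inverting 8-bit RGB channels, an illustrative simplification of the underlying dye chemistry:

```python
def complement(rgb):
    """Complementary color, modeled as inverting each 8-bit RGB channel."""
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

complement((255, 0, 0))    # red    -> (0, 255, 255), cyan
complement((255, 255, 0))  # yellow -> (0, 0, 255), blue
complement((0, 0, 255))    # blue   -> (255, 255, 0), yellow
```

Applying the function twice returns the original color, which mirrors how printing through the negative restores the subject's true colors.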

One type of black-and-white film, called chromogenic film, makes use of color-film technology
to produce a negative that has just a single dye layer. When exposed to conventional black-
and-white photo paper, the negative provides an image nearly identical to that of conventional
black-and-white film. But when exposed to paper for printing color photographs it produces an
image composed of different shades of a single color.

B2 Positive/Negative Development

When film is processed in a chemical agent called a developer, large particles of metallic silver
form in areas of the film that were exposed to light. Exposure to lots of light causes many
particles to form, while exposure to dim light or exposure for a very short time causes just a
few particles to form. The resulting image produced on the film is called a negative because
the tonal values of the subject photographed are reversed; areas in the subject that were dark
appear light on the negative, and areas that were bright appear dark. The tonal values of the
negative are reversed again in the printing process—or in the case of transparencies (slides),
during the development of the film—creating a positive image.

Chromogenic color films, in which the dye is built in, exhibit dye images rather than silver
images, although silver is also essential to the process. During processing, the chemical action
of the developer creates initial images in metallic silver, just as in black-and-white processing.
But in color processing the developer also stimulates dye couplers (chemicals that react to a
specific color of light and cause corresponding dyes to be released) to form cyan, magenta,
and yellow dye images. The silver is then removed, leaving a negative image in the three
colors. Different combinations of those colors create the more complex colors visible on the
final print. In color transparency films, unexposed silver halide crystals that are not converted
to metallic silver and washed away during the initial development remain to be converted
during a second development. As these remaining silver halides are converted to metal, they
again combine with dye couplers to form the final color image, before the second layer of
metallic silver is also washed away.

Photographic print papers are constructed much like films, but generally require fewer layers.
The so-called paper support (today, commonly made of plastic or paper) is coated with a light-
sensitive emulsion, just as films are. Black-and-white papers have a single layer of emulsion;
color papers have at least three layers. When these papers are exposed to light shone through
a negative, the end result is a positive.

C Film Characteristics

Certain characteristics help people determine which film will work best in a particular situation.
Films may vary in their sensitivity to different kinds of light and in their ability to record fine
details or quickly moving subjects.

C1 Sensitivity and Color Balance

Most films now in use are panchromatic, meaning that they respond to all colors of light and
can record each color’s relative strength with a fair degree of accuracy. Color films also must
be designed to respond to the specific quality or energy of light illuminating the scene, which
may be outdoor sunlight, incandescent lamps, or electronic flash.

Each of these kinds of light has a distinct characteristic referred to as color temperature. While
the theory of color temperature is complicated, the practical concept is simple: color films are
balanced to perform best in specific lighting conditions. So-called daylight films, the most
widely used, are designed for both outdoor photography and pictures taken indoors with
electronic flash. Tungsten films are designed to be used indoors without flash, specifically with
certain types of bulbs, called photofloods, manufactured for such situations.

Distinguishing between daylight and tungsten film types is important mainly with transparency
or slide films, which produce direct positive images that cannot be altered. The color in print
films, which produce negatives, can be adjusted during printing to compensate for different
lighting conditions. Nonetheless, because print films are balanced for daylight, pictures from
them often have an orange cast when taken indoors without flash. All color films will produce
pictures with unpleasant green or purple casts when taken indoors under fluorescent light, as
in an office. For more information about eliminating color casts, see the Filtration section of
this article.

C2 Exposure Latitude

In any lighting situation there is an optimal exposure that will produce a perfect image on film.
Film exposed to light for longer than the optimal time is said to be overexposed and produces
prints that look pale and washed out. Too short an exposure and the image is underexposed,
appearing dark and muddy, with insufficient contrast between dark and light.
Every film has a characteristic exposure latitude, a range of settings within which it can
accurately render the color and tonal values (contrasts of light and dark) of the subject
photographed. With films that have a narrow exposure latitude, the margin for error is small;
an exposure adjusted for a shady area is likely to result in overexposure of adjacent sunny
areas. The wider a film's latitude, the greater its ability to provide satisfactory prints or slides
in a range of lighting conditions.

Films that produce negatives generally offer much greater latitude than transparency films. In
addition, many high-speed films have a greater exposure latitude than slower films. Staying
within a given film’s exposure latitude can ensure an acceptable range of tones in the picture.
But to achieve the best-possible image quality, including full detail throughout the picture, the
exposure time and aperture size need to be precisely set to fit the lighting conditions.

C3 Speed and Grain

Film is also classified by speed, a rating that provides a measure of the film’s sensitivity to
light. For each film, this rating determines the amount of exposure required to photograph a
subject under a given lighting condition. The manufacturer of the film assigns it a standardized
numerical rating known as the ISO number, set by the International Organization for
Standardization. High ISO numbers correspond to highly light-sensitive, fast films, and low
numbers to less sensitive, slow films.

Today, slow-speed films typically have a rating between ISO 25 and ISO 100, but films that
are even slower exist. Films in the ISO 125 to ISO 200 range are considered medium speed,
while films above ISO 200 are considered fast. A photographer can push the limits of a film by
overriding the recommended exposure for that film speed and shortening the exposure time.
With some cameras the photographer will need to adjust the ISO number manually; with other
cameras, setting an exposure compensation dial will trick the camera into making this
adjustment. The photographer must then make sure that the development time is
lengthened to compensate for the underexposure.
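The arithmetic of pushing is straightforward: each doubling of the ISO rating represents one stop less exposure, to be made up in extended development. A brief sketch, assuming ideal behavior:

```python
import math

def push_stops(box_speed, rated_speed):
    """Stops of underexposure when film is 'pushed' to a higher ISO rating.
    Each doubling of the rating costs one stop of exposure."""
    return math.log2(rated_speed / box_speed)

# e.g. ISO 400 film exposed as if it were ISO 1600:
print(push_stops(400, 1600))  # 2.0 stops underexposed
```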

Whether fast or slow, all films exhibit a pattern called grain. Film grain is the visible trace of
the metallic silver that forms the image. The individual grains of silver are generally larger and
more obvious in faster film than in slower film. For this reason, photographs taken with slow-
speed film appear less grainy, especially when enlarged. Because of the small size of its silver
halide grains, slow-speed film generally has a higher resolution—that is, it renders fine details
with greater sharpness. Slow-speed film also produces a smoother range of tones and more
intense colors than fast film. Despite these advantages, slow films are not as desirable as fast
films in certain situations, such as when photographing a rapidly moving subject.

C4 DX Coding

DX coding is a recent innovation in film and camera technology that eliminates the need to set
the film speed by hand in the camera's built-in exposure meter. On cartridges of 35-millimeter
film, manufacturers print a checkerboard pattern that corresponds to an electronic code. This
code tells the camera’s computer the ISO rating of the film as well as the number of frames on
the roll. Most cameras with electronic controls are equipped with DX sensors that can read this
information and automatically adjust exposures accordingly. The DX code is also placed on the
film itself to inform the developing laboratory of this information.

D Color Films in Use Today

A range of color film types is available to photographers. These types include color print films;
reversal films, used to make color slides and larger transparencies; Polaroid films, which
develop into prints without additional processing; and a number of specialty films such as X-
ray and infrared.

D1 Print Films

Color print films, which produce prints through the classic negative-to-positive process,
include such brand names as Kodacolor, Fujicolor, and Agfacolor. Ideal for amateur use, they
are designed to provide excellent color rendition out of doors and with electronic flash. Each
manufacturer supplies its brand in several speeds: ISO 100, 200, and 400 are the most
common. Films are available in several sizes, or formats, including the popular 35-millimeter
format (in which a single frame of the film is 35 millimeters wide). Manufacturers also offer
premium films in most formats, which provide better color and smaller grain size.

D2 Slide Films

Kodachrome, Ektachrome, Fujichrome, and Agfachrome are examples of films that produce
35-millimeter slides and larger transparencies. Both daylight and tungsten versions of these
films are generally available. Manufacturers also design films for such specific tasks as slide
duplication. Film speeds of slide films commonly range from a very slow ISO 25 to a very fast
ISO 3200.

D3 Polaroid

In 1947 American physicist Edwin Herbert Land invented the Polaroid process, a type of
photography that produces prints almost immediately after exposure. Although the process
takes one or more minutes, it was quickly dubbed instant photography. Today Polaroid films
are available in both black-and-white and color, for both special Polaroid cameras and for
standard-format cameras (see Polaroid Corporation).

The processing chemicals and conventional silver halide emulsions in instant film are combined
in a self-contained paper envelope or within the print itself. A chemical diffusing agent
transfers the negative image to the paper, producing a print. Older Polaroid films use a system
in which the negative peels away from the final print. Polaroid SX-70 film, on the other hand,
has no separate negative, and users can watch the image develop before their eyes.

D4 Infrared, X-ray, and Special Films

Some special-purpose films are sensitive to wavelengths beyond the visible spectrum of light.
Infrared film responds to the invisible, infrared portion of the spectrum in addition to visible
light. Film manufacturers also design specialized emulsions for medical and scientific films that
respond to X rays and other forms of electromagnetic radiation.


The most important tool of photography is the camera itself. Basically, a camera is a lighttight
box with a lens on one side and light-sensitive film on the other. Improvements in camera
technology over the years have given photographers more control over the quality of their
photographs.

A A Brief History of Cameras

Today’s cameras all derive from the 16th-century camera obscura. The earliest form of this
device was a darkened room with a tiny hole in one wall. Light entered the room through this
hole and projected an upside-down image of the subject onto the opposite wall. Over the
course of three centuries the camera obscura evolved into a handheld box with a lens
replacing the pinhole and an angled mirror at the back. The mirror reflected an image onto a
ground-glass viewing screen on the top of the box. Long before film was invented artists used
this device to help them draw more accurately. They placed thin paper onto the viewing
screen and could easily trace the reflected image.

The inventors of photography in the early 19th century adapted the camera obscura by adding
a device for holding sensitized plates in the back of the box. This kind of camera, with some
improvements, was used throughout the 19th century. One notable enhancement for the box,
pleated leather sides called bellows, allowed the photographer to easily adjust the distance
between the lens and the plane of focus. Professional photographers still use a similar camera
today, a large-format camera known as the view camera.

In the 1880s the invention of more sensitive emulsions and better lenses led to the
development of lens shutters, devices that could limit the time of exposure to a fraction of a
second. At first the shutter was simply a blind dropped in front of the lens by the force of
gravity, or by a spring. Later designs featured a set of blades just behind the optical lens. In
1888 George Eastman introduced the first Kodak camera, which used a cylindrical shutter that
the photographer turned by pulling a string on the front of the camera. The Kodak was one of
the earliest handheld cameras. It made photography available to amateurs for the first time
and created a snapshot craze at the turn of the 20th century.

In 1925 the Leitz Company in Germany introduced the Leica, one of the first cameras to use
35-millimeter film, a small-sized film initially designed for motion pictures. Because of its
compactness and economy, the Leica and other 35-millimeter cameras became popular with
both amateur and professional photographers. All but the earliest Leicas used a focal-plane
shutter, located just in front of the film. Because it blocks light from the film even when the
lens is removed, the focal-plane shutter allows photographers to switch lenses safely in the
middle of a film roll.

B Modern Camera Types

Cameras come in a variety of forms. Whereas cameras once required many decisions on the
part of photographers, most of today’s cameras offer a range of automated features that
greatly simplify picture taking and reduce the likelihood of error.

B1 Box Cameras

The Eastman Kodak Company introduced one of the first box cameras in 1888, and the
simplicity of this easy-to-use design has assured its popularity ever since. Box cameras consist
of a rigid box or body; a fixed, simple lens; a viewfinder window, through which the
photographer looks to frame the scene; and a shutter with one or possibly two speeds. On
most box cameras, the lens is set to an aperture and focus that produce reasonably sharp
pictures of a subject at least 2 m (about 6 ft) away, when the camera is used outdoors in the
sun. But because these settings are not adjustable, the photographer can do little to control
the results.

The modern-day equivalents of the old Kodak box cameras are the disposable cameras now
sold at drugstores and tourist shops. These cardboard-covered, plastic cameras come loaded
with 35-millimeter color print film. After taking a roll of pictures, the user turns over the entire
camera to a processing lab for development. Manufacturers now reuse or recycle many of the
parts inside these cameras. Single-use cameras are also available in several advanced
models—offering built-in flash, a waterproof body, or the ability to show panoramic views in
extra-wide prints.

B2 View Cameras

View cameras are larger and heavier than most amateur cameras but allow for maximum
precision in focus, aperture, and framing. They use large-format films, which are able to
capture far greater detail than 35-millimeter films. The body configuration of the view camera,
unlike that of most general-purpose cameras, is extremely adjustable. It has two
independently moveable elements that ride on a track: The front element holds the lens and
shutter, the rear holds a ground-glass panel, and the space in between is enclosed in an
expandable leather bellows. The photographer frames and focuses the scene that appears in
the glass panel at the back, then inserts a film holder in front of the glass, and takes the
picture. The gap in time between framing and exposure makes the view camera useless for
action shots, but it is ideal for carefully arranged studio shots, landscapes, or architectural
photography. The photographer can shift, tilt, raise, or swing the front and rear elements
separately, allowing for great variation in perspective and focus.

B3 Rangefinder Cameras

Rangefinder cameras were the first cameras to have an optical viewfinder—that is, a separate,
window-like lens through which the photographer sees and frames the subject. The viewfinder
is paired with an adjacent window called a rangefinder. To focus the camera, the photographer
adjusts a ring or collar until the two views appear as one, at which point the camera has set
the focus to precisely match the distance of the subject. Since the viewfinder window does not
show the scene through the lens, but only one that closely approximates it, rangefinder
cameras can be inaccurate for framing close-up shots.

Rangefinder cameras were once very popular with amateur photographers, but today’s point-
and-shoot cameras have largely replaced them. Nevertheless, the modern rangefinder camera
works well under certain circumstances, and some professionals still use it. Rangefinders are
available in two formats, for use with either 35-millimeter film or the larger format 6-
centimeter film. Unlike point-and-shoot cameras, modern rangefinders feature lenses that can
be removed from the camera body so that photographers can choose a lens specifically suited
to the subject.

B4 Point-and-Shoot Cameras

The most popular camera type today is the point-and-shoot camera. It has a number of
automatic features that make it practically foolproof to operate while producing pictures of
high quality. Point-and-shoot cameras feature battery-operated electronic systems that may
include automatic controls for exposure, focusing, flash, film winding, and film rewinding. They
are available with a fixed single-focal-length lens or a zoom lens; the lenses cannot be
removed from the body. The cameras work with all types of 35-millimeter film; some also use
a newer film type called Advanced Photo System (APS). (For more information, see the Recent
Developments: APS section of this article.)

B5 Single-Lens-Reflex Cameras

With the single-lens-reflex (SLR) camera, the photographer uses a single lens for both viewing
the scene and taking the picture. Light comes through the lens onto a mirror, which then
reflects it through a five-sided prism into the viewfinder. The mirror is hinged; at the moment
the photographer snaps the picture, a spring automatically pulls the mirror out of the path
between lens and film. Because of this system, the image recorded on the film is almost
exactly what the photographer sees in the viewfinder, a great advantage in many picture-
taking situations.

Most SLRs are precision electronic instruments equipped with fast focal-plane shutters, precise
automatic exposure systems, and built-in flash controls. Increasingly, camera manufacturers
are producing SLRs with automatic focusing, an innovation originally reserved for less
sophisticated cameras.

C Modern Camera Features

Modern cameras feature several components to help photographers control their results under
widely varying conditions. In today’s cameras many of these features are automated.

C1 Viewfinders

A viewfinder enables photographers to frame their subject the way they would like it to appear
in the finished photograph. Some viewfinders consist of a simple window on top of the camera
that only approximates the view through the lens. A more complex and more accurate
viewfinding system is the single-lens-reflex system, described above.

C2 Shutters

The shutter, a spring-activated mechanical device, keeps light from entering the camera
except during the interval of exposure. Most modern cameras have focal-plane or leaf
shutters. The focal-plane shutter consists of a black shade with a variable-size slit across its
width. When released, the shade moves quickly across the film, exposing it progressively as
the slit moves. In the leaf shutter, at the moment of exposure, a cluster of meshed blades
springs apart to uncover the full lens aperture and then springs shut.

C3 Built-in Meters and Automatic Exposure

For early photographers, setting the correct aperture and shutter speed for an exposure was
essentially an educated guess. But with the development of handheld photoelectric exposure
meters in the 1930s, photographers were able to take precise readings of the light level and
adjust the exposure accordingly. By the 1960s camera companies had begun to build exposure
meters right into the camera body; such systems typically required the user to center a needle
over a pointer inside the viewfinder. In the 1980s this process became automated: With built-
in electronics, the camera could adjust itself to produce an appropriate exposure. Today all but
the most inexpensive cameras feature such a system of automatic exposure.

C4 Autofocusing

Autofocus cameras use electronics and a small computer processor to automatically measure
the distance between camera and subject and from this determine the exact plane of focus.
The computer then signals a small mechanism that turns the lens barrel to this point.

There are two widely used methods for determining the focus automatically, called active and
passive. An active autofocus system, used in most point-and-shoot cameras, emits either an
infrared light beam or high-frequency (ultrasonic) sound waves. When the light or sound waves
bounce off the subject and return to the camera, they give an accurate reading of the distance
to that subject. Passive systems, used in more sophisticated cameras, automatically adjust the
focus of the lens until sensors detect that maximum contrast has been reached inside a
rectangular target at the center of the focusing screen. The point of maximum contrast
corresponds to the point of greatest sharpness.
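The passive method amounts to a search for maximum contrast. The sketch below uses an invented one-dimensional "image" and a crude neighbor-difference metric purely for illustration; real cameras use dedicated sensors and far more refined measures:

```python
def contrast(pixels):
    """Toy contrast metric: sum of absolute differences between neighbors."""
    return sum(abs(a - b) for a, b in zip(pixels, pixels[1:]))

# Invented 1-D "images" of the same edge at five focus positions;
# position 2 is sharpest (an abrupt black-to-white edge).
focus_sweep = [
    [90, 100, 110, 130, 150, 160],   # badly out of focus: soft ramp
    [60, 80, 110, 150, 180, 200],    # closer to focus
    [0, 0, 10, 245, 255, 255],       # in focus: hard edge
    [40, 60, 110, 150, 190, 210],    # past focus
    [90, 100, 115, 135, 150, 160],   # out of focus again
]
# The focus position with maximum contrast is taken as sharpest.
best = max(range(len(focus_sweep)), key=lambda i: contrast(focus_sweep[i]))
print("sharpest focus position:", best)  # 2
```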

Neither method is foolproof. If the primary subject is off to one side of the frame, for example,
most autofocusing systems will ignore it. Active systems can be fooled by window glass, which
interrupts their beams. Passive systems require a certain amount of detail—usually there must
be discernible lines present in the target zone for this system to determine maximum contrast
in the subject. A passive system would have trouble setting the correct focus, for instance, for
a photograph in which the plain white sails of a boat took up the center of the frame.

C5 Film Loading and Transport

Most people today buy film in the form of lighttight cartridges or cassettes that they can insert
into the camera in daylight; only professional photographers using sheet films still need to
load their cameras in the dark. With 35-millimeter film, the user attaches a leader extending
from the cartridge to a spool at one side of the camera, then drops the cartridge into a slot on
the other side. Automatic cameras wind the film into position when the back is closed and
rewind the exposed film into the cartridge when all exposures have been taken. With older
cameras, the user must use a crank to rewind the film.

Most cameras now automatically advance the film to the next frame after an exposure has
been made. Some cameras come with a motor drive, a more rapid way of advancing the film.
Motor drives allow the photographer to snap a sequence of exposures in rapid succession while
holding a finger on the shutter-release button; as many as three to five pictures per second
can be taken this way.


The lens is the eye of the camera. Its function is to bring light from the subject into focus on
the film. A camera can have a single lens or a complex set of lenses. Together with the
shutter, the lens controls the amount of light that enters the camera.

A A Brief History of Lenses

The modern camera’s predecessor, the camera obscura, consisted of a simple pinhole in the
side of a room or box. In the 17th century people discovered they could produce a brighter,
sharper image by fitting a camera obscura with a convex (outward-curving) lens. The first
such lens came from a pair of eyeglasses. Over the next 300 years, interest in telescopes and
microscopes led to the development of better and brighter lenses.

With the invention of photography in the 19th century, the need for camera-specific lenses
increased, leading to rapid developments in the field of lens making. These developments took
place along two fronts: The first was the invention of new types of glass that refracted light
more effectively; and the second was the discovery of ways to combine several pieces of
glass, or elements, to control optical distortion.

Quality modern lenses are made of many individual elements of ground and polished glass (6
to 14 elements is common). These elements, each of a different shape and purpose, are
cemented into groups; each group is then assembled in what is called a lens barrel. On a
manually controlled camera, the lens barrel incorporates an aperture ring and a focusing ring.
By turning the aperture ring, the photographer adjusts the opening of the lens diaphragm,
which determines how much light reaches the film. The focusing ring is used to focus the
image on the film plane by changing the distance between the element groups.

B Focal Lengths

Camera lenses are categorized according to their focal lengths and maximum apertures. The
longer the focal length, the larger the image inside the camera will be. The greater the size of
the aperture, the more light the lens will admit. Focal length is the distance from the optical
center of the lens to the image formed inside the camera. Because this distance varies
depending on how the camera is focused, focal length ratings are defined by measuring the
distance when the focusing ring is set for photographing a distant subject (indicated on the
focusing ring with the symbol ∞, called infinity). A lens with a short focal length is commonly
called a wide-angle lens; with a long focal length, a telephoto lens. Lenses that approximate
the angle of view of the human eye are called normal lenses.

Focal length determines the magnification and angle of view of the image. With the camera in
a fixed position, objects photographed with a wide-angle lens will seem farther away than with
a normal lens; seen through a telephoto lens, the same objects will seem closer (and closer
together). The wide-angle can take in a broader angle of view than the eye can see, while the
telephoto narrows this view.
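The link between focal length and angle of view follows from simple trigonometry. The sketch below assumes the 36-millimeter horizontal width of a standard 35-millimeter frame:

```python
import math

def angle_of_view(focal_mm, frame_mm=36.0):
    """Horizontal angle of view in degrees, from the frame width and
    focal length (assumes focus at infinity)."""
    return math.degrees(2 * math.atan(frame_mm / (2 * focal_mm)))

# Wide-angle, normal, and telephoto lenses on a 35-mm camera:
for f in (28, 50, 200):
    print(f"{f} mm lens: {angle_of_view(f):.1f} degrees")
# approx. 65.5, 39.6, and 10.3 degrees respectively
```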

The zoom lens offers a range of focal lengths, and is one of the most popular types of lenses
today. The user can change the focal length by simply pushing a button or turning a ring on
the lens barrel. So-called true zooms maintain focus while changing the focal length; this
allows photographers with single-lens-reflex cameras to focus precisely at high magnification
before framing the picture at a different focal length. Another type of zoom lens, the varifocal
lens, must refocus as the focal length changes—a disadvantage only if the camera does not
offer automatic focusing.

C Macro Lenses

Some photographic subjects require task-specific optics. The most common specialized task is
close-up photography, for subjects ranging from flowers to coins. To cope with these small
subjects, macro lenses were developed for single-lens-reflex cameras. Macro lenses for 35-
millimeter cameras extend the focusing range to a matter of inches. On their own they can
reproduce an object on film at one-half its actual size; with the addition of an extension ring,
the camera can picture an object at life size.

Many modern zoom lenses come with a macro setting that allows a limited range of close-up
focusing. However, these are no substitute for a true macro lens because, at best, they only
reproduce an object at one-fifth its actual size. Extension rings or simple close-up lenses also
can attach to a normal lens to allow close-ups. Magnification of a subject to greater than its
actual size calls for more specialized equipment, such as a microscope, and is called
photomicrography.

D Aperture

The lens diaphragm controls the size of the aperture, or lens opening, and thus the amount of
light that passes through the lens. It operates in conjunction with the shutter. The aperture
size is measured by numerical settings called f-stops. On a traditional, manually controlled
camera the f-stops are inscribed on an adjustable ring that fits around the lens. Typical f-stops
are f/2, f/2.8, f/4, f/5.6, f/8, f/11, and f/16. The setting f/2 represents a large aperture, f/16 a
small aperture. With simple automatic-exposure cameras, a computer sets the aperture size;
thus the aperture ring has disappeared from many of today's lenses.
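Because the f-number is the ratio of focal length to aperture diameter, the light admitted falls off as 1/N², and each step in the standard series halves it. A short sketch:

```python
# Relative light admitted at each standard f-stop is proportional to 1/N^2;
# successive stops differ by a factor of the square root of 2.
stops = [2, 2.8, 4, 5.6, 8, 11, 16]
for n in stops:
    print(f"f/{n}: relative light = {1 / n**2:.4f}")

# f/2 admits four times the light of f/4 (two full stops):
print((1 / 2**2) / (1 / 4**2))  # 4.0
```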

Lenses come with a rating for their maximum aperture, indicating how much light can reach
the film when the lens diaphragm is wide open. With single-lens-reflex cameras, the maximum
aperture also influences how bright the image appears in the viewfinder. Within lens types, a
lens with a large maximum aperture will have a larger diameter and weigh more than a lens
with a smaller aperture. A telephoto lens requires a larger lens diameter and greater length to
let in the same amount of light as a normal or wide-angle lens. Like telephoto lenses, zoom
lenses are also physically large. To reduce their bulkiness and complexity, many
manufacturers now design zoom lenses with a variable maximum aperture: The size of the
aperture changes as the focal length of the lens goes from wide-angle to telephoto settings.

E Focusing

Technically, film captures only one plane of a picture in perfect focus. However, in practice we
call a picture “in focus” when it appears reasonably sharp at a given magnification and viewing
distance. Until recently photographers had to bring an image into focus manually, by turning a
ring or a focusing collar on the camera lens. But most of today's cameras with built-in lenses
will adjust the lens automatically, through use of a mechanism connected to an autofocusing
sensor. Cameras with interchangeable lenses still have focusing collars to allow for manual
adjustment. Most lenses will focus from a few feet in front of the camera to a point in the far
distance, called infinity.

F Depth of Field

To help determine what will appear in focus in a picture, photographers make use of a concept
called depth of field. This term refers to a zone of focus—that is, the area between the closest
and farthest objects that will appear sharply focused in the photograph. A picture with a
deeper zone of focus might be a landscape in which both the trees in the foreground and the
mountains in the background appear in sharp focus. A picture with a shallow depth of field
might be a close-up portrait, in which objects in the background are purposely blurred.

The factors that determine depth of field are lens aperture, focusing distance, and focal length.
All other factors being equal, depth of field will be greatest when photographing a distant
subject, using a short focal length (wide-angle) lens, and a small aperture. Conversely, depth
of field will be most shallow when photographing a subject at close range, using a long focal
length (telephoto) lens, with a wide aperture.

A photographer using a single-lens-reflex camera or view camera can judge the approximate
depth of field by looking directly through the lens with the aperture set to the desired f-stop.
In cameras with removable, manually adjusted lenses, a depth-of-field scale shows the
approximate sharp-focus zone for the different aperture settings.
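These trade-offs can be made concrete with the standard hyperfocal-distance formula (focused at that distance, everything from half of it to infinity appears acceptably sharp). The circle-of-confusion value below is a conventional assumption for 35-millimeter film:

```python
def hyperfocal_mm(focal_mm, f_stop, coc_mm=0.03):
    """Hyperfocal distance in millimeters: H = f^2 / (N * c) + f.
    A 0.03 mm circle of confusion is a common assumption for 35-mm film."""
    return focal_mm**2 / (f_stop * coc_mm) + focal_mm

# Short focal length + small aperture -> deep depth of field:
print(hyperfocal_mm(28, 16) / 1000)    # ~1.7 m: nearly everything is sharp
# Long focal length + wide aperture -> very shallow depth of field:
print(hyperfocal_mm(200, 2.8) / 1000)  # ~476 m
```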

Automatic cameras are designed to focus precisely on a single subject at the center of the
frame or, in more sophisticated designs, to focus on a band of details across the central
picture area. In most cases, the photographer locks in the focus by pressing the shutter
button part way. For capturing the image of a moving subject, certain cameras with motor
drives will adjust the focus continuously while the photographer tracks the subject.

Focusing precisely on a central subject, however, does not necessarily provide the greatest
depth of field. With manual focusing, photographers can obtain the maximum depth of field by
turning the focusing collar until the infinity sign aligns with the outside depth-of-field mark for
the f-stop they have chosen. A variant of this manual-focusing technique is called zone
focusing: The photographer chooses an aperture and a focusing distance that together cover
the range of distances at which the subject is likely to appear. Zone focusing is especially
useful for candid photography.

G Lens Hoods and Coatings

One of the worst enemies of photographers is flare, unwanted light that enters the lens and
causes strange reflections and a loss of contrast on the film. Flare is especially obvious when
photographing with the sun in front of or just to the side of the lens. To decrease the incidence
of flare, photographers can shade the front of the lens with a collar called a lens hood that
prevents sunlight from striking the glass surfaces. Hoods for zoom lenses are less effective
because they must angle away from the lens enough to accommodate the lens's widest angle
of view.

Lens makers also combat the more subtle effects of flare by coating the exterior and interior
surfaces of the lens’s glass elements with thin layers of reflection-absorbing material. These
coatings enhance the contrast of the film image and account for the characteristic green and
purple hues visible when one looks into the front of a modern lens.


All light-sensitive photographic materials—film or photographic print paper—produce their
finest results when given the optimum exposure. Precise exposure, coupled with consistent
development, is the technical key to excellent photographs.

A photographer can change the amount of exposure the film receives by adjusting either the
shutter speed or the aperture setting. A one-stop change in shutter speed is equivalent to an
aperture change of one f-stop, and vice versa. Thus, for a given lighting situation several
different combinations of f-stop and shutter speed result in the same amount of light hitting
the film.

For example, an exposure of f/5.6 at 1/15 second allows the same amount of light to strike
the film as an exposure of f/2.8 at 1/60 second—the aperture is two stops larger, but the
speed is two stops faster. The exposures are thus equivalent, but they produce different
pictorial results. If the photographer is holding the camera by hand, the second option is
preferable, because at speeds below 1/60 second, movement of the camera or of the subject
is likely to blur the image. If the photographer is using a tripod to hold the camera still and
photographing a still subject, the first option may be preferable because the smaller aperture
provides greater depth of field.
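The equivalence of these f-stop and shutter-speed combinations can be checked with the standard exposure-value formula. This is a sketch, and the function name is illustrative:

```python
import math

def exposure_value(f_number, shutter_seconds):
    """EV = log2(N^2 / t); settings with the same EV let the
    same total amount of light reach the film."""
    return math.log2(f_number ** 2 / shutter_seconds)

# f/5.6 at 1/15 second and f/2.8 at 1/60 second, from the example above:
ev_a = exposure_value(5.6, 1 / 15)
ev_b = exposure_value(2.8, 1 / 60)
# ev_a and ev_b are equal: the two settings give the same exposure
```

Each one-stop change in aperture halves or doubles the light, exactly offsetting a one-stop change in shutter speed, which is why the doubling-of-density rule in the next paragraph applies equally to either control.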

When film is developed according to the manufacturer's specifications, every stop of increase
in the exposure (one step up in either f-stop or shutter speed) effectively doubles the density
of the negative. For example, an exposure at f/5.6 for 1/15 second produces twice the density
of an exposure at the same f-stop for 1/30 second, and therefore a print made from it will be
twice as light, unless the print exposure time is doubled. However, there are limits to this
relationship, called reciprocity, between exposure and density. At the extremes of very little
and very great amounts of exposure, this rule is less consistent and the resulting images will
be noticeably underexposed.

A Light Metering

To help photographers determine the ideal exposure, and to help them avoid the problems
associated with extremely high or extremely low exposure levels, manufacturers introduced
photoelectric exposure meters in the 1930s. At first these meters were independent, handheld
devices; later they were incorporated into the camera itself, with a sensor measuring the light
as it came through the lens. The final development was automatic exposure, in which the
camera uses data from its built-in exposure meter to automatically adjust the shutter speed
and lens aperture.

All metering systems share one principle: They respond to the world as if it were a uniform
shade of gray. This shade (called 18 percent gray for its ratio of reflection) represents the
average amount of light reflected by an average outdoor subject. In most situations, basing
the exposure on this average reading produces ideal results: the negative receives just the
right amount of light.

The meters built into modern cameras are called reflected light meters: They measure the
amount of light reflected into the lens by the subject. (Another type, the incident-light meter,
measures the light that is falling on the scene or subject.) Most of these devices are also
called averaging meters because they read a broad angle of the scene; those that read a
narrow angle are called spot meters. Averaging meters provide somewhat less accuracy than
spot meters but are easy to use. Spot meters give very precise readings, but the
photographer must know how to correctly interpret these readings.

Newer, more sophisticated single-lens-reflex cameras try to increase the accuracy of their
automatic-exposure systems with what is called a multipattern metering system. This type of
system measures the light coming through the lens from several different areas within the
picture frame. It then compares the results to a computerized formula to determine the best
overall exposure. Based on the data gathered, these meters try to guess the kind of picture-
taking situation at hand and compensate for some problems, such as an overly bright sky.

Despite all the advances in exposure technology, meter readings are not foolproof. For
example, neither very dark nor very light skin tones reflect 18 percent of the light, so portrait
photographers have to adjust their exposures to compensate. In backlit conditions, when a
person is surrounded by a bright background, most meters will recommend too little exposure.
Likewise, if the main subject is a snowman in a field of snow, automatic exposure systems will
assume that the snow is an average shade of gray and underexpose it.

B Development and Exposure

Perfectly exposed film will produce imperfect pictures if it is not developed properly. By the
same token, development can be adjusted to compensate for certain variations in exposure.
For example, a roll of ISO 100 slide film exposed by mistake at a rating of ISO 200 can be
pushed—that is, have its development time extended during processing to produce reasonable
results. Lengthening development time lightens the resulting images, which otherwise would
appear too dark.

In black-and-white photography, it is common to adjust the exposure and development of
each picture individually to compensate for varying contrast conditions. If the lighting is harsh,
resulting in high contrast between light and dark areas, a sophisticated photographer might
overexpose the negative and then shorten its development time to subdue the harsh light.
This technique is often used in large-format, view camera photography and is the foundation
of the method used by American wilderness photographer Ansel Adams.

C Long and Short Exposure Times

Most films are intended for use at shutter speeds between 1/2 and 1/1000 second. At
significantly slower or faster speeds the reciprocity, or one-to-one relationship between
exposure and image density, fails. Pictures taken at either very fast or very slow speeds tend
to result in underexposed images. With color films, the colors may also shift.

Exposure meters do not compensate for reciprocity effects; instead, the photographer must
compensate by manually adjusting the exposure according to charts supplied by the film
manufacturer. With black-and-white film, development times must also be increased.

For some fast-moving subjects—such as the wings of a hummingbird in flight or a golf club as
a golfer swings it—even a shutter-speed setting of 1/1000 second is not sufficient to capture
the image in focus. Flash photography can produce an effect equivalent to shorter exposure
times. Special electronic flash units are able to limit the duration of their light output to as
little as 1/100,000 second.

D Flash Photography

In the absence of adequate daylight, photographers use artificial light to illuminate scenes,
both indoors and outdoors. The most commonly used sources of artificial illumination are
electronic flash, tungsten lamps called photofloods, and quartz lamps. Another once-popular
light source, the flashbulb, a disposable bulb filled with oxygen and a mass of fine magnesium
alloy wire, is now obsolete.

Flash units vary in size from small, battery-powered, camera-mounted units to large studio
units that plug into an electric wall socket. Generally speaking, the larger the unit, the greater
the intensity of light produced. Camera-mounted flashes are adequate for snapshots of family
and friends, but to illuminate a large scene evenly and with a single burst of light, a powerful
studio unit is needed.

An electronic flash unit consists of a glass quartz tube filled with an inert gas—usually xenon.
When a brief jolt of electricity is applied to the electrodes sealed at the ends of the tube, the
gas produces an intense burst of light of very short duration. The process can be repeated
thousands of times, sometimes in rapid succession, without wearing out the tube. Most flash
exposures last from 1/1000 to 1/5000 second, although a duration of 1/100,000 second is
now readily available. In 1931 the inventor of the electronic flash, American engineer Harold
Eugene Edgerton, developed an electronic strobe light (see Stroboscope) with which he
produced flashes of 1/500,000 second, enabling him to capture the image of a bullet in flight.

Flash units are designed either as part of the camera mechanism or as accessories. Some
designs, called dedicated flash units, are made for use with a particular camera model and
have circuitry that sets the shutter speed and illuminates a light in the viewfinder when the
tube is ready to fire again. Setting the shutter speed is important because the shutter and the
flash need to be synchronized—that is, the shutter must be open for the duration of the flash.
In cameras with a focal-plane shutter (this includes most commonly used cameras), the
maximum speed at which synchronization is possible is usually 1/125 second.

Modern dedicated flash units, as well as built-in camera units, contain automatic flash
systems. They have a sensor that determines the appropriate amount of light from the flash
tube, depending on the aperture set on the lens. This sensor is commonly located inside the
camera, where it can gauge the amount of light at the film plane. Before automatic flash was
invented, it was not possible to adjust the flash output; photographers could control the
exposure only by adjusting the aperture.

Flash aimed directly at the subject usually produces harsh, flat lighting. When photographing
people or animals in very dim conditions, using flash also causes a condition known as red
eye, making the centers of the subject’s eyes appear red. With some flash units it is possible
to achieve more pleasant results indoors by aiming the flash at the ceiling. As light bounces
from ceiling to subject, it produces a softer, more even light and eliminates red eye.

Flash can also be used in daylight to fill in foreground areas where shadows may be too
strong. For this type of picture, the exposure generally should be set to half of what would be
required for the existing light. This technique, called fill-flash, lightens shadows without
overriding the main source of light. The color temperature of electronic flash is practically the
same as daylight so the two light sources do not produce noticeable color differences.

E Filters

Filters added to the front of a camera lens change the quantity or quality of the light that
reaches the film. Made of gelatin or glass, filters may alter the color balance of light, change
contrast or brightness, minimize haze, or create special effects. In black-and-white
photography, color filters transmit light of one color while blocking light of a contrasting color.
In a landscape photograph taken with a red filter, for example, much of the blue light of the
sky is blocked, causing the sky to appear darker and thereby emphasizing clouds. A yellow
filter produces a less extreme effect because more blue light is transmitted to the film. The
medium-yellow filter is often used for outdoor black-and-white photography because it renders
the tone of a blue sky in much the same way as the human eye does.

Another type of filter, called a conversion filter, changes the color balance of light when it is
radically different from that of the film. Tungsten films, for example, are balanced for use
indoors with light from photofloods or incandescent lightbulbs. Exposed in daylight, they
produce pictures with a bluish cast. A series 85 conversion filter can correct this. Daylight film,
which is balanced for sunlight at noon, has a yellow-amber cast when exposed indoors under
incandescent light or photofloods. A series 80 conversion filter corrects this problem. Similar
to conversion filters are light-balancing filters, which can adjust tungsten film designed for one
type of artificial light to work with a second type of artificial light.

Color-compensating (CC) filters help balance fluorescent light for daylight film or indoor
(tungsten) film. Photographers also use CC filters to make small changes in color rendition on
the film or when printing in the darkroom. Some professional transparency films require CC
filtration as a matter of course.

Skylight, or ultraviolet (UV), filters are familiar amateur accessories. They filter ultraviolet
light, which is invisible to humans but which can register on film as blue. Screwed into the end
of a lens, a UV filter eliminates most of the excess blue that appears in distant landscape
photographs and secondarily provides a transparent protective cap for the lens.

A polarizing filter reduces reflections from the surfaces of shiny subjects such as windows. In
color photography, polarizing filters also produce more intense colors.

All filters reduce the amount of light reaching the lens to some degree—with a polarizing filter
the reduction can amount to two stops or more of exposure. All such reductions, called filter
factors, must be calculated into manual exposures. With automatic exposure, which measures
the light after it has come through the lens, filter factors are less relevant, but they still
require slower shutter speeds or larger apertures.
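The arithmetic of filter factors can be sketched in a few lines, assuming the factor is expressed in stops (the function name is illustrative):

```python
def compensated_shutter(base_shutter_s, filter_stops):
    """Lengthen the exposure time to make up for a filter's light loss.
    Each stop of loss doubles the required exposure time."""
    return base_shutter_s * 2 ** filter_stops

# A polarizing filter costing two stops of light:
slow = compensated_shutter(1 / 125, 2)   # 0.032 s, roughly 1/30 second
```

The same compensation could instead be made by opening the aperture two f-stops, following the exposure equivalence described earlier.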


A darkroom is a room for processing photography materials. It must completely seal out light
from outside the room. In the early days of the medium, many photographers traveled with
portable darkrooms, which were housed in horse-drawn wagons or carried by servants. Today
many people have a home darkroom built in their basement, laundry room, or closet.

A darkroom is divided into a dry side and a wet side. The dry side is used for loading,
enlarging, and preparation; the wet side contains a sink with temperature-controlled running
water, and is used for the chemical processing of films and prints. Because many processing
chemicals are toxic, certain precautions are necessary: the darkroom should have an exhaust
fan to expel fumes and dust, and the photographer should always wear latex gloves when
handling wet materials and a dust mask when mixing powdered chemicals with water.

During the process of exposing and developing black-and-white printing paper, a special
orange-colored light bulb called a safelight can provide some illumination. But during the
processing of black-and-white films, color films, and color printing papers, the darkroom must
be totally dark, because these materials are panchromatic—that is, they are sensitive to
light of all colors.

In the home darkroom, film is customarily developed in a lighttight tank, which holds metal
reels onto which the exposed film has been wound. Photographers make prints with an
enlarger, an upright device that functions much like a camera except that it contains its own
light source. The enlarger light shines through the negative, the enlarger lens focuses this
light, and a large image of the negative projects onto the printing paper, which sits on a flat
easel at the base of the enlarger.

A Developing the Film

Photographers develop film by treating it with an alkaline chemical solution called a developer.
This solution reactivates the process begun by the action of light when the film was exposed.
It encourages large grains of silver to form around the minute particles of metal that already
make up the latent (not yet visible) image.

As large particles of silver begin to form, a visible image develops on the film. The density of
silver deposited in each area depends on the amount of light the area received during
exposure. In order to arrest the action of the developer, photographers transfer the film to a
solution called the stop bath, which chemically neutralizes the developer. After rinsing the
film, they apply another chemical solution to the negative image to fix it—that is, to remove
residual silver halide crystals unexposed to light. The solution used for this process is
commonly referred to as hypo, or fixer.

After a short rinse, a fixer remover, or hypo-clearing agent, is applied to clear any remaining
fixer from the film. The film must then be thoroughly washed in water, as residual fixer tends
to destroy negatives over time. Finally, bathing the processed film in a wetting agent promotes
uniform drying and prevents the formation of water spots or streaks.

B Printing the Photos

Photographers produce prints by either of two methods: contact or projection. The contact
method works for making prints of exactly the same size as the negative. Using this method,
they place the emulsion side of the negative in contact with the printing material and expose
the two together to a source of light. Photographers with 35-millimeter cameras commonly
use this method to print what is called a contact sheet, which shows all the exposures from a
single roll of film in small size.

For projection printing, photographers first place the negative in the enlarger and place a
piece of sensitized printing material on the flat easel at its base. Switching on the enlarger
light source projects an enlarged image of the negative onto the paper. An aperture on the
enlarging lens controls the exposure, along with a timer connected to the enlarger light. The
exposure commonly lasts from ten seconds to a minute. By blocking part of the light source
with hands or small tools, the photographer can reduce or increase the amount of light falling
on selected portions of the image, thus lightening or darkening those areas in the final print.
This technique is known as dodging when used to lighten an area and as burning when making
it darker.

For either printing process, prints are made on sheets of paper or plastic that have been
coated with a light-sensitive emulsion. This coating is similar to that used for film but is much
less sensitive to light. After exposing the print, the photographer can then develop and fix the
positive image by a process very similar to that used for developing film. To process black-
and-white prints, the paper is usually placed in a series of open trays; for color prints, a drum
or automatic roller processor is preferred.


The technology of photography continues to develop rapidly. Electronic technologies have not
only changed the way that most cameras work, but are changing photography in such
fundamental ways that the distinction has begun to blur between photography and other
image-making systems, such as computers and the graphic arts.


A Advanced Photo System

In the early 1990s the Eastman Kodak Company introduced a new line of cameras and film
designed for amateur photographers. Called the Advanced Photo System (APS), this
technology challenges conventional 35-millimeter photography on several fronts. APS film is
easier to load, since the APS film cartridge has no leader to thread into a take-up spool. And
APS cameras magnetically encode information onto the exposed film that automated
photofinishing machines can read. According to Kodak, this results in a higher percentage of
well-exposed prints than with 35-millimeter processing. And although APS film is of a smaller
format than 35-millimeter film, it is capable of results that nearly match the precision and
sharpness of the older format.

Soon after Kodak’s introduction of APS, other film and camera makers also adopted the
system; dozens of APS cameras are now available, including several single-lens-reflex models.
However, the target market for APS remains the point-and-shoot camera user. In comparison
to 35-millimeter point-and-shoot models, APS cameras are slightly smaller and lighter.

One of the biggest differences between APS and conventional photography is that
photographers can have their pictures processed conventionally or have them scanned onto a
compact disc (CD) for use with a computer. APS is not a digital photography system; unlike
digital systems, which are explained in the next section, APS employs well-established color
film technology, including silver halides and dye couplers.

B Digital Photography

Digital photography is a method of making images without the use of conventional
photographic film. Instead, a machine called a scanner records visual information and converts
it into a code of ones and zeroes that a computer can read. Photographs in digital form can be
manipulated by means of various computer programs. Digital photography was widely used in
advertising and graphic design in the late 1990s, and was quickly replacing conventional
photographic technology in areas such as photojournalism.

Digital cameras are now available for both professional photographers and amateur
enthusiasts. The more expensive professional cameras function as sophisticated 35-millimeter
cameras but record the picture information as pixels, or digital dots of color (see Computer
Graphics). There can be several million pixels in a high-resolution, full-color digital
photograph. Some digital cameras are able to transfer their large picture files directly into a
computer for storage. Others accept a disc or similar portable storage unit to achieve the
same purpose. The original high-resolution image can later be reproduced in ink (in a
magazine, for example) or as a conventional silver halide print.

Digital cameras aimed at the amateur photography market function much as point-and-shoot
cameras do, with automatic focus, automatic exposure, and built-in electronic flash. Pictures
from these cameras contain fewer pixels than those from a more expensive camera and are
therefore not as sharp. After taking pictures, the user can connect the camera directly to a
television set or video cassette recorder, so the whole family can look at snapshots together.
Alternatively, image files can be transferred to a home computer, stored on disks, or sent to
friends via electronic mail.

Electricity, one of the basic forms of energy. Electricity is associated with electric charge, a
property of certain elementary particles such as electrons and protons, two of the basic
particles that make up the atoms of all ordinary matter. Electric charges can be stationary, as
in static electricity, or moving, as in an electric current.

Electrical activity takes place constantly everywhere in the universe. Electrical forces hold
molecules together. The nervous systems of animals work by means of weak electric signals
transmitted between neurons (nerve cells). Electricity is generated, transmitted, and
converted into heat, light, motion, and other forms of energy through natural processes, as
well as by devices built by people.

Electricity is an extremely versatile form of energy. It can be generated in many ways and
from many different sources. It can be sent almost instantaneously over long distances.
Electricity can also be converted efficiently into other forms of energy, and it can be stored.
Because of this versatility, electricity plays a part in nearly every aspect of modern
technology. Electricity provides light, heat, and mechanical power. It makes telephones,
computers, televisions, and countless other necessities and luxuries possible.


Electricity consists of charges carried by electrons, protons, and other particles. Electric charge
comes in two forms: positive and negative. Electrons and protons both carry exactly the same
amount of electric charge, but the positive charge of the proton is exactly opposite the
negative charge of the electron. If an object has more protons than electrons, it is said to be
positively charged; if it has more electrons than protons, it is said to be negatively charged. If
an object contains as many protons as electrons, the charges will cancel each other and the
object is said to be uncharged, or electrically neutral.

Electricity occurs in two forms: static electricity and electric current. Static electricity consists
of electric charges that stay in one place. An electric current is a flow of electric charges
between objects or locations.

Static electricity can be produced by rubbing together two objects made of different materials.
Electrons move from the surface of one object to the surface of the other if the second
material holds onto its electrons more strongly than the first does. The object that gains
electrons becomes negatively charged, since it now has more electrons than protons. The
object that gives up electrons becomes positively charged. For example, if a nylon comb is run
through clean, dry hair, some of the electrons on the hair are transferred to the comb. The
comb becomes negatively charged and the hair becomes positively charged. The following
materials are named in decreasing order of their ability to hold electrons: rubber, silk, glass,
flannel, and fur (or hair). If any two of these materials are rubbed together, the material
earlier in the list becomes negative, and the material later in the list becomes positive. The
materials should be clean and dry.
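The ordering described above can be encoded directly. This is a toy sketch; the list of materials and their order come from the text:

```python
# Materials in decreasing order of their ability to hold electrons
SERIES = ["rubber", "silk", "glass", "flannel", "fur"]

def rub(a, b):
    """Return (negative, positive) materials after two items from the
    series are rubbed together: the material earlier in the list
    gains electrons and becomes negatively charged."""
    if SERIES.index(a) < SERIES.index(b):
        return a, b
    return b, a

rub("glass", "fur")   # glass becomes negative, fur positive
```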

A Charging by Contact

Objects become electrically charged in either of two ways: by contact or by induction.

A charged object transfers electric charge to an object with lesser charge if the two touch.
When this happens, a charge flows from the first to the second object for a brief time. Charges
in motion form an electric current. When charge flows between objects in contact, the amount
of charge that an object receives depends on its ability to store charge. The ability to store
charge is called capacitance and is measured in units called farads.

Charging by contact can be demonstrated by touching an uncharged electroscope with a
charged comb. An electroscope is a device that contains two strips of metal foil, called leaves,
that hang from one end of a metal rod. A metal ball is at the other end of the rod. When the
charged comb touches the ball, some of the charges on the comb flow to the leaves, which
separate because they now hold like charges and repel each other. If the comb is removed,
the leaves remain apart because they retain their charges. The electroscope has thus been
charged by contact with the comb.

This flow of charge between objects with different amounts of charge will occur whenever
possible. However, it requires a pathway for the electric charge to move along. Some
materials, called conductors, allow an electric current to flow through them easily. Other
materials, called insulators, strongly resist the passage of an electric current.

Under normal conditions, air is an insulator. However, if an object gains a large enough charge
of static electricity, part of the charge may jump, or discharge, through the air to another
object without touching it directly. When the charge is large enough, the air becomes a
conductor. Lightning is an example of a discharge.

B Coulomb’s Law

Objects with opposite charges attract each other, and objects with similar charges repel each
other. Coulomb’s law, formulated by French physicist Charles Augustin de Coulomb during the
late 18th century, quantifies the strength of the attraction or repulsion. This law states that
the force between two charged objects is directly proportional to the product of their charges
and inversely proportional to the square of the distance between them. The greater the
charges on the objects, the larger the force between them; the greater the distance between
the objects, the lesser the force between them. The unit of electric charge, also named after
Coulomb, is equal to the combined charge of 6.24 × 10¹⁸ protons (or electrons).
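Coulomb's law can be written out numerically. This sketch uses the standard value of the Coulomb constant, k ≈ 8.99 × 10⁹ N·m²/C², which is not given in the text:

```python
K = 8.99e9  # Coulomb constant, in N*m^2/C^2 (standard value, assumed)

def coulomb_force(q1, q2, r):
    """Force in newtons between point charges q1 and q2 (in coulombs)
    separated by r metres. A positive result means repulsion (like
    charges); a negative result means attraction (opposite charges)."""
    return K * q1 * q2 / r ** 2

# Two charges of +1 microcoulomb each, 10 cm apart:
f_near = coulomb_force(1e-6, 1e-6, 0.1)   # about 0.9 N of repulsion
# Doubling the distance cuts the force to one quarter:
f_far = coulomb_force(1e-6, 1e-6, 0.2)
```

The inverse-square dependence is visible directly: `f_near` is exactly four times `f_far`.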

If two charged objects in contact have the same capacitance, they divide the charge evenly.
Suppose, for example, that one object has a charge of +4 coulombs and the other a charge of
+8 coulombs. When they touch, charge will flow from the 8-coulomb object to the 4-coulomb
object until each has a charge of +6 coulombs. If each object originally had a charge of +6
coulombs, no charge would flow between them.

If two objects have different capacitances, they divide the charge in proportion to their
capacitances. If an object with a capacitance of 10 farads touches an object with a capacitance
of 5 farads, the 10-farad object will end up with twice the amount of charge of the 5-farad
object. Suppose that the objects are oppositely charged and that one has a charge of +20
coulombs and the other a charge of -8 coulombs. Their total charge is therefore +12
coulombs. After they touch, the 10-farad object will have a charge of +8 coulombs and the 5-
farad object will have +4 coulombs.
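The charge-sharing arithmetic above can be verified in a few lines. The sketch rests on the fact that charge divides until both objects reach the same potential, V = Q/C:

```python
def share_charge(q1, c1, q2, c2):
    """Final charges (coulombs) after two objects touch: the total
    charge divides in proportion to capacitance (farads), so both
    objects end at the same potential V = Q_total / C_total."""
    v = (q1 + q2) / (c1 + c2)   # common final potential
    return c1 * v, c2 * v

# Equal capacitances, +4 C and +8 C, give +6 C each:
share_charge(4, 1, 8, 1)
# A 10-farad object at +20 C touching a 5-farad object at -8 C
# ends with +8 C and +4 C respectively:
share_charge(20, 10, -8, 5)
```

Both examples reproduce the figures given in the text.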

C Charging by Induction

A charged object may induce a charge in a nearby neutral object without touching it. For
example, if a positively charged object is brought near a neutral object, the electrons in the
neutral object are attracted to the positive object. Some of these electrons flow to the side of
the neutral object that is nearest to the positive object. This side of the neutral object
accumulates electrons and becomes negatively charged. Because electrons leave the far side
of the neutral object while its protons remain stationary, that side becomes positively charged.
Since the negatively charged side of the neutral object is closest to the positive object, the
attraction between this side and the positive object is greater than the repulsion between the
positively charged side and the positive object. The net effect is an attraction between the
objects. Similarly, when a negatively charged object is brought near a neutral object, the
negative object induces a positive charge on the near side of the neutral object and a negative
charge on the far side. As before, the net effect is an attraction between the objects.

The induced charges described above are not permanent. As soon as the charged object is
taken away, the electrons on the other object redistribute themselves evenly over it, so that it
again becomes neutral.

An object can also be charged permanently by induction. If a negatively charged object, A, is
brought near a neutral object, B, the electrons on B are repelled as far as possible from A and
flow to the other side of B. If that side of B is then connected to the ground by a good
conductor, such as a metal wire, the electrons flow out through the wire into the ground. The
ground can receive almost any amount of charge because Earth, being neutral, has an
enormous capacitance. Object B is said to be grounded by the wire connecting it to Earth.

If this wire is then removed, B has a positive charge, since it has lost electrons to Earth. Thus
B has been permanently charged by induction. Even if A is subsequently removed, B still
remains positive because the wire has been disconnected and B cannot regain electrons from
Earth to neutralize its positive charge.

An electric current is a movement of charge. When two objects with different charges touch
and redistribute their charges, an electric current flows from one object to the other until the
charge is distributed according to the capacitances of the objects. If two objects are connected
by a material that lets charge flow easily, such as a copper wire, then an electric current flows
from one object to the other through the wire. Electric current can be demonstrated by
connecting a small light bulb to an electric battery by two copper wires. When the connections
are properly made, current flows through the wires and the bulb, causing the bulb to glow.

Current that flows in one direction only, such as the current in a battery-powered flashlight, is
called direct current. Current that flows back and forth, reversing direction again and again, is
called alternating current. Direct current, which is used in most battery-powered devices, is
easier to understand than alternating current. Most of the following discussion focuses on
direct current. Alternating current, which is used in most devices that are “plugged in” to
electrical outlets in buildings, will be discussed in the Alternating Current section of this article.

Other properties that are used to quantify and compare electric currents are the voltage (also
called electromotive force) driving the current and the resistance of the conductor to the
passage of the current. The amount of current, voltage, and resistance in any circuit are all
related through an equation called Ohm’s law.

A Conductors and Insulators

Conductors are materials that allow an electric current to flow through them easily. Most
metals are good conductors.

Substances that do not allow electric current to flow through them are called insulators,
nonconductors, or dielectrics. Rubber, glass, and air are common insulators. Electricians wear
rubber gloves so that electric current will not pass from electrical equipment to their bodies.
However, if an object contains a sufficient amount of charge, the charge can arc, or jump,
through an insulator to another object. For example, if you shuffle across a wool rug and then
hold your finger very close to, but not in contact with, a metal doorknob or radiator, current
will arc through the air from your finger to the doorknob or radiator, even though air is an
insulator. In the dark, the passage of the current through the air is visible as a tiny spark.

B Measuring Electric Current

Electric current is measured in units called amperes (amp). If 1 coulomb of charge flows past
each point of a wire every second, the wire is carrying a current of 1 amp. If 2 coulombs flow
past each point in a second, the current is 2 amp. See also Electric Meters.
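The definition above, current as charge passing per unit time, can be sketched in a few lines of Python; the function name and figures are illustrative:

```python
# Electric current as charge flow: I = Q / t (amperes = coulombs per second).
# A minimal sketch of the definition given above.

def current_amperes(charge_coulombs, time_seconds):
    """Return the current in amperes for a given charge passing in a given time."""
    return charge_coulombs / time_seconds

# 1 coulomb per second is 1 amp; 2 coulombs per second is 2 amp.
print(current_amperes(1.0, 1.0))  # 1.0
print(current_amperes(2.0, 1.0))  # 2.0
```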

C Voltage

When the two terminals of a battery are connected by a conductor, an electric current flows
through the conductor. One terminal continuously sends electrons into the conductor, while
the other continuously receives electrons from it. The current flow is caused by the voltage, or
potential difference, between the terminals. The more willing the terminals are to give up and
receive electrons, the higher the voltage. Voltage is measured in units called volts. Another
name for a voltage produced by a source of electric current is electromotive force.

D Resistance

A conductor allows an electric current to flow through it, but it does not permit the current to
flow with perfect freedom. Collisions between the electrons and the atoms of the conductor
interfere with the flow of electrons. This phenomenon is known as resistance. Resistance is
measured in units called ohms. The symbol for ohms is the Greek letter omega, Ω.

A good conductor is one that has low resistance. A good insulator has a very high resistance.
At commonly encountered temperatures, silver is the best conductor and copper is the second
best. Electric wires are usually made of copper, which is less expensive than silver.

The resistance of a piece of wire depends on its length, and its cross-sectional area, or
thickness. The longer the wire is, the greater its resistance. If one wire is twice as long as a
wire of identical diameter and material, the longer wire offers twice as much resistance as the
shorter one. A thicker wire, however, has less resistance, because a thick wire offers more
room for an electric current to pass through than a thin wire does. A wire whose cross-
sectional area is twice that of another wire of equal length and similar material has only half
the resistance of the thinner wire. Scientists describe this relationship between resistance,
length, and area by saying that resistance is proportional to length and inversely proportional
to cross-sectional area.
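This proportionality is commonly written R = ρL/A, where ρ is the resistivity of the material. A short Python sketch; the copper resistivity value (about 1.68 × 10⁻⁸ ohm-meters at room temperature) is a standard figure not given in the text:

```python
# Resistance of a wire: R = rho * L / A. Resistance grows with length and
# shrinks with cross-sectional area, exactly as described above.

def wire_resistance(resistivity, length_m, area_m2):
    return resistivity * length_m / area_m2

RHO_COPPER = 1.68e-8  # ohm-meters, approximate room-temperature value

r1 = wire_resistance(RHO_COPPER, 10.0, 1e-6)  # 10 m of 1 mm^2 copper wire
r2 = wire_resistance(RHO_COPPER, 20.0, 1e-6)  # twice the length -> twice R
r3 = wire_resistance(RHO_COPPER, 10.0, 2e-6)  # twice the area -> half R
print(r1, r2, r3)
```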

Usually, the higher the temperature of a wire, the greater its resistance. The resistance of
some materials drops to zero at very low temperatures. This phenomenon is known as
superconductivity.

E Ohm’s Law

The relationship between current, voltage, and resistance is given by Ohm’s law. This law
states that the amount of current passing through a conductor is directly proportional to the
voltage across the conductor and inversely proportional to the resistance of the conductor.
Ohm’s law can be expressed as an equation, V = IR, where V is the difference in volts
between two locations (called the potential difference), I is the amount of current in amperes
that is flowing between these two points, and R is the resistance in ohms of the conductor
between the two locations of interest. V = IR can also be written R = V/I and I = V/R. If any
two of the quantities are known, the third can be calculated. For example, if a potential
difference of 110 volts sends a 10-amp current through a conductor, then the resistance of the
conductor is R = V/I = 110/10 = 11 ohms. If V = 110 and R = 11, then I = V/R = 110/11 =
10 amp.

Under normal conditions, resistance is constant in conductors made of metal. If the voltage is
raised to 220 in the example above, then R is still 11. The current I will be doubled, however,
since I = V/R = 220/11 = 20 amp.
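The three forms of Ohm's law can be checked with a small Python sketch of the worked example above:

```python
# Ohm's law, V = I * R, rearranged the three ways shown in the text.

def voltage(i, r):
    return i * r

def current(v, r):
    return v / r

def resistance(v, i):
    return v / i

# The worked example: 110 volts driving 10 amp implies 11 ohms, and
# doubling the voltage to 220 doubles the current to 20 amp.
print(resistance(110, 10))  # 11.0
print(current(220, 11))     # 20.0
```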

F Heat and Power

A conductor’s resistance to electric current produces heat. The greater the current passing
through the conductor, the greater the heat. Also, the greater the resistance, the greater the
heat. A current of I amp passing through a resistance of R ohms for t seconds generates an
amount of heat equal to I²Rt joules (a joule is a unit of energy equal to 0.239 calorie).

Energy is required to drive an electric current through a resistance. This energy is supplied by
the source of the current, such as a battery or an electric generator. The rate at which energy
is supplied to a device is called power, and it is often measured in units called watts. The
power P supplied by a current of I amp passing through a resistance of R ohms is given by P = I²R.
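
The heat formula I²Rt and the power relation P = I²R can be sketched in Python; the numbers below are illustrative, not from the text:

```python
# Joule heating and power: heat Q = I^2 * R * t joules, power P = I^2 * R watts.
# A joule is about 0.239 calorie.

def heat_joules(i_amp, r_ohm, t_sec):
    return i_amp ** 2 * r_ohm * t_sec

def power_watts(i_amp, r_ohm):
    return i_amp ** 2 * r_ohm

# Illustrative figures: 2 amp through 50 ohms for 10 seconds.
q = heat_joules(2, 50, 10)  # 2000 joules
p = power_watts(2, 50)      # 200 watts
print(q, p, q * 0.239)      # the last value converts joules to calories
```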


All electric currents consist of charges in motion. However, electric current is conducted
differently in solids, gases, and liquids. When an electric current flows in a solid conductor, the
flow is in one direction only, because the current is carried entirely by electrons. In liquids and
gases, however, a two-directional flow is made possible by the process of ionization.

A Conduction in Solids

The conduction of electric currents in solid substances is made possible by the presence of free
electrons (electrons that are free to move about). Most of the electrons in a bar of copper, for
example, are tightly bound to individual copper atoms. However, some are free to move from
atom to atom, enabling current to flow.

Ordinarily the motion of the free electrons is random; that is, as many of them are moving in
one direction as in another. However, if a voltage is applied to the two ends of a copper bar by
means of a battery, the free electrons tend to drift toward one end. This end is said to be at a
higher potential and is called the positive end. The other end is said to be at a lower potential
and is called the negative end. The function of a battery or other source of electric current is
to maintain potential difference. A battery does this by supplying electrons to the negative end
of the bar to replace those that drift to the positive end and also by absorbing electrons at the
positive end.

Insulators cannot conduct electric currents because all their electrons are tightly bound to
their atoms. A perfect insulator would allow no charge to be forced through it, but no such
substance is known at room temperature. The best insulators offer high but not infinite
resistance at room temperature.

Some substances that ordinarily have no free electrons, such as silicon and germanium, can
conduct electric currents when small amounts of certain impurities are added to them. Such
substances are called semiconductors. Semiconductors generally have a higher resistance to
the flow of current than does a conductor, such as copper, but a lower resistance than an
insulator, such as glass.

B Conduction in Gases

Gases normally contain few free electrons and are generally insulators. When a strong
potential difference is applied between two points inside a container filled with a gas, the few
free electrons are accelerated by the potential difference and collide with the atoms of the gas,
knocking free more electrons. The gas atoms become positively charged ions and the gas is
said to be ionized. The electrons move toward the high-potential (more positive) point, while
the ions move toward the low-potential (more negative) point. An electric current in a gas is
composed of these opposite flows of charges.

C Conduction in Liquid Solutions

Many substances become ionized when they dissolve in water or in some other liquid. An
example is ordinary table salt, sodium chloride (NaCl). When sodium chloride dissolves in
water, it separates into positive sodium ions, Na+, and negative chlorine ions, Cl-. If two points
in the solution are at different potentials, the negative ions drift toward the positive point,
while the positive ions drift toward the negative point. As in gases, the electric current is
composed of these flows of opposite charges. Thus, while water that is absolutely pure is an
insulator, water that contains even a slight impurity of an ionized substance is a conductor.

Since the positive and negative ions of a dissolved substance migrate to different points when
an electric current flows, the substance is gradually separated into two parts. This separation
is called electrolysis.


There are several different devices that can supply the voltage necessary to generate an
electric current. The two most common sources are generators and electrolytic cells.

A Generators

Generators use mechanical energy, such as water pouring through a dam or the motion of a
turbine driven by steam, to produce electricity. The electric outlets on the walls of homes and
other buildings, from which electricity to operate lights and appliances is drawn, are connected
to giant generators located in electric power stations. Each outlet contains two terminals. The
voltage between the terminals drives an electric current through the appliance that is plugged
into the outlet. See Electric Power Systems.

B Electrolytic Cells

Electrolytic cells use chemical energy to produce electricity. Chemical reactions within an
electrolytic cell produce a potential difference between the cell’s terminals. An electric battery
consists of a cell or group of cells connected together.

C Other Sources

There are many sources of electric current other than generators and electrolytic cells. Fuel
cells, for example, produce electricity through chemical reactions. Unlike electrolytic cells,
however, fuel cells do not store chemicals and therefore must be constantly refilled.

Certain sources of electric current operate on the principle that some metals hold onto their
electrons more strongly than other metals do. Platinum, for example, holds its electrons less
strongly than aluminum does. If a strip of platinum and a strip of aluminum are pressed
together under the proper conditions, some electrons will flow from the platinum to the
aluminum. As the aluminum gains electrons and becomes negative, the platinum loses
electrons and becomes positive.

The strength with which a metal holds its electrons varies with temperature. If two strips of
different metals are joined and the joint heated, electrons will pass from one strip to the
other. Electricity produced directly by heating is called thermoelectricity.

Some substances emit electrons when they are struck by light. Electricity produced in this way
is called photoelectricity. When pressure is applied to certain crystals, a potential difference
develops across them. Electricity thus produced is called piezoelectricity. Some microphones
work on this principle.


An electric circuit is an arrangement of electric current sources and conducting paths through
which a current can continuously flow. In a simple circuit consisting of a small light bulb, a
battery, and two pieces of wire, the electric current flows from the negative terminal of the
battery, through one piece of connecting wire, through the bulb filament (also a type of wire),
through the other piece of connecting wire, and back to the positive terminal of the battery.
When the electric current flows through the filament, the filament heats up and the bulb glows.

A switch can be placed in one of the connecting wires. A flashlight is an example of such a
circuit. When the switch is open, the connection is broken, electric current cannot flow through
the circuit, and the bulb does not light. When the switch is closed, current flows and the bulb lights.

The bulb filament may burn out if too much electric current flows through it. To prevent this
from happening, a fuse (circuit breaker) may be placed in the circuit. When too much current
flows through the fuse, a wire in the fuse heats up and melts, thereby breaking the circuit and
stopping the flow of current. The wire in the fuse is designed to melt before the filament would burn out.

The part of an electric circuit other than the source of electric current is called the load. The
load includes all appliances placed in the circuit, such as lights, radios, fans, buzzers, and
toasters. It also includes the connecting wires, as well as switches, fuses, and other devices.
The load forms a continuous conducting path between the terminals of the current source.

There are two basic ways in which the parts of a circuit are arranged. One arrangement is
called a series circuit, and the other is called a parallel circuit.

A Series Circuits

If various objects are arranged to form a single conducting path between the terminals of a
source of electric current, the objects are said to be connected in series. The electron current
first passes from the negative terminal of the source into the first object, then flows through
the other objects one after another, and finally returns to the positive terminal of the source.
The current is the same throughout the circuit. In the example of the light bulb, the wires,
bulb, switch, and fuse are connected in series.

When objects are connected in series, the electric current flows through them against the
resistance of the first object, then against the resistance of the next object, and so on.
Therefore the total resistance to the current is equal to the sum of the individual resistances.
If three objects with resistances R1, R2, and R3 are connected in series, their total resistance is
R1 + R2 + R3. For example, if a motor with a resistance of 48 ohms is connected to the
terminals of a current source by two wires, each with a resistance of 1 ohm, the total
resistance of the motor and wires is 48 + 1 + 1 = 50 ohms. If the voltage is 100 volts, a
current of 100/50 = 2 amp will flow through the circuit.

Voltage can be thought of as being used up by the objects in a circuit. The voltage that each
object uses up is called the voltage drop across that object. Voltage drop can be calculated
from the equation V = IR, where V is the voltage drop across the object, I is the amount of
current, and R is the resistance of the object.

In the example of the motor, the voltage drop in each wire is V = IR = 2 × 1 = 2 volts, and
the voltage drop in the motor is 2 × 48 = 96 volts. Adding up the voltage drops (2 + 2 + 96)
gives a total drop of 100 volts. In a series circuit the sum of the voltage drops across the
objects always equals the total voltage supplied by the source.
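The series example above, a 48-ohm motor connected by two 1-ohm wires to a 100-volt source, can be verified with a short Python sketch:

```python
# Series circuit: total resistance is the sum of the individual resistances,
# the same current flows everywhere, and the voltage drops sum to the source.

def series_resistance(resistances):
    return sum(resistances)

resistances = [1, 48, 1]                  # wire, motor, wire (ohms)
r_total = series_resistance(resistances)  # 50 ohms
i = 100 / r_total                         # 2 amp flows through the whole loop
drops = [i * r for r in resistances]      # voltage drop across each element
print(r_total, i, drops, sum(drops))      # the drops sum back to 100 volts
```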

B Parallel Circuits

If various objects are connected to form separate paths between the terminals of a source of
electric current, they are said to be connected in parallel. Each separate path is called a
branch of the circuit. Current from the source splits up and enters the various branches. After
flowing through the separate branches, the current merges again before reentering the
current source.

The total resistance of objects connected in parallel is less than that of any of the individual
resistances. This is because a parallel circuit offers more than one branch (path) for the
electric current, whereas a series circuit has only one path for all the current.

The electric current through a parallel circuit is distributed among the branches according to
the resistances of the branches. If each branch has the same resistance, then the current in
each will be equal. If the branches have different resistances, the current in each branch can
be determined from the equation I = V/R, where I is the amount of current in the branch, V is
the voltage, and R is the resistance of the branch.

The total resistance of a parallel circuit can be calculated from the equation

1/R = 1/R1 + 1/R2 + ...

where R is the total resistance and R1, R2, ... are the resistances of the branches. For example,
if a parallel circuit consists of three branches with resistances of 10, 15, and 30 ohms, then
1/R = 1/10 + 1/15 + 1/30 = 6/30 = 1/5.
Therefore, R = 5 ohms. In this circuit, a voltage of 150 volts would produce an electric current
of I = V/R = 150/5 = 30 amp.

The greater the resistance of a given branch, the smaller the portion of the electric current
flowing through that branch. If a parallel circuit of three branches, with resistances of 10, 15,
and 30 ohms, is connected to a 150-volt source, the branch with a resistance of 10 ohms
would receive a current of V/R = 150/10 = 15 amp. Similarly, the 15-ohm branch receives 10
amp, and the 30-ohm branch receives 5 amp. These branch currents add up to a total current
of 30 amp, which is the value obtained by dividing the voltage by the total resistance.
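A short Python sketch of the parallel example above, with branches of 10, 15, and 30 ohms across 150 volts:

```python
# Parallel circuit: 1/R = 1/R1 + 1/R2 + ..., and each branch carries I = V/R.

def parallel_resistance(resistances):
    return 1 / sum(1 / r for r in resistances)

branches = [10, 15, 30]
r_total = parallel_resistance(branches)        # 5 ohms
branch_currents = [150 / r for r in branches]  # 15, 10, and 5 amp
print(r_total, branch_currents, sum(branch_currents))  # currents total 30 amp
```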

C Series-Parallel Circuits

Many circuits combine series and parallel arrangements. One branch of a parallel circuit, for
example, may have within it several objects in a series. The resistances of these objects must
be combined according to the rules for a series circuit. On the other hand, a series circuit may
at one point divide into two or more branches and then rejoin. The branches are parallel and
must be treated by the rules for parallel circuits.

Complicated series-parallel circuits may be analyzed by means of two rules called Kirchhoff’s
laws. These rules make it possible to find the amount of electric current flowing through each
part of any circuit, as well as the voltage across it. The first of Kirchhoff’s laws states that at
any junction in a circuit through which a steady current is flowing, the sum of the currents
flowing to the junction is equal to the sum of the currents flowing away from that point. The
second law states that, starting at any point in a circuit and following any closed path back to
the starting point, the net sum of the voltage encountered will be equal to the net sum of the
products of the resistances encountered and the currents flowing through them. In other
words, Ohm’s law applies not only to a circuit as a whole, but also to any given section of a circuit.
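
Kirchhoff's laws can be illustrated with a small series-parallel sketch in Python; the circuit values are illustrative, not taken from the text:

```python
# A 6-ohm resistor in series with a parallel pair of 6 and 12 ohms,
# across a 20-volt source; both of Kirchhoff's laws are checked below.

r_series = 6
r_parallel = 1 / (1 / 6 + 1 / 12)  # the parallel pair combines to 4 ohms
r_total = r_series + r_parallel    # 10 ohms in all
i_total = 20 / r_total             # 2 amp leaves the source

v_pair = i_total * r_parallel      # 8 volts across the parallel pair
i1, i2 = v_pair / 6, v_pair / 12   # currents in the two branches

# Current law: the branch currents rejoin to equal the total current.
assert abs((i1 + i2) - i_total) < 1e-9
# Voltage law: the drops around the loop sum to the source voltage.
assert abs(i_total * r_series + v_pair - 20) < 1e-9
print(i_total, i1, i2)
```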

D Series and Parallel Sources

Sources of electric current can also be connected in various ways. Sources can be arranged in
series by connecting a terminal of one source to the opposite terminal of the next source. For
example, if the positive terminal of battery A is connected to the negative terminal of battery
B, and the positive terminal of battery B to the negative terminal of battery C, then batteries
A, B, and C are in series. The load is then placed between the positive terminal of battery C
and the negative terminal of battery A.

When sources of electric current are connected in series, their total voltage is equal to the sum
of their individual voltages. For example, three 1.5-volt batteries connected in series furnish a
total of 4.5 volts. If the load is 9 ohms, the batteries send a current of 4.5/9 = 0.5 amp
through the load.

Current sources may be arranged in parallel by connecting all the positive terminals together
and all the negative terminals together. The load is then placed between the group of positive
terminals and the group of negative terminals.

Arranging sources in parallel does not increase the voltage. If three 1.5-volt batteries are
connected in parallel, the total voltage is still 1.5 volts. Batteries should not be connected in
parallel unless they have approximately the same voltage. If a high voltage battery is
connected in parallel with a low voltage battery, the high voltage battery will force an electric
current through the low voltage battery and damage it.


A single electric charge can attract or repel, and it will demonstrate this ability as soon as
another charge is brought near it. The ability to attract or repel can be thought of as being
stored in the region around the charge. This region is called the electric field of force of the
charge. All charged objects have electric fields around them.

A Lines of Force

An electric field can be visualized as consisting of imaginary lines called lines of force. Each
line corresponds to the path that a positive charge would take if placed in the field on that
line. The lines in the field around a positively charged object radiate in all directions away from
the object, since the object repels positive charges. Conversely, the lines in the field around a
negatively charged object are directed toward the object. If a positive and a negative object
are placed near each other, their lines of force connect. If two objects with similar charges are
placed near each other, the lines do not connect. Lines of force never cross each other.

Lines of force are only imaginary. Nevertheless, the idea of lines of force helps in visualizing
an electric field.

B Field Direction

When a charge is placed at any given point in an electric field, it is acted on by a force that
tends to push it in a certain direction. This direction is called the direction of the field at that
point. The field direction can be represented graphically by the lines of force near an electric charge.

C Field Strength

The strength, or intensity, of a field at any point is defined as the force exerted on a charge of
1 coulomb placed at that point. For example, if a point charge of 1 coulomb is subjected to a
force of 10 newtons, the electric field is 10 newtons per coulomb at that point. An object with
a charge of 5 coulombs would be subjected to a force of 50 newtons at the same point.
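Field strength as force per unit charge can be sketched directly from the example above:

```python
# Electric field strength is force per coulomb, so the force on a charge q
# in a field E is F = q * E (newtons, coulombs, newtons per coulomb).

def force_newtons(charge_coulombs, field_n_per_c):
    return charge_coulombs * field_n_per_c

# The example in the text: a 10 N/C field exerts 10 N on 1 C and 50 N on 5 C.
print(force_newtons(1, 10))  # 10
print(force_newtons(5, 10))  # 50
```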

Field strength is represented graphically by the closeness (density) of the lines of force. Where
the lines are close together, the field is strong. Where they are far apart, the field is weak.
Near a charge, the field is strong and the lines are close together. At greater distances from
the charge, the field weakens and the lines are not as close together. The field strength values
that the lines represent are relative, since a field can be drawn with as many lines as desired.


Many similarities exist between electric and magnetic phenomena. A magnet has two opposite
poles, referred to as north and south. Opposite magnetic poles attract each other, and similar
magnetic poles repel each other, exactly as happens with electric charges.

The force with which magnetic poles attract or repel each other depends on the strength of the
poles and the distance between them. This relationship is similar to Coulomb’s inverse-square
law for electric charges. See also Magnetism.

The similarities between electric and magnetic phenomena indicate that electricity and
magnetism are related. Electricity produces magnetic effects and magnetism produces electric
effects. The relationship between electricity and magnetism is called electromagnetism. See
also Quantum Electrodynamics.

A Magnetic Effects of Electricity

It has been noted that an electric field exists around any electric charge. If electric charges
are moving, they constitute an electric current. The magnetic effect of electricity is
demonstrated by the fact that a magnetic field exists around any electric current. The field can
be detected when a magnet is brought close to the current-carrying conductor.

The magnetic field around an electric current can be thought of as lines of magnetic force that
form closed circular loops around the wire that carries the current. The direction of the
magnetic field can be determined by a convenient rule called the right-hand rule. To apply this
rule, the thumb of the right hand is pointed in the direction in which the current is flowing and
the fingers are curled around the wire. The direction of the fingers then indicates the direction
of the lines of magnetic force. (The right-hand rule assumes that current flows from positive to negative.)

B Motor Effect

As already stated, a magnetic field exists around a wire carrying an electric current, and a
magnetic field exists between the two poles of a magnet. If the wire is placed between the
poles, the magnetic fields interact to produce a force that tends to push the wire out of the
field. This phenomenon, known as the motor effect, is used in electric motors. See also Electric
Motors and Generators.

C Solenoids

If a wire is bent into many continuous loops to form a long spiral coil, then the magnetic lines
of force tend to go through the center of the coil from one end to the other rather than around
the individual loops of wire. Such a coil, called a solenoid, behaves in the same way as a
magnet and is the basis for all electromagnets. The end from which the lines exit is the north
pole and the end into which the lines reenter is the south pole. The polarity of the coil can be
determined by applying the left-hand coil rule. If the left hand grasps the coil in such a way
that the fingers curl around in the direction of the electron current, then the thumb points in
the direction of the north pole.

D Electric Effects of Magnetism

If a wire is moved through a magnetic field in such a way that it cuts the magnetic lines of
force, a voltage is created across the wire. An electric current will flow through the wire if the
two ends of the wire are connected by a conductor to form a circuit. This current is called an
induced current, and the induction of a current in this manner is called electromagnetic induction.

It does not matter whether the wire moves or the magnetic field moves, provided that the
wire cuts through lines of force. If a magnet is moved near a stationary wire, the lines of
magnetic force are cut by the wire and an electric current is induced in the wire.

Like any electric current, an induced current generates a magnetic field around it. Lenz’s law
expresses an important fact concerning this magnetic field: The motion of an induced current
is always in such a direction that its magnetic field opposes the magnetic field that is causing
the current.


Alternating Current

An alternating current is an electric current that changes direction at regular intervals. When a
conductor is moved back and forth in a magnetic field, the flow of current in the conductor will
reverse direction as often as the physical motion of the conductor reverses direction. Most
electric power stations supply electricity in the form of alternating currents. The current flows
first in one direction, builds up to a maximum in that direction, and dies down to zero. It then
immediately starts flowing in the opposite direction, builds up to a maximum in that direction,
and again dies down to zero. Then it immediately starts in the first direction again. This
surging back and forth can occur at a very rapid rate.

Two consecutive surges, one in each direction, are called a cycle. The number of cycles
completed by an electric current in one second is called the frequency of the current. In the
United States and Canada, most currents have a frequency of 60 cycles per second.
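The back-and-forth surging described above is commonly sinusoidal. A Python sketch; the sine shape is an assumption, since the text does not specify a waveform, but 60 Hz matches the United States and Canada:

```python
# A sinusoidal alternating current: i(t) = I_peak * sin(2 * pi * f * t).

import math

def ac_current(t, i_peak=1.0, freq_hz=60.0):
    return i_peak * math.sin(2 * math.pi * freq_hz * t)

# One cycle lasts 1/60 second: the current peaks in one direction, falls to
# zero, peaks in the opposite direction, and falls to zero again.
period = 1 / 60
print(ac_current(period / 4))      # near +1 (maximum one way)
print(ac_current(3 * period / 4))  # near -1 (maximum the other way)
```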

Although direct and alternating currents share some characteristics, some properties of
alternating current are somewhat different from those of direct current. Alternating currents
also produce phenomena that direct currents do not. Some of the unique traits of alternating
current make it ideal for power generation, transmission, and use.

A Amperage and Voltage

The strength, or amperage, of an alternating current varies continuously between zero and a
maximum. Since it is inconvenient to take into account a whole range of amperage values,
scientists simply deal with the effective amperage. Like a direct current, an alternating current
produces heat as it passes through a conductor. The effective amperage of an alternating
current is equal to the amperage of a direct current that produces heat at the same rate. In
other words, 1 effective amp of alternating current through a conductor produces heat at the
same rate as 1 amp of direct current flowing through the same conductor. Similarly, the
voltage of an alternating current is considered in terms of the effective voltage.
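The heating-based definition of effective amperage can be reproduced numerically. A Python sketch assuming a sinusoidal current (the text defines effective amperage only through its heating rate, not a waveform):

```python
# Effective (root-mean-square) current: average i^2 over one cycle, since
# heating is proportional to i^2, then take the square root. For a sine
# wave this works out to peak / sqrt(2).

import math

def effective_current(i_peak, samples=10000):
    total = 0.0
    for k in range(samples):
        t = k / samples  # one full cycle, 0 <= t < 1
        i = i_peak * math.sin(2 * math.pi * t)
        total += i * i
    return math.sqrt(total / samples)

print(effective_current(1.0))  # close to 1 / sqrt(2), about 0.707
```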

B Impedance

Like direct current, alternating current is hindered by the resistance of the conductor through
which it passes. In addition, however, various effects produced by the alternating current itself
hinder the alternating current. These effects depend on the frequency of the current and on
the design of the circuit, and together they are called reactance. The total hindering effect on
an alternating current is called impedance. It is equal to the resistance plus the reactance.

The relationship of effective current, effective voltage, and impedance is expressed by V = IZ,
where V is the effective voltage in volts, I is the effective current in amperes (amp), and Z is
the impedance in ohms.

C Advantages of Alternating Current

Alternating current has several characteristics that make it more attractive than direct current
as a source of electric power, both for industrial installations and in the home. The most
important of these characteristics is that the voltage or the current may be changed to almost
any value desired by means of a simple electromagnetic device called a transformer. When an
alternating current surges back and forth through a coil of wire, the magnetic field about the
coil expands and collapses and then expands in a field of opposite polarity and again collapses.
In a transformer, a coil of wire is placed in the magnetic field of the first coil, but not in direct
electric connection with it. The movement of the magnetic field induces an alternating current
in the second coil. If the second coil has more turns than the first, the voltage induced in the
second coil will be larger than the voltage in the first, because the field is acting on a greater
number of individual conductors. Conversely, if there are fewer turns in the second coil, the
secondary, or induced, voltage will be smaller than the primary voltage.

The action of a transformer makes possible the economical transmission of electric power over
long distances. If 200,000 watts of power is supplied to a power line, it may be equally well
supplied by a potential of 200,000 volts and a current of 1 amp or by a potential of 2,000 volts
and a current of 100 amp, because power is equal to the product of voltage and current. The
power lost in the line through heating, however, is equal to the square of the current times the
resistance. Thus, if the resistance of the line is 10 ohms, the loss on the 200,000-volt line will
be 10 watts, whereas the loss on the 2,000-volt line will be 100,000 watts, or half the
available power. Accordingly, power companies tend to favor high voltage lines for long
distance transmission.
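The transmission arithmetic above can be checked with a short Python sketch:

```python
# Delivering 200,000 watts over a line with 10 ohms of resistance, either at
# 200,000 volts (1 amp) or at 2,000 volts (100 amp). Line loss is I^2 * R.

def line_loss_watts(power_watts, volts, line_resistance_ohms):
    i = power_watts / volts  # current needed to carry this power at this voltage
    return i ** 2 * line_resistance_ohms

print(line_loss_watts(200_000, 200_000, 10))  # 10 watts lost
print(line_loss_watts(200_000, 2_000, 10))    # 100,000 watts lost, half the power
```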


Humans have known about the existence of static electricity for thousands of years, but
scientists did not make great progress in understanding electricity until the 1700s.

A Early Theories

The ancient Greeks observed that amber, when rubbed, attracted small, light objects. About
600 BC Greek philosopher Thales of Miletus held that amber had a soul, since it could make
other objects move. In a treatise written about three centuries later, another Greek
philosopher, Theophrastus, stated that other substances also have this power.

For almost 2,000 years after Theophrastus, little progress was made in the study of electricity.
In 1600 English physician William Gilbert published a book in which he noted that many
substances besides amber could be charged by rubbing. He gave these substances the Latin
name electrica, which is derived from the Greek word elektron (which means “amber”). The
word electricity was first used by English writer and physician Sir Thomas Browne in 1646.

The fact that electricity can flow through a substance was discovered by 17th-century German
physicist Otto von Guericke, who observed conduction in a linen thread. Von Guericke also
described the first machine for producing an electric charge in 1672. The machine consisted of
a sulfur sphere turned by a crank. When a hand was held against the sphere, a charge was
induced on the sphere. Conduction was rediscovered independently by Englishman Stephen
Gray during the early 1700s. Gray also noted that some substances are good conductors while
others are insulators.

Also during the early 1700s, Frenchman Charles Dufay observed that electric charges are of
two kinds. He found that opposite kinds attract each other while similar kinds repel. Dufay
called one kind vitreous and the other kind resinous.

American scientist Benjamin Franklin theorized that electricity is a kind of fluid. According to
Franklin’s theory, when two objects are rubbed together, electric fluid flows from one object to
the other. The object that gains electric fluid acquires a vitreous charge, which Franklin called
positive charge. The object that loses electric fluid acquires a resinous charge, which Franklin
called negative charge.

Franklin demonstrated that lightning is a form of electricity. In 1752 he constructed a kite and
flew it during a storm. When the string became wet enough to conduct, Franklin, who stood
under a shed and held the string by a dry silk cord, put his hand near a metal key attached to
the string. A spark jumped. Electric charge gathered by the kite had flowed down the wet
string to the key and then jumped across an air gap to flow to the ground through Franklin’s
body. Franklin also showed that a Leyden jar, a device able to store electric charge, could be
charged by touching it to the key when electric current was flowing down the string.

Around 1766 British chemist Joseph Priestley proved experimentally that the force between
electric charges varies inversely with the square of the distance between the charges. Priestley
also demonstrated that an electric charge distributes itself uniformly over the surface of a
hollow metal sphere and that no charge and no electric field of force exists within such a
sphere. French physicist Charles Augustin de Coulomb invented a torsion balance to
measure accurately the force exerted by electric charges. With this apparatus he confirmed
Priestley’s observations and also showed that the force between two charges is proportional to
the product of the individual charges.

In 1791 Italian biologist Luigi Galvani published the results of experiments that he had
performed on the muscles of dead frogs. Galvani had found earlier that the muscles in a frog’s
leg would contract if he applied an electric current to them.

B 19th and 20th Centuries

In 1800 another Italian scientist, Alessandro Volta, announced that he had created the voltaic
pile, a form of electric battery. The voltaic pile made the study of electric current much easier
by providing a reliable, steady source of current. Danish physicist Hans Christian Oersted
demonstrated that electric currents are surrounded by magnetic fields in 1819. Shortly
afterward, André Marie Ampère discovered the relationship known as Ampere’s law, which
gives the direction of the magnetic field. Ampère also demonstrated the magnetic properties of
solenoids. Georg Simon Ohm, a German high school teacher, investigated the conducting
abilities of various metals. In 1827 Ohm published his results, including the relationship now
known as Ohm’s law.

In 1830 American physicist Joseph Henry discovered that a moving magnetic field induces an
electric current. The same effect was discovered a year later by English scientist Michael
Faraday. Faraday introduced the concept of lines of force, a concept that proved extremely
useful in the study of electricity.

About 1840 British physicist James Prescott Joule and German scientist Hermann Ludwig
Ferdinand von Helmholtz demonstrated that electricity is a form of energy and that electric
circuits obey the law of the conservation of energy.

Also during the 19th century, British physicist James Clerk Maxwell investigated the properties
of electromagnetic waves and light and developed the theory that the two are identical.
Maxwell summed up almost all the laws of electricity and magnetism in four mathematical
equations. His work paved the way for German physicist Heinrich Rudolf Hertz, who produced
and detected electric waves in the atmosphere in 1886, and for Italian engineer Guglielmo
Marconi, who harnessed these waves in 1895 to produce the first practical radio signaling system.

The electron theory, which is the basis of modern electrical theory, was first advanced by
Dutch physicist Hendrik Antoon Lorentz in 1892. American physicist Robert Andrews Millikan
accurately measured the charge on the electron in 1909. The widespread use of electricity as a
source of power is largely due to the work of pioneering American engineers and inventors
such as Thomas Alva Edison, Nikola Tesla, and Charles Proteus Steinmetz during the late 19th
and early 20th centuries.

Microsoft ® Encarta ® Reference Library 2005. © 1993-2004 Microsoft Corporation. All rights reserved.

Worm Gears
If you want to create a high gear ratio, nothing beats the worm gear. In a worm gear, a
threaded shaft engages the teeth on a gear. Each time the shaft spins one revolution, the
gear moves one tooth forward. If the gear has 40 teeth, you have a 40:1 gear ratio in a very
small package. Here's one example from a windshield wiper.

A mechanical odometer is another place that uses a lot of worm gears:

There are three worm gears visible in this odometer. See How
Odometers Work for more information.

Planetary Gears
There are many other ways to use gears. One specialized gear train is called a planetary
gear train. Planetary gears solve the following problem. Let's say you want a gear ratio of
6:1 with the input turning in the same direction as the output. One way to create that ratio is
with the following three-gear train:

In this train, the blue gear has six times the diameter of the yellow gear (giving a 6:1 ratio).
The size of the red gear is not important because it is just there to reverse the direction of
rotation so that the blue and yellow gears turn the same way. However, imagine that you
want the axis of the output gear to be the same as that of the input gear. A common place
where this same-axis capability is needed is in an electric screwdriver. In that case, you can
use a planetary gear system, as shown here:
In this gear system, the yellow gear (the sun) engages all three red gears (the planets)
simultaneously. All three are attached to a plate (the planet carrier), and they engage the
inside of the blue gear (the ring) instead of the outside. Because there are three red gears
instead of one, this gear train is extremely rugged. The output shaft is attached to the blue
ring gear, and the planet carrier is held stationary -- this gives the same 6:1 gear ratio. You
can see a picture of a two-stage planetary gear system on the electric screwdriver page, and
a three-stage planetary gear system on the sprinkler page. You can also find planetary gear
systems inside automatic transmissions.

Another interesting thing about planetary gearsets is that they can produce different gear
ratios depending on which gear you use as the input, which gear you use as the output, and
which one you hold still. For instance, if the input is the sun gear, and we hold the ring gear
stationary and attach the output shaft to the planet carrier, we get a different gear ratio. In
this case, the planet carrier and planets orbit the sun gear, so instead of the sun gear having
to spin six times for the planet carrier to make it around once, it has to spin seven times. This
is because the planet carrier circled the sun gear once in the same direction as it was
spinning, subtracting one revolution from the sun gear. So in this case, we get a 7:1 gear ratio.

You could rearrange things again, and this time hold the sun gear stationary, take the output
from the planet carrier and hook the input up to the ring gear. This would give you a 1.17:1
gear reduction. An automatic transmission uses planetary gearsets to create the different
gear ratios, using clutches and brake bands to hold different parts of the gearset stationary
and change the inputs and outputs.
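The three configurations described above follow a simple pattern, which can be sketched in Python. The tooth counts (12 and 72, giving the article's 6:1 ring-to-sun ratio) are assumptions for the sake of the example, and the ratios are magnitudes only, ignoring direction of rotation:

```python
def planetary_ratios(sun_teeth, ring_teeth):
    """Input revolutions per output revolution for each way of holding
    one member of a planetary gearset stationary."""
    r = ring_teeth / sun_teeth
    return {
        "carrier_held": r,        # sun in, ring out
        "ring_held": 1 + r,       # sun in, carrier out
        "sun_held": 1 + 1 / r,    # ring in, carrier out
    }

ratios = planetary_ratios(sun_teeth=12, ring_teeth=72)
print(ratios["carrier_held"])   # 6.0  -- the basic 6:1 ratio
print(ratios["ring_held"])      # 7.0  -- the 7:1 case described above
print(ratios["sun_held"])       # about 1.17 -- the 1.17:1 case
```

One gearset, three different ratios, depending only on which member is held still: this is exactly the property an automatic transmission exploits.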

Gear Trains
To create large gear ratios, gears are often connected together in gear trains. The
right-hand (purple) gear in such a train is made in two parts: a small gear and a larger
gear connected together, one on top of the other. Gear trains
often consist of multiple gears in the train, as shown in the next two figures.
In the case above, the purple gear turns at a rate twice that of the blue gear. The green gear
turns at twice the rate of the purple gear, and the red gear at twice the rate of the green
gear. The gear train shown below has a higher gear ratio:

In this train, the smaller gears are one-fifth the size of the larger gears. That means that if
you connect the purple gear to a motor spinning at 100 revolutions per minute (rpm), the
green gear will turn at a rate of 500 rpm and the red gear will turn at a rate of 2,500 rpm. In
the same way, you could attach a 2,500-rpm motor to the red gear to get 100 rpm on the
purple gear. If you can see inside your power meter and it's of the older style with five
mechanical dials, you will see that the five dials are connected to one another through a gear
train like this, with the gears having a ratio of 10:1. Because the dials are directly connected
to one another, they spin in opposite directions (you will see that the numbers are reversed
on dials next to one another).
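A multi-stage train like this multiplies the stage ratios together, which is easy to verify in a short sketch. The stage ratios below match the article's example (each small gear one-fifth the size of the gear driving it):

```python
def output_rpm(input_rpm, stage_ratios):
    """Walk a gear train stage by stage. Each ratio is driven-gear teeth
    divided by driving-gear teeth; the speed divides by that ratio."""
    rpm = input_rpm
    for ratio in stage_ratios:
        rpm /= ratio
    return rpm

# Two step-up stages of 1:5 each: a 100-rpm motor drives the red gear at 2,500 rpm.
print(output_rpm(100, [1 / 5, 1 / 5]))   # 2500.0
# Run the same train the other way to step 2,500 rpm down to 100 rpm.
print(output_rpm(2500, [5, 5]))          # 100.0
```

The same function covers the power-meter dials as well: five stages of 10:1 each.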

An Example
Imagine the following situation: You have two red gears that you want to keep synchronized,
but they are some distance apart. You can place a big gear between them if you want them
to have the same direction of rotation:

Or you can use two equal-sized gears if you want them to have opposite directions of rotation:

However, in both of these cases the extra gears are likely to be heavy and you need to
create axles for them. In these cases, the common solution is to use either a chain or a
toothed belt, as shown here:
The advantages of chains and belts are light weight, the ability to separate the two gears by
some distance, and the ability to connect many gears together on the same chain or belt. For
example, in a car engine, the same toothed belt might engage the crankshaft, two camshafts
and the alternator. If you had to use gears in place of the belt, it would be a lot harder.

For more information on gears and their applications, check out the links on the next page!

Machine, simple device that affects the force, or effort, needed to do a certain amount of
work. Machines can make a tough job seem easier by enabling a person to apply less force or
to apply force in a direction that is easier to manipulate. Machines lessen the force needed to
perform work by lengthening the distance over which the force is applied. Although less force
is subsequently used, the amount of work that results remains the same. Machines can also
increase the speed at which work makes an object travel, but increasing speed requires the
application of more effort.

There are four types of simple machines: the lever, the pulley, the inclined plane, and the
wheel and axle. Each machine affects the direction or the amount of effort needed to do work.
Most mechanical machines, such as automobiles or power tools, are complex machines
composed of many parts. However, no matter how complicated a machine is, it is composed of
some combination of the four simple machines. Although these simple machines have been
known and used for thousands of years, no other simple machines have been discovered. Two
other common simple machines, the screw and the wedge, are really adaptations of the
inclined plane.

Some common examples of simple machines are the shovel (a form of lever), the pulley at the
top of a flagpole, the steering wheel of an automobile (a form of wheel and axle), and the
wheelchair ramp (a form of inclined plane). An everyday example of a complex machine is the
can opener, which combines a lever (the hinged handle), a wheel and axle (the turning knob),
and a wedge (the sharpened cutting disk).


Machines help people do work by changing the amount of force and the distance needed to
move objects. Work, in physics, is the amount of force used to move an object multiplied by
the distance over which the force is applied. This can be written in mathematical terms:

Work = Force × Distance

Force is defined as a push or a pull exerted on one body by another, such as a hand pushing a
book across a table. Distance refers to the distance a load is moved by the force. The
advantage that a machine gives its user by affecting the amount of force needed is called the
machine’s mechanical advantage, or MA. Knowing the mechanical advantage of a machine
allows a user to predict how much force is needed to lift a given object.
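The work equation and the force-distance tradeoff can be illustrated with a small sketch. The specific forces and distances here are hypothetical, chosen only to show that halving the force while doubling the distance leaves the work unchanged:

```python
def work_joules(force_newtons, distance_meters):
    """Work = Force x Distance."""
    return force_newtons * distance_meters

# Lifting directly: 200 N applied over 3 m.
print(work_joules(200, 3))   # 600
# Through a machine that halves the force and doubles the distance:
print(work_joules(100, 6))   # 600 -- the same work, with less effort
```

The machine does not reduce the work; it only trades force for distance.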

A How Machines Work

A machine can make a given task seem easier by reducing the amount of force needed to
move an object, by changing the direction in which the force must be applied, or by doing
both. A machine decreases the amount of force needed by increasing the distance over which
the effort is applied to move the object. The amount of work needed to overcome gravity and
lift a given load always remains the same, but spreading the necessary effort out over a longer
distance makes the task seem easier. This is why walking gradually up a gentle slope is easier
than walking up a steep slope. The distance walked on the gentle slope is longer, but the
effort needed to reach the top is less. A gentle slope is a form of inclined plane.

Applying effort over a greater distance takes more time, and this slows down the speed of
work. Some machines can actually speed up a task. They do this by reducing the distance
over which the effort is applied. If the distance in the equation defining work (Work = Force ×
Distance) is reduced, then the force must therefore be increased to keep work constant.
Increasing the speed at which a task is performed requires more force than would otherwise
be necessary. The wheel and axle and certain types of levers are simple machines that can
either speed up a task (requiring more effort) or slow down a task (requiring less effort). The
various gears on a multispeed bicycle (another complex machine) work in a manner similar to
that of the wheel and axle. Some gears require more effort, but they make the bicycle travel
faster on flat terrain. Other gears require less effort and are useful for climbing hills.

People use simple machines, such as levers and pulleys, to make manual chores easier. The
mechanical energy in a person’s muscles makes the machine do work. Not all machines use
muscle power, however, to do work. A complex machine, such as an airplane engine or an
elevator, is made up of many simple machines. Airplane engines and elevators are not
powered by hand. Complex machines often use the energy stored in chemical substances,
such as airplane fuel or the energy stored in electricity, to provide the necessary force to do
work. An airplane engine uses the combustion, or rapid burning, of airplane fuel to power the
engine that turns the propeller. An elevator uses large engines, usually powered by electricity,
to pull cables that raise and lower the elevator car. Electricity also powers the levers that help
open and shut the elevator doors.

B Mechanical Advantage and Friction

Measuring the mechanical advantage (MA) is a mathematical way to determine how much a
machine affects the amount of force needed to do work. Scientists find the mechanical
advantage of a machine by dividing the force the machine delivers by the effort put into the
machine. The theoretical, or ideal, mechanical advantage of a machine is the advantage it
would produce if the machine were perfect. In simple machines, the main source of
imperfection is friction. Friction results from two bodies moving against each other in different
directions. Friction always opposes motion and makes doing work harder. Since friction is
present in almost every machine, the actual mechanical advantage is always less than the
theoretical mechanical advantage.

Because simple machines increase mechanical advantage by increasing the distance over
which the effort is applied, one way to compute theoretical mechanical advantage is to divide
the distance the effort is applied by the distance the load actually travels. For example, raising
a load 5 m (16 ft) off the ground is easier if the load is moved up a gradual slope, or an
inclined plane, rather than lifted straight up. Moving the load along a 10-m (32-ft) inclined
plane would provide a mechanical advantage of 10 divided by 5, or 2. This means that only
half as much effort was needed to raise the load, even though the total work done was the same.
Because of the inclined plane, however, the load needed to be pushed twice as far to end up 5
meters above the ground.
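The ramp example above translates directly into code. The 10-m plane and 5-m rise are the article's figures; the 400-N load weight is a hypothetical value added to show the force reduction:

```python
def inclined_plane_ma(plane_length, rise_height):
    """Theoretical MA: effort distance divided by load distance (the rise)."""
    return plane_length / rise_height

def effort_needed(load_weight, plane_length, rise_height):
    """Force needed to push the load up the plane, ignoring friction."""
    return load_weight / inclined_plane_ma(plane_length, rise_height)

print(inclined_plane_ma(10, 5))    # 2.0 -- the article's MA of 2
print(effort_needed(400, 10, 5))   # 200.0 -- half the 400-N weight
```

A real ramp would need somewhat more force than this, since friction reduces the actual mechanical advantage below the theoretical value.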

C Efficiency

Another factor that people sometimes compute for machines is their efficiency, or the ratio of
the work that results to the amount of work put into the machine. The efficiency of a machine
is usually expressed as a percentage and can vary from 5 percent to 95 percent. A perfect
machine would be 100 percent efficient. Most simple machines are very efficient, but they
always lose some efficiency due to friction. An automobile engine is much less efficient
because much of the energy used to move the crankshaft is lost to friction in the form of heat
dissipating from the engine.
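The efficiency ratio defined above is a one-line calculation; the 90 J out of 100 J figures below are hypothetical, standing in for a fairly efficient simple machine:

```python
def efficiency_percent(work_out, work_in):
    """Efficiency: work delivered as a percentage of work put in."""
    return 100 * work_out / work_in

# A machine that returns 90 J of useful work for every 100 J of effort:
print(efficiency_percent(90, 100))   # 90.0
```

The difference between the two work figures is the energy lost to friction, mostly as heat.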


The four simple machines each function in different ways, but they all change the direction or
the amount of effort put into them. All four of these machines can be used to decrease the
amount of force needed to do work or to change the direction of the force. The wheel and axle
and some levers can also be used to increase the speed of performance of a task, but doing so
always increases the amount of force needed.

A Inclined Plane

Ramps and staircases are simple examples of inclined planes. An inclined plane is an object
that decreases the effort needed to lift an object by increasing the distance over which the
effort is applied. This increase in distance allows a person to move a large object to a certain
height while applying less force than would otherwise be needed. (Without the plane, a person
would need to lift with a force equal to the entire weight of the object.) The tradeoff is that
with the inclined plane, the person must move the object a farther distance. An inclined plane
also changes the direction—from straight up to along the angle of the plane—of the effort
applied. The amount of work done is the same whether the person lifts the object straight up
or along an inclined plane.

The MA of an inclined plane equals the length of the plane divided by the height to which the
object is raised. A long inclined plane at a small angle has a greater mechanical advantage
than a steep inclined plane, because the effort is applied over a greater distance. A wedge is a
double inclined plane, with a plane on each side. Wedges are often used to split wood,
changing the downward direction of the force from a sledgehammer to a sideways force
toward the wood being split.

A screw is a form of inclined plane in which the plane is wrapped around an axis, or pole. The
MA of a screw is related to the pitch of the threads (the distance along the axis of the screw
from one thread to the next) and the diameter of the axis. There are two different types of
screws: fastening screws and lifting screws. Fastening screws are used to join things together.
Examples of fastening screws are wood or metal screws, which have threads that dig into the
materials being joined. The materials are held together by a combination of friction on the
threads and compression of the screw by the materials. Other screws, sometimes called
machine screws or bolts, have threads that are matched by the threads on the inside of a nut.

Lifting screws are used to lift loads or to exert forces on other bodies. An example of a lifting
screw is the screw jack used to change tires on a car. Lifting screws are usually lubricated to
reduce friction, but some friction with lifting screws is helpful so that the screw can safely hold
the load.

B Lever

One of the most commonly used simple machines is the lever. A seesaw is an example of a
lever. The human arm is actually a lever, and the muscles apply the force needed to lift weight
or move objects. A lever consists of a bar that rotates around a pivot point, which is called the
fulcrum. The force applied by the user is the effort. The object being lifted is called the load.
There are three classes of levers, which vary in the placement of the effort, the load, and the
fulcrum along the bar. In a Class 1 lever, the fulcrum lies between the effort and the load, as
in a seesaw. In a Class 2 lever, the fulcrum lies at one end, the effort is applied at the other
end, and the load is in the middle, as in a wheelbarrow. In a Class 3 lever, the fulcrum is
again at one end, but the load is at the other end, and the effort is applied in the middle. The
human forearm is a Class 3 lever. The elbow is the fulcrum, and the forearm muscles apply
the effort between the elbow and hand. Tweezers are another example of a Class 3 lever.

One of the limitations of levers is that they only operate through relatively small angles. The
MA of a lever is the distance from the fulcrum to the point where the force is applied divided
by the distance from the fulcrum to the load. The MA is maximized when the load is close to
the fulcrum and the effort is far from the fulcrum. In this case, a small effort can move a large load.

C Pulley

The pulley is a special type of wheel, called a sheave, which has a groove cut into the edge to
guide a rope, cable, or chain. Pulleys are used at the top of flagpoles and in some types of
window blinds. If a single pulley is used, the mechanical advantage is 1, and the only
advantage of using the pulley is that the direction of the force needed is changed. For
example, to raise window blinds, a downward pull on a cord is required.

When multiple pulleys are combined (in what is called a block and tackle), they can have
mechanical advantages greater than 1, because they increase the distance the rope travels,
thereby increasing the distance over which the effort is applied. The MA of a block and tackle
is equal to the number of strands of rope on the part of the block and tackle that is attached
to the load. Using a combination of pulleys that results in three strands of rope attached to the
load requires the user to pull the rope three times farther than the load actually moves. This
results in an MA of 3, which means that one-third as much effort is required to move the load.
The rope on a pulley causes a good deal of friction, and this limits the number of pulleys that
can be used.
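The block-and-tackle rule above, MA equal to the number of supporting rope strands, is easy to state in code. The three-strand case matches the article; the 300-N load is an assumed figure:

```python
def block_and_tackle(load_weight, supporting_strands, load_distance):
    """MA equals the number of rope strands supporting the load.
    Returns the effort required and the length of rope to pull."""
    ma = supporting_strands
    effort = load_weight / ma           # one-third the force for 3 strands...
    rope_pulled = load_distance * ma    # ...but three times the rope
    return effort, rope_pulled

effort, rope = block_and_tackle(300, 3, 2.0)
print(effort, rope)   # 100.0 6.0 -- 100 N of pull, 6 m of rope, to raise 300 N by 2 m
```

As with every simple machine, the reduced force is paid for in extra distance, here extra rope.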

D Wheel and Axle

The wheel and axle is similar in appearance to a pulley, with one major difference: the wheel
is fixed to the axle, as in the steering wheel of a car. A user applies effort to the large outer
wheel of the steering wheel to move the load at the axle. The MA of a wheel and axle is equal
to the radius of the wheel divided by the radius of the axle. The radius of the wheel, and
therefore its circumference, is usually much larger than the radius of the axle. Therefore, the
distance over which the effort is applied is much greater than the distance the load, which is
placed at the axle, moves. The difference in the sizes of the wheel and axle can result in a
large mechanical advantage. Some common examples of a wheel and axle are a doorknob and
a round water faucet handle.
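The wheel-and-axle MA is the ratio of the two radii. The 20-cm wheel and 2.5-cm axle below are illustrative values, roughly in the range of a large doorknob or valve handle:

```python
def wheel_and_axle_ma(wheel_radius, axle_radius):
    """MA of a wheel and axle: wheel radius divided by axle radius."""
    return wheel_radius / axle_radius

# A 20-cm wheel turning a 2.5-cm axle (any consistent units work):
print(wheel_and_axle_ma(20, 2.5))   # 8.0
```

A force applied at the rim is multiplied eightfold at the axle, which is why a round faucet handle is so much easier to turn than the bare valve stem.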


Many everyday objects are really combinations of simple machines. Such combinations are
known as complex machines. The doorknob is a wheel and axle system that transfers the force
applied by a person to a system of levers. The levers move the bolt and unlatch the door. A
pair of pliers is really two Class 1 levers with the same fulcrum (the pivot pin). Pliers usually
have a mechanical advantage of 5 or higher. A pair of scissors is a pair of pliers with wedges
as the cutting edge. Cutting something thick or hard is easier when the scissors are opened
wide and the object is placed near the pivot pin. This placement decreases the distance
between the load and the fulcrum, giving the scissors a higher MA than if the cutting was done
near the tip of the scissors.

Some complex machines are very complicated. An automobile is one such machine. The
engine contains many levers, wheels and axles, and pulleys. The whole engine is held together
by threaded bolts, which are a form of inclined plane. The transmission uses gears, which are
a form of wheel and axle with specially shaped teeth on the outside of the wheels. Two gears
fit together and transfer force and power from one gear shaft to another. By choosing the size
of the gears, the speed and direction of the rotation of the axles can be controlled.

Even devices that do not seem to be mechanical use simple machines. A computer, which is
thought of as an electronic device, has a cooling fan. This fan is a complex machine in which
the motor shaft turns the fan, which is a form of wheel. The disk drive uses a wheel and axle
to turn the disk and a system of levers to position the heads that read and write the data on
the disk.

The history of machines dates back thousands of years. Although the date of the first use of
simple machines is not known, the lever is believed to be the first simple machine that was
utilized by humans. However, someone choosing a long, gradual approach up a mountain
rather than walking up a steeper, shorter path would have been taking advantage of an
inclined plane. The first levers were probably branches or logs used to lift heavy objects.
People used a counterbalanced lever called a shadoof in ancient Egypt for lifting irrigation
water. People also used such a device for lifting soldiers over battlements. Metal or stone
wedges have been used since ancient times for splitting wood. People used wooden wedges to
split rocks by placing dry wooden wedges into cracks in rocks and then allowing the wedges to
swell by absorbing water. Historians believe the people of ancient Mesopotamia (an early
civilization near modern-day Iraq) used wheels as early as 3500 BC. Chariots in Asia Minor
used spoked wheels, which were lighter than solid wheels, as early as 2000 BC. The Greek
inventor Archimedes (287-212 BC) developed a screw-type device known as Archimedes’ screw
for raising water. Some modern water pumps still use this principle. According to legend,
Archimedes also used a block and tackle to pull ships onto dry land.

Machines can transform natural energy, such as wind and falling water, into work.
Waterwheels, first used in ancient Greece and Rome, and later adopted by Europeans in the
12th century, used falling water to turn large wheels (see Waterpower).
The windmill also uses the same wheel and axle principle to magnify and change the direction
of force to do work. Grinding wheels connected to waterwheels can grind grain for making
flour or power large saws for sawing wood. Pumps connected to windmills transform the rotary
motion of a windmill into reciprocating (back and forth) motion, which is used to pump water
from the ground. Waterwheels and windmills can also be connected to electrical generators to
produce electricity.

Complicated machines such as the power loom (patented in 1786) helped drive the
improvements seen in Great Britain during the first Industrial Revolution at the end of the
18th century. Later Industrial Revolutions elsewhere brought about the invention of even more
complex machines, such as the cotton gin (used to separate cotton fibers from seeds), the
mechanical reaper (used to cut grain), and the automobile.

Contributed By:
Odis Hayden Griffin, Jr.


Lever, simple machine consisting of a rigid bar that rotates about a fixed point, called a
fulcrum. Levers affect the effort, or force, needed to do a certain amount of work, and are
used to lift heavy objects. To move an object with a lever, force is applied to one end of the
lever, and the object to be moved (referred to as the resistance or load) is usually located at
the other end of the lever, with the fulcrum somewhere between the two. By varying the
distances between the force and the fulcrum and between the load and the fulcrum, the
amount of effort needed to move the load can be decreased, making the job easier.

Physicists classify the lever as one of the four simple machines used to do work. (The other
three are the pulley, the wheel and axle, and the inclined plane.) Work is defined in physics as
the result of a force, such as a person lifting, that moves an object over a distance. A common
example of a lever is the seesaw. The human arm is also a lever, where the elbow is the
fulcrum and the muscles apply the force.


A lever makes work easier by reducing the force needed to move a load. Work, in physics, is
the product of the force used to lift a load multiplied by the distance the force, or effort, is
applied. This relationship can be written mathematically as:

Work = Force × Distance

The amount of work needed to move an object a given distance always remains the same
except when friction is present. The lever, like all simple machines, makes doing work easier
by reducing the force needed to move an object. In order to reduce the force needed, the
distance over which the force is applied must be increased.

To increase this distance, the load to be moved must be close to the fulcrum and the force
must be applied far from the fulcrum. A good example is a claw hammer used to pry nails
loose. The user’s hand applies force to the handle at one end of the lever. The head of the
hammer is the fulcrum, and the nail at the other end of the lever is the load to be moved. The
nail is much closer to the fulcrum than is the hand applying the force. Since the hand is farther
away from the fulcrum, the force travels a greater distance than does the load as the nail is
pried loose. The same amount of work would have been done if the nail had been pulled
directly out by hand. However, by using the lever the force was spread out over a greater
distance, and so less force was needed. Another example is a seesaw. The force of a smaller
person can balance and even lift the load of a larger person as the smaller person moves
farther away from the fulcrum.

The mechanical advantage (MA) of a lever tells how much the lever magnifies effort. The
greater the MA, the less the effort needed to move a load. The MA of a lever is the ratio of the
distance the force travels to the distance the load travels. In practical terms, the MA is the
distance of the force to the fulcrum divided by the distance of the load to the fulcrum.
Depending on the class of lever and the location of the fulcrum, the MA may be less than or
greater than 1.
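
The ratio described above can be sketched as a short calculation. The arm lengths and load used here are illustrative assumptions, not values from the text:

```python
def lever_ma(effort_arm, load_arm):
    """MA of a lever: distance from effort to fulcrum divided by
    distance from load to fulcrum."""
    return effort_arm / load_arm

def effort_needed(load, effort_arm, load_arm):
    """Force required to balance a load, ignoring friction."""
    return load / lever_ma(effort_arm, load_arm)

# Illustrative numbers: effort applied 90 cm from the fulcrum,
# load 30 cm from the fulcrum, load weighing 600 N.
ma = lever_ma(90, 30)               # MA = 3.0
force = effort_needed(600, 90, 30)  # only 200 N of effort is needed
```

Tripling the effort arm relative to the load arm cuts the required force to a third, just as the prying-hammer example suggests.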


There are three different classes of levers, depending on the arrangement of the force, the
load, and the fulcrum along the lever bar. Each class of lever affects force in a different way,
and each class has different applications.

A Class 1 Levers

The class 1 lever has the fulcrum between the force and the load, as in a seesaw. When two
people of equal weight use the seesaw, they position themselves an equal distance from the
fulcrum, and the system is balanced. When a heavier person sits on one end, that person
usually moves toward the center, which gives a mechanical advantage to the lighter person so
that the system is again in balance. It is possible for a class 1 lever to have a significant
mechanical advantage.

B Class 2 Levers

The class 2 lever has the fulcrum at one end, the force at the other end, and the load in the
middle. A common example is the wheelbarrow, where the wheel is the fulcrum, the load rests
within the box, and the force is the lift supplied by the user. A class 2 lever always has a
mechanical advantage of greater than 1. To reduce the force required by the user even more,
the best wheelbarrow design is one where the wheel is directly under the load, reducing the
distance from the load to the fulcrum almost to zero. Many wheelbarrows and garden carts are
designed in that manner to make them easy for the user to move.

C Class 3 Levers

A class 3 lever has the fulcrum at one end, the load at the other end, and the force in the
middle. The human forearm is a class 3 lever. The elbow is the fulcrum, and the muscles of
the forearm apply the force between the elbow and the hand. The class 3 lever always has a
mechanical advantage of less than 1, because the load travels a greater distance than the
force travels. Consequently, moving the load requires more effort than it would without the lever. Although class 3 levers increase the effort needed, they are useful for increasing the speed at which a load is moved. A baseball bat and a broom are also examples of class 3 levers, in which a greater effort moves a smaller load at a greater speed.
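
The trade-off between force and speed in a class 3 lever can be sketched numerically. The forearm distances below are illustrative assumptions, not anatomical measurements:

```python
def class3_ma(effort_dist, load_dist):
    """MA of a class 3 lever: the effort sits between fulcrum and load,
    so effort_dist < load_dist and the MA is less than 1."""
    return effort_dist / load_dist

def speed_gain(effort_dist, load_dist):
    """The load end moves 1/MA times farther, and so faster, than the effort."""
    return load_dist / effort_dist

# Illustrative forearm: muscle attaches 5 cm from the elbow (fulcrum);
# the hand holds the load 35 cm from the elbow.
ma = class3_ma(5, 35)      # about 0.14, so effort is NOT magnified
gain = speed_gain(5, 35)   # the hand moves 7 times faster than the muscle
```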


The first levers were probably branches or logs used to lift heavy objects, followed by sticks
used to till soil for planting crops. In both of those applications, the lever magnifies the force
applied by a human. Learning to use those simple tools led to the development of other
applications of the lever.

In addition to using human power as the force applied to the lever, people added weights so
that the force they had to exert was lessened. These weights are called counterweights. A
counterbalanced lever called a shadoof was used in ancient Egypt for lifting irrigation water
from the Nile River up onto land, and it is still used today. During the Middle Ages, attacking
armies used a similar device for lifting soldiers over fortress walls. The principle of the lever
was often utilized through the rotary motion of the wheel and axle. Waterwheels installed near
waterfalls used the continuous force of moving water to provide the necessary leverage to turn
large grindstones for grinding grain into flour.

A crowbar and the claw of a hammer used to pry loose nails are both common examples of
levers in action. Balance scales use levers to find the mass of an object. Complex machines
often use a series of levers to transfer force. The keys of a piano use levers to transmit force
from the keys to the hammers that strike the strings.

Pulley

Pulley, simple machine used to lift objects. A pulley consists of a grooved wheel or disk within
a housing, and a rope or cable threaded around the disk. The disk of the pulley rotates as the
rope or cable moves over it. Pulleys are used for lifting by attaching one end of the rope to the
object, threading the rope through the pulley (or system of pulleys), and pulling on the other
end of the rope.

A single fixed pulley changes the direction of the force applied to the end of the rope. A
common example of a pulley can be found at the top of a flagpole. Pulling down on the rope
causes the flag to go up because the pulley changes the direction of the force applied to the
flag. Multiple pulleys can change both the direction of the applied force and the amount of
force, so that less force is needed to lift an object. Construction cranes use multiple pulley
systems to reduce the amount of force needed to lift heavy equipment.

The pulley is one of the four simple machines (along with the lever, the wheel and axle, and
the inclined plane) used to do work. Work is defined in physics as the result of a force, such as
the effort of pulling on a rope, that moves an object across a distance. Pulleys reduce the
effort to lift an object by increasing the distance over which the effort is applied.


To lift any object, a person must do some work. Work is the product of the effort, or force,
applied to an object multiplied by the distance the force is applied. The relation of work to
force and distance can be shown as an equation:

Work = Force × Distance

A pulley makes work easier by increasing the distance over which effort is applied. Pulleys
increase distance by requiring additional rope to be pulled to lift an object. Increasing the
distance reduces the amount of force needed for the job. By changing the direction of a force,
pulleys make it easier to apply the force because it is more convenient to pull down than to
pull up. Combining pulleys increases the amount of rope needed to lift an object, so heavy
loads can be lifted with even less effort.

Mechanical advantage (MA) is a term that describes how much a machine magnifies effort.
The greater the MA, the less the effort needed to lift a given load. There are two types of MA:
theoretical and actual. Theoretical MA is the MA most commonly referred to. It is the MA a
machine would have if it were perfect. The actual MA, which is always less than theoretical
MA, takes into account imperfections in simple machines. The main source of imperfection is
friction, the result of two bodies rubbing against each other. Friction always opposes motion,
and is present to some degree in almost every machine. Friction is a major problem in pulleys
because of the weight on the rope and the movement of the rope on the pulley. Lubricants
and bearings are often used in pulleys to reduce friction.

MA is generally determined by dividing the distance the effort travels by the distance the load
travels. The higher the MA, the easier it is to do work. A single fixed pulley, such as that at the
top of a flagpole, has a theoretical MA of 1, which means that for each length of rope the user pulls in, the flag rises the same distance. Effort is not magnified in this case. The load that can
be lifted is equal to the force that is applied by the user. The primary benefit of a single pulley
is to change the direction of the force or to move a load to a point (such as the top of a
flagpole) that cannot be reached by the user. In reality, the actual MA is slightly less than 1
because of the friction of the rope against the pulley and the friction between the pulley and
the axle on which it turns.

Pulleys can offer MAs of greater than 1 if they are movable. A movable pulley is one that is
attached to the load to be lifted and therefore moves with the load as the rope is pulled. Even
a single pulley, when placed on the object to be moved, provides an MA of 2, meaning that
twice the load can be lifted with the same amount of effort. The MA of a movable pulley (or a
system of pulleys with a movable part) equals the number of strands of rope coming from the
movable part (the load being lifted).

A movable pulley can be used to lift a heavy load from the bottom of a cargo ship up to the
deck. For a single movable pulley to work, one end of the rope is tied to a fixed anchor on the
deck. The rope leads from the anchor down through the pulley (which is attached to the load),
and back up to the user. Since both strands of rope coming from the pulley equally support
the load, any effort applied is doubled. Since a pulley system with an MA of 2 increases the
force by a factor of 2, the pulley system must also double the distance the effort travels.
Therefore, in order to raise a load a given distance, the user must pull and take in twice as
much rope.
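
The strand-counting rule above can be sketched as a small calculation; the 3 m lift height is an illustrative assumption:

```python
def pulley_ma(supporting_strands):
    """Theoretical MA of a pulley system = number of rope strands
    supporting the movable block."""
    return supporting_strands

def rope_to_pull(lift_height, supporting_strands):
    """Raising the load a given height takes MA times as much rope."""
    return lift_height * supporting_strands

# The single movable pulley from the text: two strands support the load,
# so the MA is 2, and raising a crate 3 m takes in 6 m of rope.
ma = pulley_ma(2)
rope = rope_to_pull(3, 2)
```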


Systems of pulleys have been used for centuries to move loads. Two common types of pulley
systems are the block and tackle and the chain hoist. Chain hoists are usually operated by
hand, while a block and tackle system is often used with an engine or motor.

A Block and Tackle

When several movable and fixed pulleys are used together, the entire system is usually called
a block and tackle. Block and tackle systems are commonly used on sailing ships to lift heavy
sails. The term block refers to the case that houses the pulleys side by side and holds the axle
of the pulleys in place. Tackle is a term traditionally used to refer to a sailing ship’s rigging,
which was usually made of rope. Thus the block and tackle consists of a system of pulleys in
their housings and a rope used to apply the forces. The MA of a block and tackle is equal to
the number of strands of rope coming from the movable set of pulleys attached to the load.

A block and tackle typically houses several pulleys, and can increase MA considerably. On
sailing ships, a block and tackle is used to apply forces to another block and tackle to gain an
even greater MA. This is often necessary because of the large friction losses in such systems,
which are usually made of wood with some metal parts. By using these devices, sailors can
exert large forces. They will, however, have to pull a greater length of rope to accomplish this.

B Chain Hoist

A chain hoist is a pulley system joined together by a closed loop of chain that is pulled by
hand. Chain hoists are sometimes used to lift automobile engines out of cars. The pulleys on a
chain hoist have teeth that hold the chain, much like the sprockets that hold a bicycle chain in
place. A chain hoist is made up of two sections. The top has a large pulley and a small pulley
joined side by side on the same axle. The large and small pulleys turn together as a unit. The
bottom section of a chain hoist is a movable pulley attached to the load.

The chain hangs down from the large pulley on one side, and then threads back up around the
small pulley, down through the movable pulley, and back up to the large pulley. When a user
pulls on the chain hanging down from the large pulley, that pulley pulls in chain from the
movable pulley. The chain threading through the movable pulley is fed from the small pulley
on top. When the chain is pulled, the large pulley brings in more chain than the small pulley
lets out, and so the load is raised. Since the effort travels a greater distance than the load, the
chain hoist multiplies force. The MA of a chain hoist depends on the diameters of the large and
small pulleys.
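
For the common differential chain hoist, the theoretical MA works out to 2R/(R − r), where R and r are the radii of the large and small top pulleys; the closer the two radii, the greater the advantage. The radii below are illustrative assumptions, not values from the text:

```python
def chain_hoist_ma(large_radius, small_radius):
    """Theoretical MA of a differential chain hoist: 2R / (R - r).
    Nearly equal pulley radii give a very large MA."""
    return 2 * large_radius / (large_radius - small_radius)

# Illustrative radii: a 10 cm large pulley and a 9 cm small pulley.
ma = chain_hoist_ma(10, 9)   # MA = 20
```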


As is the case with all the simple machines, the origin of the pulley is unknown. When early
peoples lifted heavy objects by throwing vines or other crude ropes over tree limbs, they used
the idea of a single fixed pulley to change the direction of a force. But since there was no
wheel to turn, this use resulted in considerable friction. It is believed that by 1500 BC people in
Mesopotamia used rope pulleys for hoisting water. Legend has it that the Greek inventor
Archimedes (287-212 BC) used a block and tackle system to pull ships onto dry land.

Modern pulley systems are often combined with motors to create hoists for lifting heavy loads.
By using a motor, the user only has to push a button to lift or lower the load. Construction
cranes and cranes used at shipyards move heavy loads using block and tackle systems
connected to powerful motors. An elevator in a building uses a pulley system to raise and
lower the elevator cars.
Inclined Plane

Inclined Plane, simple machine, consisting of a ramp or a similar wedge-shaped device, that
makes doing a given amount of work easier. In physical terms, work is the result of a force,
such as the effort of pushing or pulling something, that moves an object over a distance. An
inclined plane makes it easier to lift heavy objects by enabling a person to apply the necessary
force over a greater distance. The same amount of work is accomplished in lifting the object
with or without the inclined plane, but because the inclined plane increases the distance over
which the force is applied, the work requires less force.

The inclined plane is one of the four simple machines (along with the lever, the wheel and
axle, and the pulley). Two other simple machines, the screw and the wedge, are really
alternate forms of the inclined plane. One of the most common examples of an inclined plane
is a staircase, which allows people to move within a building from one floor to another with
less effort than climbing straight up a ladder would require. Some jacks that are used to lift
cars use threaded screws. A sharp knife is an everyday example of a wedge.


An inclined plane makes doing work easier by changing both the direction and the amount of
effort that are used to lift an object. Work, in physics, is defined as the amount of force
applied to an object multiplied by the distance over which the force is applied. Mathematically,
this can be expressed by the following equation:

Work = Force × Distance

When lifting an object is the work being done, the force needed is the effort required to lift the
object, and the distance corresponds to the distance the object is lifted. Rather than lifting an
object straight up, an inclined plane allows a person to lift an object gradually (at an angle)
over a greater distance. By increasing distance, the inclined plane decreases the amount of
force needed to do the same amount of work without the plane.

The mechanical advantage (MA) of an inclined plane measures how much the plane magnifies
the effort applied to the machine. There are two kinds of MA: theoretical and actual.
Theoretical MA is the MA a machine would have if it were perfect. All machines, however, lose
some of their MA to friction, a resistance created between objects when they move against
each other. Friction makes the process of moving objects, and therefore doing work, more
difficult. The actual MA of a machine is less than the theoretical MA because of friction.

The MA of an inclined plane without any friction is equal to the length of the plane divided by
the height of the plane. A ramp that is twice as long as it is high has a mechanical advantage
of 2. This means that the ramp doubles the effort applied by the user, or that the user needs
to apply only half as much effort to lift an object to a desired height as he or she would
without the ramp. Increasing the ratio of the length of the ramp to the height of the ramp
decreases the effort needed to lift an object. This idea explains why climbing up a steep hill
takes more effort (and seems more difficult) than walking up a longer, more gradual path to
the same height as that of the steep hill. The longer the inclined plane, the larger the MA will
be. If the length of a ramp were equal to its height, the ramp would simply run straight up, like a vertical ladder. In this case, the mechanical advantage would be 1, which means the ramp would not magnify the user’s effort.
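
The length-over-height rule can be sketched as a calculation; the ramp dimensions and load are illustrative assumptions:

```python
def ramp_ma(length, height):
    """Theoretical MA of a frictionless inclined plane: length / height."""
    return length / height

def effort_needed(weight, length, height):
    """Force needed to push a load up the ramp, ignoring friction."""
    return weight / ramp_ma(length, height)

# Illustrative numbers: a 4 m ramp up to a 1 m high loading dock.
ma = ramp_ma(4, 1)                 # MA = 4.0
force = effort_needed(800, 4, 1)   # an 800 N barrel needs only 200 N
```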

Friction is a phenomenon that reduces the efficiency of all machines. Walking up an inclined
plane or rolling a load (such as a barrel) up a plane creates little friction, and the actual MA is
close to the theoretical MA. However, sliding a load (especially a flat load such as a crate) up a
plane creates friction and causes the plane to lose much of its MA. Wheels can be added to the
load to decrease friction. People also frequently build inclined planes with small rollers or
casters built into the plane to reduce friction.


The screw and the wedge are common adaptations of the inclined plane. A screw is an inclined
plane wrapped around an axis, or pole. The edge of the inclined plane forms a helix, or spiral,
around the axis. The mechanical advantage of a screw is related to the circumference of the
screw divided by the pitch of the threads. The pitch of a thread is the distance along the axis
of the screw from one thread to the next. Since the pitch is generally small compared to the
circumference, large mechanical advantages can be achieved by using screws. Screws are
often used to raise objects, and some jacks used to lift automobiles rely on screws. A jack has
a large screw attached to a small platform, which is placed under a vehicle. Turning the screw
many times produces a small amount of vertical lift on the platform, and raises the
automobile. The screw requires a lot of turning, which equates with effort applied over a long
distance; this allows heavy loads to be lifted with a small amount of effort. Screws are also
useful as fastening devices. Screws driven straight into wood or other materials, as well as threaded nuts and bolts, take advantage of the friction that results from the contact between the inclined plane and other objects. These devices use friction to hold things together.
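
The circumference-over-pitch relationship for a screw can be sketched numerically. The jack-screw dimensions here are illustrative assumptions:

```python
import math

def screw_ma(diameter, pitch):
    """Theoretical MA of a screw: circumference of the screw divided
    by the pitch (distance between adjacent threads)."""
    return math.pi * diameter / pitch

# Illustrative jack screw: 24 mm diameter, 3 mm between threads.
# One full turn applies effort over about 75 mm of rim travel while
# lifting the load only 3 mm, for an MA of roughly 25.
ma = screw_ma(24, 3)
```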

A wedge is another form of inclined plane. A wedge is essentially a double inclined plane,
where two planes are joined at their bases. The joined inclined planes form a blunt end that
narrows down to a tip. Wedges transfer downward effort applied to the blunt edge of the
wedge out to the sides of the wedge to help it cut through an object. Effort is applied directly
to the wedge, which differs from an inclined plane, where the effort travels along the plane.
Wedges are often used to split materials such as wood or stone. Since there is much friction
involved, the mechanical advantage of a wedge is difficult to determine. The main benefit of
the wedge is changing the direction of effort to help split or cut through an object. A knife is
also a form of wedge. The wedge shape of the knife edge helps the user cut through material.


The inclined plane is undoubtedly one of the first of the simple machines people ever used. A
person walking up a gradual path to the top of a mountain rather than climbing straight up a
steep face is taking advantage of the principle of the inclined plane. There are indications that
the Egyptians created earthen ramps to raise huge blocks of stone during the construction of
the pyramids, from about 2700 BC to 1000 BC. Evidence from drawings of that time indicates
that the Egyptians used a lubricant, probably milk, to reduce the sliding friction and thus
increase the efficiency of the inclined planes.

People used wedges in ancient times to split wood, transferring the force they applied to the
blunt edge out to the sides of the wedge. People also used wooden wedges in prehistoric times
to split rocks. They placed dry wooden wedges into cracks in rocks and then allowed the
wedges to swell by absorbing water. The resulting pressure in the cracks caused the rocks to
split. Screws were used in ancient times as lifting devices. Historians believe that Greek
inventor Archimedes (287-212 BC) invented a screw-type device (known as Archimedes’
screw) for raising water. It consists of a cylinder with a wide-threaded screw inside. The
bottom end of the cylinder is set in water, and turning the screw lifts water up the cylinder to
a higher level. This principle is still used in some pumps today.
Wheel and Axle

Wheel and Axle, simple machine, consisting of a circular object—the wheel—with a shaft—the
axle—running through and attached to the center of the wheel. A round doorknob and a round
water faucet are both examples of wheels and axles. The much larger handle turns a much
smaller axle to move a door latch, in the case of a doorknob, or open a water valve, in the
case of a faucet. The wheel and axle is used to make doing a given amount of work easier.
Work is the result of a physical force, such as the effort of pushing or pulling, that moves an
object over a distance. A wheel and axle makes work easier by changing the amount and
direction of the force applied to move (or in this case, turn) an object. The object to be moved
is a resistance, or load, located at the axle. A force applied at the outer edge of the wheel
moves or turns the load located at the axle. The wheel enables a user to apply the force over
a greater distance than would be possible if the force were applied directly to the axle. In this
way, a wheel and axle reduces the effort needed to move a load.

The wheel and axle is one of the four simple machines (along with the lever, the pulley, and
the inclined plane). All simple machines change the amount of effort needed to do work, and
are the basis for all other machines. Another common example of a wheel and axle is the
steering wheel of a car, where the driver exerts a force on the outer edge of the wheel to
cause the load at the axle (the front wheels) to turn.


A wheel and axle makes work easier by changing the amount of force applied to a load. Work,
in physics, is defined as the amount of force applied to an object multiplied by the distance
over which that force is applied. Mathematically, the formula to compute work can be
expressed as:

Work = Force × Distance

For a wheel and axle, the work to be done is the moving or turning of a load, usually located
at the axle. The force needed is the effort required to turn the load, and the distance
corresponds to how far the wheel is turned as effort is applied. Because the circumference of
the wheel is always larger than the circumference of the axle, any effort applied to the wheel
will always move a greater distance than the load at the axle. The wheel and axle makes the
effort move a greater distance than the load, and so less effort is needed to move the load.

The mechanical advantage (MA) of a wheel and axle measures how much the machine
multiplies the force applied by the user. There are two kinds of MA: theoretical and actual.
Theoretical MA is the MA that would exist if the machine were perfect, but all machines lose
some of their MA to friction. Friction is a resistance created between objects when they move
against each other. Friction makes the process of moving objects, and therefore doing work,
more difficult. Theoretical MA is the one most commonly referred to, since actual MA can be
difficult to calculate.

To find the MA of a simple machine, the distance the effort travels is divided by the distance
the load travels. If force is applied to the wheel, then the MA of a wheel and axle equals the
radius of the wheel divided by the radius of the axle. This will always produce an MA greater
than 1, since the force will always travel a greater distance on the larger wheel than will the
load at the smaller axle. A screwdriver is a type of wheel and axle. The wheel (the handle)
transmits the user’s force to the axle (the screwdriver shaft) to turn a screw. Turning the
larger handle of the screwdriver is much easier than trying to turn the smaller screw by itself.
The mechanical advantage of this type of wheel and axle can be very large.
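
The radius-ratio rule can be sketched as a calculation; the screwdriver dimensions are illustrative assumptions, not values from the text:

```python
def wheel_axle_ma(wheel_radius, axle_radius):
    """MA when force is applied at the wheel: R_wheel / R_axle."""
    return wheel_radius / axle_radius

# Illustrative screwdriver: a 16 mm handle radius turning a 2 mm shaft.
ma = wheel_axle_ma(16, 2)   # MA = 8.0
```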

In the previous examples, force applied at the wheel moved a heavy load at the axle. Force
can also be applied at the axle to move a load at the wheel. This requires more force to move
the wheel, but one benefit is that the wheel will move much faster. When force is applied at
the axle, the MA is the radius of the axle divided by the radius of the wheel. This produces an MA of less than 1, meaning that speed is gained at the expense of force. Wheels and axles used in this fashion
often obtain force generated by fuel-powered engines. The large blades of an airplane
propeller move much faster than the small axle in the middle, but it takes the power of an
engine to turn the axle.

If a wheel can rotate independently about the axle, then the device is not a true machine,
because it does not change force. However, freely rotating wheels and axles are used
frequently to reduce friction. Rolling is easier than pushing or dragging an object.


Wheels and axles are used in one form or another in most complex machines. Gears, such as
those used in a mechanical clock, are actually wheels with teeth around the edge. When one
gear turns, the other gear turns in the opposite direction. If the gears are the same size, they
turn at the same speed. However, if one gear is larger than the other, the smaller gear turns
faster than the larger gear.
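
The speed relationship between meshed gears can be sketched with tooth counts; the numbers below are illustrative assumptions:

```python
def output_speed(input_rpm, input_teeth, output_teeth):
    """Speed of a gear driven by another: the ratio is the inverse of
    the tooth ratio, and the negative sign marks the reversal of
    direction between meshed gears."""
    return -input_rpm * input_teeth / output_teeth

# Illustrative pair: a 40-tooth gear at 60 rpm driving a 20-tooth gear.
rpm = output_speed(60, 40, 20)   # the small gear turns twice as fast,
                                 # in the opposite direction
```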

Wheel and axle combinations also can be used with belts or chains (as on a bicycle) to
transmit the forces from one wheel to the other. A wheel that drives or is driven by a chain is
usually referred to as a sprocket. Belts, closed loops of rope or rubber, are often used in
automobiles to transmit the rotary power from the engine to fans or other devices.

Wheels and axles are also used to change the direction of applied force. The back and forth
motion (called reciprocating motion) of a piston in an engine can be changed into rotary
motion by connecting the piston to the edge of a wheel. The drive wheels of an old-fashioned
steam locomotive operate in this way. The pistons in an automobile engine are connected to a
crankshaft, a special type of axle that provides the rotary motion to the wheels of a car. The
process is also used in reverse so rotary motion can be changed to reciprocating motion. This
method is used to convert the rotary motion of an electric motor into the up-and-down motion
of a jigsaw blade or a sewing-machine needle.


Wheels and axles have been used for centuries to magnify force. The use of wheels to reduce
friction while moving objects was one of the most important inventions in human civilization,
because it made transportation much easier. Wheels used for transportation are believed to
have been used on carts in Mesopotamia as early as 3500 BC.

One of the first uses of the wheel as a tool was the potter’s wheel, usually made of stone and
used to make pottery. It was invented about the same time as the wheel used in
transportation. A likely early use of the fixed wheel and axle to multiply force was the winch,
which can be used with a rope to pull heavy objects with less effort. A small force applied at
the outer edge of a winch handle is changed into a large force at the axle. Winches can be
used to haul heavy buckets of water up from wells, or move other large objects. Windmills and
waterwheels (both forms of wheel and axle) were combined with gearing to make mills for
grinding grain. The wrench uses the principle of the wheel and axle to turn screws or tighten
bolts. Most mechanical devices make some use of the wheel and axle.
Propeller (mechanics)

Propeller (mechanics), mechanical device that produces a force, or thrust, along the axis of
rotation when rotated in a fluid, either a gas or a liquid. Propellers may operate in either air or water,
although a propeller designed for efficient operation in one of these media would be extremely
inefficient in the other. Virtually all ships are equipped with propellers, and until the
development of jet propulsion, virtually all aircraft, except gliders, were also propelled in the
same way. A propeller, mounted on a high-speed wheel geared to a generator, acts as a
windmill when placed in a wind current.

The propeller is essentially a screw that, when turned, pulls itself through the air or water in
the same way that a bolt pulls itself through a nut. Marine propellers are frequently termed
screws, and aircraft propellers are termed airscrews in Britain. Typical propellers consist of
two, three, or four blades, each of which is a section of a helix, which is the geometric form of
a screw thread. The distance that a propeller or propeller blade will move forward when the
propeller shaft is given one complete rotation, if there is no slippage, is called the geometric
pitch; this corresponds to the pitch, or the distance between adjacent threads, of a simple
screw. The distance that the propeller actually moves through the air or water in one rotation
is called the effective pitch, and the difference between effective and geometric pitch is called
slip. In general, an efficient propeller slips little, and the effective pitch is almost equal to the
geometric pitch; the criterion of propeller efficiency is not slip, however, but the ratio of
propulsive energy produced to energy consumed in rotating the propeller shaft. Aircraft
propellers are often operated at efficiencies as high as 86 percent, but marine propellers
operate at lower efficiencies.
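
The pitch and slip definitions above can be sketched as a calculation; the pitch, speed, and rotation rate are illustrative assumptions, not values from the text:

```python
def effective_pitch(forward_speed, revs_per_second):
    """Distance the propeller actually advances in one revolution."""
    return forward_speed / revs_per_second

def slip(geometric_pitch, eff_pitch):
    """Slip: the difference between geometric and effective pitch."""
    return geometric_pitch - eff_pitch

# Illustrative numbers: a 2.0 m geometric pitch, 90 m/s forward speed,
# 50 revolutions per second.
ep = effective_pitch(90, 50)   # 1.8 m actually advanced per revolution
s = slip(2.0, ep)              # about 0.2 m of slip per revolution
```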


An aircraft propeller blade is aerodynamically similar to a wing, which, when driven through
the air, creates lift and drag, perpendicular and parallel to the air velocity relative to a section
of the blade (see Aerodynamics; Airplane). When all the blade elements and the number of blades are accounted for, the forces created by the motion of the propeller resolve into two components: the thrust, acting in the direction of flight, and a component in the plane of rotation that must be overcome by the torque, or turning force, of the driving engine. The complete motion of a blade element involves a combination of the forward
velocity represented by the flight speed, and the peripheral velocity due to the rotation of the
blade. This simple concept of propeller action has been extensively refined by aerodynamicists
in recent years. Another method of analysis of propeller action is based on the changes in
momentum of the flow as it passes through the propeller disk. This approach was originally
used by the British engineer and naval architect William Froude but, in general, it is not as
comprehensive as the blade-element theory.

For a given rotational speed, the resultant velocity at a blade element increases in magnitude
as the forward speed is increased, while at the same time the angle of the resultant velocity
vector with the plane of rotation is also increased. Thus, if the blade has a fixed pitch, a
condition will eventually be reached at which the blade will produce little or no thrust. On the
other hand, as the forward speed is decreased, the angle between the velocity vector and the
blade will become so large as to cause the blade to stall, with a severe corresponding drop in
the blade's efficiency.

In order to adapt a given propeller to aircraft with different flight characteristics, adjustable-
pitch propellers are sometimes used, in which the blade can be rotated in the hub so as to
alter the effective pitch. This operation must be accomplished on the ground with the propeller
removed from the aircraft. A more effective procedure is to use a variable-pitch propeller with
the pitch or blade angle controllable in flight so as to maintain operating conditions very close
to the optimum. Propellers of this type are usually operated at a constant rotational speed by
means of either a hydraulic or electrical governing mechanism. Controllable-pitch propellers
are usually capable of being feathered, that is, the blade angle can be set parallel to the flight
direction, so as to prevent windmilling that could otherwise occur in the event of an engine
failure. The capability of setting the blade in a negative pitch condition may also be included in
the design so as to provide negative thrust and aerodynamic braking action in landing.

Modern propeller blades are usually made either of solid aluminum alloy or of hollow steel. The
propellers are equipped with deicing equipment. The propeller must be very precisely
balanced, both statically and dynamically. If, for example, a 57-g (2-oz) weight were attached
to the middle of one blade of a two-bladed propeller, and a 28.5-g (1-oz) weight were
attached to the tip of the other blade, the propeller would be in static balance, that is, it would
not rotate if the propeller shaft were placed on knife edges with the blades in any position; it
would not, however, be in dynamic balance, and would vibrate if rotated at high speed.
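The static-balance arithmetic in this example is easy to verify. The sketch below assumes a hypothetical blade radius and treats the middle of the blade as half that radius; static balance requires only that the mass-times-radius moments of the two weights about the shaft be equal.

```python
# Static balance check for the two-bladed propeller example: the moments
# (mass x radius) of the added weights about the shaft must be equal.
# The blade radius is a hypothetical value chosen for illustration.
def static_moment(mass_g, radius_cm):
    """Moment of a point mass about the shaft axis, in g*cm."""
    return mass_g * radius_cm

BLADE_RADIUS_CM = 100.0  # assumed hub-to-tip distance

# 57-g (2-oz) weight at the middle of one blade (half the radius)
moment_a = static_moment(57.0, BLADE_RADIUS_CM / 2)
# 28.5-g (1-oz) weight at the tip of the opposite blade (full radius)
moment_b = static_moment(28.5, BLADE_RADIUS_CM)

print(moment_a, moment_b)  # equal moments: statically balanced
```

Equal static moments mean the propeller rests in any position on knife edges; dynamic balance is a stricter condition, since it also depends on how the rotating centrifugal forces are distributed.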

The rotor of an autogiro or helicopter is essentially similar to an ordinary aircraft propeller
in that it consists of several blades, each shaped like an airfoil in cross section, and produces
lift. The blades are not twisted, but, like ordinary aircraft propeller blades, their pitch may be
varied.

A ship propeller operates in much the same way as the airplane propeller. In the ship
propeller, however, each blade is very broad (from leading to trailing edge) and very thin. The
blades are usually built of copper alloys to resist corrosion. The speed of sound in water is
much higher than it is in air, but because of the high frictional resistance of water, a ship's
top speed never approaches it. Although efficiencies as high as 77 percent
have been achieved with experimental propellers, most ship propellers operate at efficiencies
of about 56 percent. Clearance is also less of a problem on ship propellers, although the
diameter and position of the propeller are limited by the loss in efficiency if the propeller
blades come anywhere near the surface of the water. The principal problem of ship-propeller
design and operation is cavitation, the formation of vapor-filled cavities along parts of the
propeller blade, which leads to excessive slip, loss of efficiency, and pitting of the blades. It also causes
excessive underwater noise, a serious disadvantage on submarines.
Machine Tools

Machine Tools, stationary power-driven machines used to shape or form solid materials,
especially metals. The shaping is accomplished by removing material from a workpiece or by
pressing it into the desired shape. Machine tools form the basis of modern industry and are
used either directly or indirectly in the manufacture of machine and tool parts.

Machine tools may be classified under three main categories: conventional chip-making
machine tools, presses, and unconventional machine tools. Conventional chip-making tools
shape the workpiece by cutting away the unwanted portion in the form of chips. Presses
employ a number of different shaping processes, including shearing, pressing, or drawing
(elongating). Unconventional machine tools employ light, electrical, chemical, and sonic
energy; superheated gases; and high-energy particle beams to shape the exotic materials and
alloys that have been developed to meet the needs of modern technology.

Modern machine tools date from about 1775, when the English inventor John Wilkinson
constructed a horizontal boring machine for producing internal cylindrical surfaces. About 1794
Henry Maudslay developed the first engine lathe. Later, Joseph Whitworth speeded the wider
use of Wilkinson's and Maudslay's machine tools by developing, in 1830, measuring
instruments accurate to a millionth of an inch. His work was of great value because precise
methods of measurement were necessary for the subsequent mass production of articles
having interchangeable parts.

The earliest attempts to manufacture interchangeable parts occurred almost simultaneously in
Europe and the United States. These efforts relied on the use of so-called filing jigs, with
which parts could be hand-filed to substantially identical dimensions. The first true
mass-production system was created by the American inventor Eli Whitney, who in 1798 obtained a
contract with the U.S. government to produce 10,000 army muskets, all with interchangeable
parts.

During the 19th century, such standard machine tools as lathes, shapers, planers, grinders,
and saws and milling, drilling, and boring machines reached a fairly high degree of precision,
and their use became widespread in the industrializing nations. During the early part of the
20th century, machine tools were enlarged and made even more accurate. After 1920 they
became more specialized in their applications. From about 1930 to 1950 more powerful and
rigid machine tools were built to utilize effectively the greatly improved cutting materials that
had become available. These specialized machine tools made it possible to manufacture
standardized products very economically, using relatively unskilled labor. The machines lacked
flexibility, however, and they were not adaptable to a variety of products or to variations in
manufacturing standards. As a result, in the past three decades engineers have developed
highly versatile and accurate machine tools that have been adapted to computer control,
making possible the economical manufacture of products of complex design. Such tools are
now widely used.


Among the basic machine tools are the lathe, the shaper, the planer, and the milling machine.
Auxiliary to these are drilling and boring machines, grinders, saws, and various metal-forming
machines.

A Lathe

A lathe, the oldest and most common type of turning machine, holds and rotates metal or
wood while a cutting tool shapes the material. The tool may be moved parallel to or across the
direction of rotation to form parts that have a cylindrical or conical shape or to cut threads.
With special attachments, a lathe may also be used to produce flat surfaces, as a milling
machine does, or it may drill or bore holes in the workpiece.

B Shaper

The shaper is used primarily to produce flat surfaces. The tool slides against the stationary
workpiece and cuts on one stroke, returns to its starting position, and then cuts on the next
stroke after a slight lateral displacement. In general, the shaper can produce almost any
surface composed of straight-line elements. It uses a single-point tool and is relatively slow,
because it depends on reciprocating (alternating forward and return) strokes. For this reason,
the shaper is seldom found on a production line. It is, however, valuable for tool and die
rooms and for job shops where flexibility is essential and relative slowness is unimportant
because few identical pieces are being made.
C Planer

The planer is the largest of the reciprocating machine tools. Unlike the shaper, which moves a
tool past a fixed workpiece, the planer moves the workpiece past a fixed tool. After each
reciprocating cycle, the workpiece is advanced laterally to expose a new section to the tool.
Like the shaper, the planer is intended to produce vertical, horizontal, or diagonal cuts. It is
also possible to mount several tools at one time in any or all tool holders of a planer to
execute multiple simultaneous cuts.

D Milling Machine

In a milling machine, a workpiece is fed against a circular device with a series of cutting edges
on its circumference. The workpiece is held on a table that controls the feed against the
cutter. The table conventionally has three possible movements: longitudinal, horizontal, and
vertical; in some cases it can also rotate. Milling machines are the most versatile of all
machine tools. Flat or contoured surfaces may be machined with excellent finish and accuracy.
Angles, slots, gear teeth, and recess cuts can be made by using various cutters.

E Drilling and Boring Machines

Hole-making machine tools are used to drill a hole where none previously existed; to alter a
hole in accordance with some specification (by boring or reaming to enlarge it, or by tapping
to cut threads for a screw); or to lap or hone a hole to create an accurate size or a smooth
finish.

Drilling machines vary in size and function, ranging from portable drills to radial drilling
machines, multispindle units, automatic production machines, and deep-hole-drilling
machines. See Drill.

Boring is a process that enlarges holes previously drilled, usually with a rotating single-point
cutter held on a boring bar and fed against a stationary workpiece. Boring machines include jig
borers and vertical and horizontal boring mills.

F Grinders

Grinding is the removal of metal by a rotating abrasive wheel; the action is similar to that of a
milling cutter. The wheel is composed of many small grains of abrasive, bonded together, with
each grain acting as a miniature cutting tool. The process produces extremely smooth and
accurate finishes. Because only a small amount of material is removed at each pass of the
wheel, grinding machines require fine wheel regulation. The pressure of the wheel against the
workpiece can be made very slight, so that grinding can be carried out on fragile materials
that cannot be machined by other conventional devices. See Grinding and Polishing.

G Saws

Commonly used power-driven saws are classified into three general types, according to the
kind of motion used in the cutting action: reciprocating, circular, and band-sawing machines.
They generally consist of a bed or frame, a vise for clamping the workpiece, a feed
mechanism, and the saw blade.

H Cutting Tools and Fluids

Because cutting processes involve high local stresses, frictions, and considerable heat
generation, cutting-tool material must combine strength, toughness, hardness, and wear
resistance at elevated temperatures. These requirements are met in varying degrees by such
cutting-tool materials as carbon steels (steel containing 1 to 1.2 percent carbon), high-speed
steels (iron alloys containing tungsten, chromium, vanadium, and carbon), tungsten carbide,
and diamonds and by such recently developed materials as ceramic, carbide ceramic, and
aluminum oxide.

In many cutting operations fluids are used to cool and lubricate. Cooling increases tool life and
helps to stabilize the size of the finished part. Lubrication reduces friction, thus decreasing the
heat generated and the power required for a given cut. Cutting fluids include water-based
solutions, chemically inactive oils, and synthetic fluids.

I Presses

Presses shape workpieces without cutting away material, that is, without making chips. A
press consists of a frame supporting a stationary bed, a ram, a power source, and a
mechanism that moves the ram in line with or at right angles to the bed. Presses are equipped
with dies (see Die) and punches designed for such operations as forming, punching, and
shearing. Presses are capable of rapid production because the operation time is that needed
for only one stroke of the ram.


Unconventional machine tools include plasma-arc, laser-beam, electrodischarge,
electrochemical, ultrasonic, and electron-beam machines. These machine tools were
developed primarily to shape the ultrahard alloys used in heavy industry and in aerospace
applications and to shape and etch the ultrathin materials used in such electronic devices as
microprocessors.

A Plasma Arc

Plasma-arc machining (PAM) employs a high-velocity jet of high-temperature gas (see
Plasma) to melt and displace material in its path. The materials cut by PAM are generally
those that are difficult to cut by any other means, such as stainless steels and aluminum
alloys.

B Laser

Laser-beam machining (LBM) is accomplished by precisely manipulating a beam of coherent
light (see Laser) to vaporize unwanted material. LBM is particularly suited to making
accurately placed holes. The LBM process can make holes in refractory metals and ceramics
and in very thin materials without warping the workpiece. Extremely fine wires can also be
welded using LBM equipment.

C Electrodischarge

Electrodischarge machining (EDM), also known as spark erosion, employs electrical energy to
remove metal from the workpiece without touching it. A pulsating high-frequency electric
current is applied between the tool point and the workpiece, causing sparks to jump the gap
and vaporize small areas of the workpiece. Because no cutting forces are involved, light,
delicate operations can be performed on thin workpieces. EDM can produce shapes
unobtainable by any conventional machining process.

D Electrochemical

Electrochemical machining (ECM) also uses electrical energy to remove material. An
electrolytic cell is created in an electrolyte medium, with the tool as the cathode and the
workpiece as the anode. A high-amperage, low-voltage current is used to dissolve the metal
and to remove it from the workpiece, which must be electrically conductive. A wide variety of
operations can be performed by ECM; these operations include etching, marking, hole making,
and milling.

E Ultrasonic

Ultrasonic machining (USM) employs high-frequency, low-amplitude vibrations to create holes
and other cavities. A relatively soft tool is shaped as desired and vibrated against the
workpiece while a mixture of fine abrasive and water flows between them. The friction of the
abrasive particles gradually cuts the workpiece. Materials such as hardened steel, carbides,
rubies, quartz, diamonds, and glass can easily be machined by USM.

F Electron Beam

In electron-beam machining (EBM), electrons are accelerated to a velocity nearly three-fourths
that of light. The process is performed in a vacuum chamber to reduce the scattering of
electrons by gas molecules in the atmosphere. The stream of electrons is directed against a
precisely limited area of the workpiece; on impact, the kinetic energy of the electrons is
converted into thermal energy that melts and vaporizes the material to be removed, forming
holes or cuts. EBM equipment is commonly used by the electronics industry to aid in the
etching of circuits in microprocessors. See Microprocessor.

Microprocessor, electronic circuit that functions as the central processing unit (CPU) of a
computer, providing computational control. Microprocessors are also used in other
advanced electronic systems, such as computer printers, automobiles, and jet airliners.

Hand-Held Computer: The hand-held computer attests to the remarkable miniaturization
of computing hardware. The early computers of the 1940s were so large that they filled
entire rooms. Technological innovations, such as the integrated circuit in 1959 and the
microprocessor in 1971, shrank computers' central processing units to the size of tiny
silicon chips. (Photo Researchers, Inc.)

The microprocessor is one type of ultra-large-scale integrated circuit. Integrated circuits,
also known as microchips or chips, are complex electronic circuits consisting of
extremely tiny components formed on a single, thin, flat piece of material known as a
semiconductor. Modern microprocessors incorporate transistors (which act as electronic
amplifiers, oscillators, or, most commonly, switches), in addition to other components
such as resistors, diodes, capacitors, and wires, all packed into an area about the size of a
postage stamp.
Microprocessor: Microprocessors, also called silicon chips, are typically embedded in a
protective casing. The wires radiating from the silicon chip above connect to short metal
legs that are soldered into integrated circuit boards. (The Image Bank/Jean-Pierre Horlin)

A microprocessor consists of several different sections: The arithmetic/logic unit (ALU)
performs calculations on numbers and makes logical decisions; the registers are special
memory locations for storing temporary information much as a scratch pad does; the
control unit deciphers programs; buses carry digital information throughout the chip and
computer; and local memory supports on-chip computation. More complex
microprocessors often contain other sections—such as sections of specialized memory,
called cache memory, to speed up access to external data-storage devices. Modern
microprocessors operate with bus widths of 64 bits (binary digits, or units of information
represented as 1s and 0s), meaning that 64 bits of data can be transferred at the same
time.

A crystal oscillator in the computer provides a clock signal to coordinate all activities of
the microprocessor. The clock speed of the most advanced microprocessors allows
billions of computer instructions to be executed every second.


Because the microprocessor alone cannot accommodate the large amount of memory
required to store program instructions and data, such as the text in a word-processing
program, transistors can be used as memory elements in combination with the
microprocessor. Separate integrated circuits, called random-access memory (RAM)
chips, which contain large numbers of transistors, are used in conjunction with the
microprocessor to provide the needed memory. There are different kinds of random-
access memory. Static RAM (SRAM) holds information as long as power is turned on
and is usually used as cache memory because it operates very quickly. Another type of
memory, dynamic RAM (DRAM), is slower than SRAM and must be periodically
refreshed with electricity or the information it holds is lost. DRAM is more economical
than SRAM and serves as the main memory element in most computers.
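The refresh requirement that distinguishes DRAM from SRAM can be illustrated with a toy model. The retention time and tick counts below are invented for illustration and bear no relation to real hardware timings.

```python
# Toy model of a DRAM cell: stored charge leaks away and the bit is lost
# unless the cell is refreshed in time. RETENTION_TICKS is an invented
# constant, not a real hardware figure.
class DramCell:
    RETENTION_TICKS = 3  # charge survives this many ticks without refresh

    def __init__(self, bit):
        self.bit = bit
        self.ticks_since_refresh = 0

    def tick(self):
        """One unit of time passes; the stored charge leaks a little."""
        self.ticks_since_refresh += 1
        if self.ticks_since_refresh > self.RETENTION_TICKS:
            self.bit = None  # information lost

    def refresh(self):
        """Rewrite the stored charge, resetting the leak timer."""
        if self.bit is not None:
            self.ticks_since_refresh = 0

refreshed, neglected = DramCell(1), DramCell(1)
for _ in range(10):
    refreshed.tick()
    refreshed.refresh()  # periodic refresh preserves the bit
    neglected.tick()     # no refresh: the bit eventually decays

print(refreshed.bit, neglected.bit)  # 1 None
```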


A microprocessor is not a complete computer. It does not contain large amounts of
memory or have the ability to communicate with input devices—such as keyboards,
joysticks, and mice—or with output devices, such as monitors and printers. A different
kind of integrated circuit, a microcontroller, is a complete computer on a chip, containing
all of the elements of the basic microprocessor along with other specialized functions.
Microcontrollers are used in video games, videocassette recorders (VCRs), automobiles,
and other machines.


Manufacturing an Integrated Circuit
Beginning in the late 20th century, integrated circuits based on silicon chips shrank
rapidly in price and size while expanding in capacity. These advances in chip technology
contributed to a boom in the computer industry. The creation of a single silicon chip
requires hundreds of manufacturing steps. In this Scientific American article, Intel
Corporation president and chief operating officer Craig R. Barrett describes the chip
manufacturing process from design through completion.

All integrated circuits are fabricated from semiconductors, substances whose ability to
conduct electricity ranks between that of a conductor and that of a nonconductor, or
insulator. Silicon is the most common semiconductor material. Because the electrical
conductivity of a semiconductor can change according to the voltage applied to it,
transistors made from semiconductors act like tiny switches that turn electrical current on
and off in just a few nanoseconds (billionths of a second). This capability enables a
computer to perform many billions of simple instructions each second and to complete
complex tasks quickly.

The basic building block of most semiconductor devices is the diode, a junction, or
union, of negative-type (n-type) and positive-type (p-type) materials. The terms n-type
and p-type refer to semiconducting materials that have been doped—that is, have had
their electrical properties altered by the controlled addition of very small quantities of
impurities such as boron or phosphorus. In a diode, current flows in only one direction:
across the junction from the p- to n-type material, and then only when the p-type material
is at a higher voltage than the n-type. The voltage applied to the diode to create this
condition is called the forward bias. The opposite voltage, for which current will not
flow, is called the reverse bias. An integrated circuit contains millions of p-n junctions,
each serving a specific purpose within the millions of electronic circuit elements. Proper
placement and biasing of p- and n-type regions restrict the electrical current to the correct
paths and ensure the proper operation of the entire chip.
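The one-way conduction rule described above can be expressed as a small predicate. The 0.7-volt threshold is a typical figure for silicon diodes, included here as an assumption rather than something stated in the article.

```python
# Idealized diode: current flows only when the p-type side is at a higher
# voltage than the n-type side (forward bias). Real silicon diodes also
# need the difference to exceed a small threshold, assumed ~0.7 V here.
SILICON_FORWARD_V = 0.7  # typical silicon forward-voltage drop (assumed)

def diode_conducts(v_p, v_n, threshold=SILICON_FORWARD_V):
    """True if current can flow from the p-type to the n-type material."""
    return (v_p - v_n) > threshold

print(diode_conducts(5.0, 0.0))  # forward bias: conducts
print(diode_conducts(0.0, 5.0))  # reverse bias: blocks
print(diode_conducts(0.5, 0.0))  # forward but below threshold: blocks
```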


The transistor used most commonly in the microelectronics industry is called a
metal-oxide semiconductor field-effect transistor (MOSFET). It contains two n-type regions,
called the source and the drain, with a p-type region in between them, called the channel.
Over the channel is a thin layer of nonconductive silicon dioxide topped by another layer,
called the gate. For electrons to flow from the source to the drain, a voltage (forward
bias) must be applied to the gate. This causes the gate to act like a control switch, turning
the MOSFET on and off and creating a logic gate that transmits digital 1s and 0s
throughout the microprocessor.
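Treating the MOSFET as a voltage-controlled switch, as the paragraph describes, is enough to build logic gates. The sketch below is an idealized model, not a circuit simulation; the threshold and supply voltages are arbitrary assumptions, and the series pull-down arrangement loosely follows NMOS-style logic.

```python
# Idealized MOSFET-as-switch model: the channel conducts only when the
# gate voltage exceeds a threshold. Two such switches in series form the
# pull-down network of a NAND gate. Voltages are arbitrary assumptions.
THRESHOLD_V = 1.0          # assumed gate threshold voltage
HIGH, LOW = 5.0, 0.0       # assumed logic levels

def mosfet_on(gate_v):
    """An n-channel MOSFET conducts source-to-drain when the gate is high."""
    return gate_v > THRESHOLD_V

def nand(gate_a_v, gate_b_v):
    """Output is pulled low (0) only when both series transistors conduct."""
    pull_down = mosfet_on(gate_a_v) and mosfet_on(gate_b_v)
    return 0 if pull_down else 1

print(nand(LOW, LOW), nand(LOW, HIGH), nand(HIGH, LOW), nand(HIGH, HIGH))
# -> 1 1 1 0
```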


Microprocessors are fabricated using techniques similar to those used for other integrated
circuits, such as memory chips. Microprocessors generally have a more complex
structure than do other chips, and their manufacture requires extremely precise
fabrication techniques.

Economical manufacturing of microprocessors requires mass production. Several
hundred dies, or circuit patterns, are created on the surface of a silicon wafer
simultaneously. Microprocessors are constructed by a process of deposition and removal
of conducting, insulating, and semiconducting materials one thin layer at a time until,
after hundreds of separate steps, a complex sandwich is constructed that contains all the
interconnected circuitry of the microprocessor. Only the outer surface of the silicon
wafer—a layer about 10 microns (about 0.01 mm/0.0004 in) thick, or about one-tenth the
thickness of a human hair—is used for the electronic circuit. The processing steps include
substrate creation, oxidation, lithography, etching, ion implantation, and film deposition.

The first step in producing a microprocessor is the creation of an ultrapure silicon
substrate, a silicon slice in the shape of a round wafer that is polished to a mirror-like
smoothness. At present, the largest wafers used in industry are 300 mm (12 in) in
diameter.

In the oxidation step, an electrically nonconducting layer, called a dielectric, is placed
between each conductive layer on the wafer. The most important type of dielectric is
silicon dioxide, which is “grown” by exposing the silicon wafer to oxygen in a furnace at
about 1000°C (about 1800°F). The oxygen combines with the silicon to form a thin layer
of oxide about 75 angstroms deep (an angstrom is one ten-billionth of a meter).

Nearly every layer that is deposited on the wafer must be patterned accurately into the
shape of the transistors and other electronic elements. Usually this is done in a process
known as photolithography, which is analogous to transforming the wafer into a piece of
photographic film and projecting a picture of the circuit on it. A coating on the surface of
the wafer, called the photoresist or resist, changes when exposed to light, making it easy
to dissolve in a developing solution. These patterns are as small as 0.13 microns in size.
Because the shortest wavelength of visible light is about 0.5 microns, short-wavelength
ultraviolet light must be used to resolve the tiny details of the patterns. After
photolithography, the wafer is etched—that is, the resist is removed from the wafer either
by chemicals, in a process known as wet etching, or by exposure to a corrosive gas,
called a plasma, in a special vacuum chamber.

In the next step of the process, ion implantation, also called doping, impurities such as
boron and phosphorus are introduced into the silicon to alter its conductivity. This is
accomplished by ionizing the boron or phosphorus atoms (stripping off one or two
electrons) and propelling them at the wafer with an ion implanter at very high energies.
The ions become embedded in the surface of the wafer.

The thin layers used to build up a microprocessor are referred to as films. In the final step
of the process, the films are deposited using sputterers in which thin films are grown in a
plasma; by means of evaporation, whereby the material is melted and then evaporated
coating the wafer; or by means of chemical-vapor deposition, whereby the material
condenses from a gas at low or atmospheric pressure. In each case, the film must be of
high purity and its thickness must be controlled within a small fraction of a micron.

Microprocessor features are so small and precise that a single speck of dust can destroy
an entire die. The rooms used for microprocessor creation are called clean rooms because
the air in them is extremely well filtered and virtually free of dust. The purest of today's
clean rooms are referred to as class 1, indicating that there is no more than one speck of
dust per cubic foot of air. (For comparison, a typical home is class one million or so.)

Pentium Microprocessor: The Pentium microprocessor (shown at 2.5X magnification) is
manufactured by the Intel Corporation. It contains more than three million transistors.
The most common semiconductor materials used in making computer chips are the
elements silicon and germanium, although nearly all computer chips are made from
silicon. (Photo Researchers, Inc./Michael W. Davidson)

The first microprocessor was the Intel 4004, produced in 1971. Originally developed for
a calculator, and revolutionary for its time, it contained 2,300 transistors on a 4-bit
microprocessor that could perform only 60,000 operations per second. The first 8-bit
microprocessor was the Intel 8008, developed in 1972 to run computer terminals. The
Intel 8008 contained 3,300 transistors. The first truly general-purpose microprocessor,
developed in 1974, was the 8-bit Intel 8080 (see Microprocessor, 8080), which contained
4,500 transistors and could execute 200,000 instructions per second. By 1989, 32-bit
microprocessors containing 1.2 million transistors and capable of executing 20 million
instructions per second had been introduced.


In the 1990s the number of transistors on microprocessors continued to double nearly
every 18 months. The rate of change followed an early prediction made by American
semiconductor pioneer Gordon Moore. In 1965 Moore predicted that the number of
transistors on a computer chip would double every year, a prediction that has come to be
known as Moore’s Law. In the mid-1990s chips included the Intel Pentium Pro,
containing 5.5 million transistors; the UltraSparc-II, by Sun Microsystems, containing 5.4
million transistors; the PowerPC620, developed jointly by Apple, IBM, and Motorola,
containing 7 million transistors; and the Digital Equipment Corporation's Alpha 21164A,
containing 9.3 million transistors. By the end of the decade microprocessors contained
many millions of transistors, transferred 64 bits of data at once, and performed billions of
instructions per second.
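The growth figures quoted here can be reproduced with simple doubling arithmetic. The function below is a sketch; the doubling period is a parameter because Moore's original 1965 prediction (one year) was later commonly restated as 18 to 24 months.

```python
# Project a transistor count forward under a fixed doubling period.
# Starting point: the Intel 4004's 2,300 transistors in 1971 (from the
# article). The doubling period is an assumption to experiment with.
def projected_transistors(start_count, start_year, year, doubling_months):
    months = (year - start_year) * 12
    return start_count * 2 ** (months // doubling_months)

# With a 24-month doubling period, the 1989 projection lands close to the
# 1.2 million transistors the article reports for 32-bit chips that year.
estimate_1989 = projected_transistors(2300, 1971, 1989, 24)
print(estimate_1989)  # 1177600
```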

Operating System (OS), in computer science, the basic software that controls a computer.
The operating system has three major functions: It coordinates and manipulates computer
hardware, such as computer memory, printers, disks, keyboard, mouse, and monitor; it
organizes files on a variety of storage media, such as floppy disk, hard drive, compact
disc, digital video disc, and tape; and it manages hardware errors and the loss of data.

An Operating System Interface: A screen shot from the Windows XP operating system
displays icons and other images typical of the graphical user interface (GUI) that makes
computers easy to use. With a GUI, a computer user can easily execute commands by
clicking on pictures, words, or icons with a pointing device known as a mouse.
(© Microsoft Corporation. All Rights Reserved.)

Operating systems control different computer processes, such as running a spreadsheet
program or accessing information from the computer's memory. One important process is
interpreting commands, enabling the user to communicate with the computer. Some
command interpreters are text oriented, requiring commands to be typed in or to be
selected via function keys on a keyboard. Other command interpreters use graphics and
let the user communicate by pointing and clicking on an icon, an on-screen picture that
represents a specific command. Beginners generally find graphically oriented interpreters
easier to use, but many experienced computer users prefer text-oriented command
interpreters.

Operating systems are either single-tasking or multitasking. The more primitive single-
tasking operating systems can run only one process at a time. For instance, when the
computer is printing a document, it cannot start another process or respond to new
commands until the printing is completed.

All modern operating systems are multitasking and can run several processes
simultaneously. In most computers, however, there is only one central processing unit
(CPU; the computational and control unit of the computer), so a multitasking OS creates
the illusion of several processes running simultaneously on the CPU. The most common
mechanism used to create this illusion is time-slice multitasking, whereby each process is
run individually for a fixed period of time. If the process is not completed within the
allotted time, it is suspended and another process is run. This exchanging of processes is
called context switching. The OS performs the “bookkeeping” that preserves a suspended
process. It also has a mechanism, called a scheduler, that determines which process will
be run next. The scheduler runs short processes quickly to minimize perceptible delay.
The processes appear to run simultaneously because the user's sense of time is much
slower than the processing speed of the computer.
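Time-slice multitasking as described above can be sketched as a round-robin queue. The process names and work amounts below are invented; real schedulers also weigh priorities and I/O waits.

```python
# Minimal round-robin scheduler: each process runs for at most one fixed
# time slice (quantum); unfinished processes are suspended (a context
# switch) and rejoin the back of the queue.
from collections import deque

def run_round_robin(processes, quantum):
    """processes: list of (name, total_work) pairs.
    Returns (finish order, number of context switches)."""
    queue = deque(processes)
    finish_order, context_switches = [], 0
    while queue:
        name, remaining = queue.popleft()
        remaining -= min(quantum, remaining)   # run for one time slice
        if remaining > 0:
            queue.append((name, remaining))    # suspend and requeue
            context_switches += 1
        else:
            finish_order.append(name)
    return finish_order, context_switches

order, switches = run_round_robin(
    [("editor", 3), ("print_job", 7), ("clock", 1)], quantum=2)
print(order, switches)  # ['clock', 'editor', 'print_job'] 4
```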

Operating systems can use a technique known as virtual memory to run processes that
require more main memory than is actually available. To implement this technique, space
on the hard drive is used to mimic the extra memory needed. Accessing the hard drive is
more time-consuming than accessing main memory, however, so performance of the
computer slows.
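Paging to disk can be modeled in a few lines. The sketch below keeps a fixed number of page frames in a fast "RAM" and evicts the least recently used page to a "disk" dictionary; the frame count and access pattern are invented for illustration.

```python
# Toy model of virtual memory: a small "main memory" backed by a larger,
# slower "disk". Pages not resident in RAM are fetched on demand (a page
# fault), evicting the least recently used page.
from collections import OrderedDict

class VirtualMemory:
    def __init__(self, ram_frames):
        self.ram = OrderedDict()   # resident pages, least recently used first
        self.disk = {}             # backing store for evicted pages
        self.ram_frames = ram_frames
        self.page_faults = 0

    def access(self, page):
        if page in self.ram:
            self.ram.move_to_end(page)      # hit: mark as recently used
            return
        self.page_faults += 1               # miss: slow fetch from disk
        if len(self.ram) >= self.ram_frames:
            victim, data = self.ram.popitem(last=False)  # evict LRU page
            self.disk[victim] = data
        self.ram[page] = self.disk.pop(page, 0)

vm = VirtualMemory(ram_frames=2)
for page in [1, 2, 1, 3, 2]:
    vm.access(page)
print(vm.page_faults)  # 4: every access except the repeat of page 1 faults
```

Each page fault stands in for a slow disk access, which is why heavy reliance on virtual memory slows the computer.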


Operating systems commonly found on personal computers include UNIX, Macintosh
OS, and Windows. UNIX, developed in 1969 at AT&T Bell Laboratories, is a popular
operating system among academic computer users. Its popularity is due in large part to
the growth of the interconnected computer network known as the Internet. Software for
the Internet was initially designed for computers that ran UNIX. Variations of UNIX
include SunOS (distributed by SUN Microsystems, Inc.), Xenix (distributed by Microsoft
Corporation), and Linux (available for download free of charge and distributed
commercially by companies such as Red Hat, Inc.). UNIX and its clones support
multitasking and multiple users. Its file system provides a simple means of organizing
disk files and lets users control access to their files. The commands in UNIX are not
readily apparent, however, and mastering the system is difficult. Consequently, although
UNIX is popular for professionals, it is not the operating system of choice for the general
public.

Instead, windowing systems with graphical interfaces, such as Windows and the
Macintosh OS, which make computer technology more accessible, are widely used in
personal computers (PCs). However, graphical systems generally have the disadvantage
of requiring more hardware—such as faster CPUs, more memory, and higher-quality
monitors—than do command-oriented operating systems.


Operating systems continue to evolve. A recently developed type of OS called a
distributed operating system is designed for a connected, but independent, collection of
computers that share resources such as hard drives. In a distributed OS, a process can run
on any computer in the network (presumably a computer that is idle) to increase that
process's performance. All basic OS functions—such as maintaining file systems,
ensuring reasonable behavior, and recovering data in the event of a partial failure—
become more complex in distributed systems.

Research is also being conducted that would replace the keyboard with a means of using
voice or handwriting for input. Currently these types of input are imprecise because
people pronounce and write words very differently, making it difficult for a computer to
recognize the same input from different users. However, advances in this field have led to
systems that can recognize a small number of words spoken by a variety of people. In
addition, software has been developed that can be taught to recognize an individual's
handwriting.

Iron and Steel Manufacture

Iron and Steel Manufacture, technology related to the production of iron and its alloys,
particularly those containing a small percentage of carbon. The differences between the
various types of iron and steel are sometimes confusing because of the nomenclature used.
Steel in general is an alloy of iron and carbon, often with an admixture of other elements.
Some alloys that are commercially called irons contain more carbon than commercial steels.
Open-hearth iron and wrought iron contain only a few hundredths of 1 percent of carbon.
Steels of various types contain from 0.04 percent to 2.25 percent of carbon. Cast iron,
malleable cast iron, and pig iron contain amounts of carbon varying from 2 to 4 percent. A
special form of malleable iron, containing virtually no carbon, is known as white-heart
malleable iron. A special group of iron alloys, known as ferroalloys, is used in the manufacture
of iron and steel alloys; they contain from 20 to 80 percent of an alloying element, such as
manganese, silicon, or chromium.
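
The nomenclature above can be summarized as a rough lookup by carbon content. The sketch below encodes the ranges quoted in this section; because the ranges overlap in practice (some commercial "irons" carry more carbon than commercial steels), the function is illustrative only.

```python
# Rough classification of an iron-carbon alloy by the carbon ranges quoted
# above. The ranges overlap in practice, so this is an illustration, not a
# metallurgical standard.
def classify_iron_alloy(carbon_pct: float) -> str:
    if carbon_pct < 0.04:
        # open-hearth and wrought iron: a few hundredths of 1 percent
        return "wrought or open-hearth iron"
    if carbon_pct <= 2.25:
        return "steel"              # 0.04 to 2.25 percent carbon
    if carbon_pct <= 4.0:
        return "cast or pig iron"   # 2 to 4 percent carbon
    return "outside quoted ranges"

print(classify_iron_alloy(0.3))    # steel
print(classify_iron_alloy(3.0))    # cast or pig iron
```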

The exact date at which people discovered the technique of smelting iron ore to produce
usable metal is not known. The earliest iron implements discovered by archaeologists in Egypt
date from about 3000 BC, and iron ornaments were used even earlier; the comparatively
advanced technique of hardening iron weapons by heat treatment was known to the Greeks
about 1000 BC.

The alloys produced by early iron workers, and, indeed, all the iron alloys made until about the
14th century AD, would be classified today as wrought iron. They were made by heating a
mass of iron ore and charcoal in a forge or furnace having a forced draft. Under this treatment
the ore was reduced to the sponge of metallic iron filled with a slag composed of metallic
impurities and charcoal ash. This sponge of iron was removed from the furnace while still
incandescent and beaten with heavy sledges to drive out the slag and to weld and consolidate
the iron. The iron produced under these conditions usually contained about 3 percent of slag
particles and 0.1 percent of other impurities. Occasionally this technique of ironmaking
produced, by accident, a true steel rather than wrought iron. Ironworkers learned to make
steel by heating wrought iron and charcoal in clay boxes for a period of several days. By this
process the iron absorbed enough carbon to become a true steel.

After the 14th century the furnaces used in smelting were increased in size, and increased
draft was used to force the combustion gases through the “charge,” the mixture of raw
materials. In these larger furnaces, the iron ore in the upper part of the furnace was first
reduced to metallic iron and then took on more carbon as a result of the gases forced through
it by the blast. The product of these furnaces was pig iron, an alloy that melts at a lower
temperature than steel or wrought iron. Pig iron (so called because it was usually cast in
stubby, round ingots known as pigs) was then further refined to make steel.

Modern steelmaking employs blast furnaces that are merely refinements of the furnaces used
by the old ironworkers. The process of refining molten iron with blasts of air was accomplished
by the British inventor Sir Henry Bessemer, who developed the Bessemer furnace, or
converter, in 1855. Since the 1960s, several so-called minimills have been producing steel
from scrap metal in electric furnaces. Such mills are an important component of total U.S.
steel production. The giant steel mills remain essential for the production of steel from iron ore.


The basic materials used for the manufacture of pig iron are iron ore, coke, and limestone.
The coke is burned as a fuel to heat the furnace; as it burns, the coke gives off carbon
monoxide, which combines with the iron oxides in the ore, reducing them to metallic iron. This
is the basic chemical reaction in the blast furnace; it has the equation: Fe2O3 + 3CO = 3CO2 +
2Fe. The limestone in the furnace charge is used as an additional source of carbon monoxide
and as a “flux” to combine with the infusible silica present in the ore to form fusible calcium
silicate. Without the limestone, iron silicate would be formed, with a resulting loss of metallic
iron. Calcium silicate plus other impurities form a slag that floats on top of the molten metal at
the bottom of the furnace. Ordinary pig iron as produced by blast furnaces contains iron,
about 92 percent; carbon, 3 or 4 percent; silicon, 0.5 to 3 percent; manganese, 0.25 to 2.5
percent; phosphorus, 0.04 to 2 percent; and a trace of sulfur.
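
The reduction equation above fixes the mass balance of the blast furnace: each mole of Fe2O3 yields two moles of metallic iron. A quick check with standard atomic masses, which are an assumption not given in the article:

```python
# Mass balance for the blast-furnace reduction Fe2O3 + 3CO -> 2Fe + 3CO2.
# Standard atomic masses in g/mol are assumed values; they do not appear
# in the article.
M_FE, M_O, M_C = 55.845, 15.999, 12.011

M_FE2O3 = 2 * M_FE + 3 * M_O   # about 159.69 g/mol of iron oxide

def iron_yield(tonnes_fe2o3: float) -> float:
    """Tonnes of metallic iron from pure Fe2O3, assuming complete reduction."""
    return tonnes_fe2o3 * (2 * M_FE) / M_FE2O3

print(round(iron_yield(1.0), 3))  # roughly 0.699 t of iron per tonne of oxide
```

In other words, even chemically pure ore surrenders only about 70 percent of its mass as iron, which is why ore tonnages at a blast furnace so far exceed the tonnage of pig iron produced.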

A typical blast furnace consists of a cylindrical steel shell lined with a refractory, a
heat-resistant nonmetallic material such as firebrick. The shell is tapered at the top and at the bottom and
is widest at a point about one-quarter of the distance from the bottom. The lower portion of
the furnace, called the bosh, is equipped with several tubular openings or tuyeres through
which the air blast is forced. Near the bottom of the bosh is a hole through which the molten
pig iron flows when the furnace is tapped, and above this hole, but below the tuyeres, is
another hole for draining the slag. The top of the furnace, which is about 27 m (about 90 ft) in
height, contains vents for the escaping gases, and a pair of round hoppers closed with bell-
shaped valves through which the charge is introduced into the furnace. The materials are
brought up to the hoppers in small dump cars or skips that are hauled up an inclined external
skip hoist.

Blast furnaces operate continuously. The raw material to be fed into the furnace is divided into
a number of small charges that are introduced into the furnace at 10- to 15-min intervals.
Slag is drawn off from the top of the melt about once every 2 hr, and the iron itself is drawn
off or tapped about five times a day.

The air used to supply the blast in a blast furnace is preheated to temperatures between
approximately 540° and 870° C (approximately 1,000° and 1,600° F). The heating is
performed in stoves, cylinders containing networks of firebrick. The bricks in the stoves are
heated for several hours by burning blast-furnace gas, the waste gases from the top of the
furnace. Then the flame is turned off and the air for the blast is blown through the stove. The
weight of air used in the operation of a blast furnace exceeds the total weight of the other raw
materials employed.

An important development in blast furnace technology, the pressurizing of furnaces, was
introduced after World War II. By “throttling” the flow of gas from the furnace vents, the
pressure within the furnace may be built up to 1.7 atm or more. The pressurizing technique
makes possible better combustion of the coke and higher output of pig iron. The output of
many blast furnaces can be increased 25 percent by pressurizing. Experimental installations
have also shown that the output of blast furnaces can be increased by enriching the air blast
with oxygen.

The process of tapping consists of knocking out a clay plug from the iron hole near the bottom
of the bosh and allowing the molten metal to flow into a clay-lined runner and then into a
large, brick-lined metal container, which may be either a ladle or a rail car capable of holding
as much as 100 tons of metal. Any slag that may flow from the furnace with the metal is
skimmed off before it reaches the container. The container of molten pig iron is then
transported to the steelmaking shop.

Modern-day blast furnaces are operated in conjunction with basic oxygen furnaces and
sometimes the older open-hearth furnaces as part of a single steel-producing plant. In such
plants the molten pig iron is used to charge the steel furnaces. The molten metal from several
blast furnaces may be mixed in a large ladle before it is converted to steel, to minimize any
irregularities in the composition of the individual melts.


Although almost all the iron and steel manufactured in the world is made from pig iron
produced by the blast-furnace process, other methods of iron refining are possible and have
been practiced to a limited extent. One such method is the so-called direct method of making
iron and steel from ore, without making pig iron. In this process iron ore and coke are mixed
in a revolving kiln and heated to a temperature of about 950° C (about 1,740° F). Carbon
monoxide is given off from the heated coke just as in the blast furnace and reduces the oxides
of the ore to metallic iron. The secondary reactions that occur in a blast furnace, however, do
not occur, and the kiln produces so-called sponge iron of much higher purity than pig iron.
Virtually pure iron is also produced by means of electrolysis (see Electrochemistry), by passing
an electric current through a solution of ferrous chloride. Neither the direct nor the
electrolytic process has yet achieved any great commercial significance.


Essentially the production of steel from pig iron by any process consists of burning out the
excess carbon and other impurities present in the iron. One difficulty in the manufacture of
steel is its high melting point, about 1,370° C (about 2,500° F), which prevents the use of
ordinary fuels and furnaces. To overcome this difficulty the open-hearth furnace was
developed; this furnace can be operated at a high temperature by regenerative preheating of
the fuel gas and air used for combustion in the furnace. In regenerative preheating, the
exhaust gases from the furnace are drawn through one of a series of chambers containing a
mass of brickwork and give up most of their heat to the bricks. Then the flow through the
furnace is reversed and the fuel and air pass through the heated chambers and are warmed by
the bricks. Through this method open-hearth furnaces can reach temperatures as high as
1,650° C (approximately 3,000° F).

The furnace itself consists typically of a flat, rectangular brick hearth about 6 m by 10 m
(about 20 ft by 33 ft), which is roofed over at a height of about 2.5 m (about 8 ft). In front of
the hearth a series of doors opens out onto a working floor in front of the hearth. The entire
hearth and working floor are one story above ground level, and the space under the hearth is
taken up by the heat-regenerating chambers of the furnace. A furnace of this size produces
about 100 metric tons of steel every 11 hr.

The furnace is charged with a mixture of pig iron (either molten or cold), scrap steel, and iron
ore that provides additional oxygen. Limestone is added for flux and fluorspar to make the
slag more fluid. The proportions of the charge vary within wide limits, but a typical charge
might consist of 56,750 kg (125,000 lb) of scrap steel, 11,350 kg (25,000 lb) of cold pig iron,
45,400 kg (100,000 lb) of molten pig iron, 11,800 kg (26,000 lb) of limestone, 900 kg (2,000
lb) of iron ore, and 230 kg (500 lb) of fluorspar. After the furnace has been charged, the
furnace is lighted and the flames play back and forth over the hearth as their direction is
reversed by the operator to provide heat regeneration.

Chemically the action of the open-hearth furnace consists of lowering the carbon content of
the charge by oxidization and of removing such impurities as silicon, phosphorus, manganese,
and sulfur, which combine with the limestone to form slag. These reactions take place while
the metal in the furnace is at melting heat, and the furnace is held between 1,540° and
1,650° C (2,800° and 3,000° F) for many hours until the molten metal has the desired carbon
content. Experienced open-hearth operators can often judge the carbon content of the metal
by its appearance, but the melt is usually tested by withdrawing a small amount of metal from
the furnace, cooling it, and subjecting it to physical examination or chemical analysis. When
the carbon content of the melt reaches the desired level, the furnace is tapped through a hole
at the rear. The molten steel then flows through a short trough to a large ladle set below the
furnace at ground level. From the ladle the steel is poured into cast-iron molds that form
ingots usually about 1.5 m (about 5 ft) long and 48 cm (19 in) square. These ingots, the raw
material for all forms of fabricated steel, weigh approximately 2.25 metric tons in this size.
Recently, methods have been put into practice for the continuous processing of steel without
first having to go through the process of casting ingots.


The oldest process for making steel in large quantities, the Bessemer process, made use of a
tall, pear-shaped furnace, called a Bessemer converter, that could be tilted sideways for
charging and pouring. Great quantities of air were blown through the molten metal; its oxygen
united chemically with the impurities and carried them off.

In the basic oxygen process, steel is also refined in a pear-shaped furnace that tilts sideways
for charging and pouring. Air, however, has been replaced by a high-pressure stream of nearly
pure oxygen. After the furnace has been charged and turned upright, an oxygen lance is
lowered into it. The water-cooled tip of the lance is usually about 2 m (about 6 ft) above the
charge although this distance can be varied according to requirements. Thousands of cubic
meters of oxygen are blown into the furnace at supersonic speed. The oxygen combines with
carbon and other unwanted elements and starts a high-temperature churning reaction that
rapidly burns out impurities from the pig iron and converts it into steel. The refining process
takes 50 min or less; approximately 275 metric tons of steel can be made in an hour.


In some furnaces, electricity instead of fire supplies the heat for the melting and refining of
steel. Because refining conditions in such a furnace can be regulated more strictly than in
open-hearth or basic oxygen furnaces, electric furnaces are particularly valuable for producing
stainless steels and other highly alloyed steels that must be made to exacting specifications.
Refining takes place in a tightly closed chamber, where temperatures and other conditions are
kept under rigid control by automatic devices. During the early stages of this refining process,
high-purity oxygen is injected through a lance, raising the temperature of the furnace and
decreasing the time needed to produce the finished steel. The quantity of oxygen entering the
furnace can always be closely controlled, thus keeping down undesirable oxidizing reactions.

Most often the charge consists almost entirely of scrap. Before it is ready to be used, the scrap
must first be analyzed and sorted, because its alloy content will affect the composition of the
refined metal. Other materials, such as small quantities of iron ore and dry lime, are added in
order to help remove carbon and other impurities that are present. The additional alloying
elements go either into the charge or, later, into the refined steel as it is poured into the ladle.

After the furnace is charged, electrodes are lowered close to the surface of the metal. The
current enters through one of the electrodes, arcs to the metallic charge, flows through the
metal, and then arcs back to the next electrode. Heat is generated by the overcoming of
resistance to the flow of current through the charge. This heat, together with that coming from
the intensely hot arc itself, quickly melts the metal. In another type of electric furnace, heat is
generated in a coil. See Electric Furnace.


Steel is marketed in a wide variety of sizes and shapes, such as rods, pipes, railroad rails,
tees, channels, and I-beams. These shapes are produced at steel mills by rolling and
otherwise forming heated ingots to the required shape. The working of steel also improves the
quality of the steel by refining its crystalline structure and making the metal tougher.

The basic process of working steel is known as hot rolling. In hot rolling the cast ingot is first
heated to bright-red heat in a furnace called a soaking pit and is then passed between a series
of pairs of metal rollers that squeeze it to the desired size and shape. The distance between
the rollers diminishes for each successive pair as the steel is elongated and reduced in
thickness.

The first pair of rollers through which the ingot passes is commonly called the blooming mill,
and the square billets of steel that the ingot produces are known as blooms. From the
blooming mill, the steel is passed on to roughing mills and finally to finishing mills that reduce
it to the correct cross section. The rollers of mills used to produce railroad rails and such
structural shapes as I-beams, H-beams, and angles are grooved to give the required shape.

Modern manufacturing requires a large amount of thin sheet steel. Continuous mills roll steel
strips and sheets in widths of up to 2.4 m (8 ft). Such mills process thin sheet steel rapidly,
before it cools and becomes unworkable. A slab of hot steel over 11 cm (about 4.5 in) thick is
fed through a series of rollers which reduce it progressively in thickness to 0.127 cm (0.05 in)
and increase its length from 4 m (13 ft) to 370 m (1,210 ft). Continuous mills are equipped
with a number of accessory devices including edging rollers, descaling devices, and devices for
coiling the sheet automatically when it reaches the end of the mill. The edging rollers are sets
of vertical rolls set opposite each other at either side of the sheet to ensure that the width of
the sheet is maintained. Descaling apparatus removes the scale that forms on the surface of
the sheet by knocking it off mechanically, loosening it by means of an air blast, or bending the
sheet sharply at some point in its travel. The completed coils of sheet are dropped on a
conveyor and carried away to be annealed and cut into individual sheets. A more efficient way
to produce thin sheet steel is to feed thinner slabs through the rollers. Using conventional
casting methods, ingots must still be passed through a blooming mill in order to produce slabs
thin enough to enter a continuous mill.
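
Hot rolling roughly conserves the volume of the steel, so at constant width the length should grow by the same factor by which the thickness shrinks. A quick consistency check of the figures above, treating 11 cm as the starting thickness:

```python
# Hot rolling roughly conserves volume, so at constant width the length
# grows by the same factor the thickness shrinks. A consistency check of
# the figures above; the slab is quoted only as "over 11 cm" thick.
t_initial_cm = 11.0     # lower bound on starting slab thickness
t_final_cm = 0.127      # finished sheet thickness
l_initial_m = 4.0       # starting slab length

l_final_m = l_initial_m * (t_initial_cm / t_final_cm)
print(round(l_final_m))  # about 346 m, in line with the quoted 370 m
```

The small shortfall against the quoted 370 m is consistent with the slab being somewhat thicker than 11 cm to begin with.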

By devising a continuous casting system that produces an endless steel slab less than 5 cm (2
in) thick, German engineers have eliminated any need for blooming and roughing mills. In
1989, a steel mill in Indiana became the first outside Europe to adopt this new system.


Cheaper grades of pipe are shaped by bending a flat strip, or skelp, of hot steel into cylindrical
form and welding the edges to complete the pipe. For the smaller sizes of pipe, the edges of
the skelp are usually overlapped and passed between a pair of rollers curved to correspond
with the outside diameter of the pipe. The pressure on the rollers is great enough to weld the
edges together. Seamless pipe or tubing is made from solid rods by passing them between a
pair of inclined rollers that have a pointed metal bar, or mandrel, set between them in such a
way that it pierces the rods and forms the inside diameter of the pipe at the same time that
the rollers are forming the outside diameter.


By far the most important coated product of the steel mill is tin plate for the manufacture of
containers. The “tin” can is actually more than 99 percent steel. In some mills steel sheets
that have been hot-rolled and then cold-rolled are coated by passing them through a bath of
molten tin. The most common method of coating is by the electrolytic process. Sheet steel is
slowly unrolled from its coil and passed through a chemical solution. Meanwhile, a current of
electricity is passing through a piece of pure tin into the same solution, causing the tin to
dissolve slowly and to be deposited on the steel. In electrolytic processing, less than half a
kilogram of tin will coat more than 18.6 sq m (more than 200 sq ft) of steel. For the product
known as thin tin, sheet and strip are given a second cold rolling before being coated with tin,
a treatment that makes the steel plate extra tough as well as extra thin. Cans made of thin tin
are about as strong as ordinary tin cans, yet they contain less steel, with a resultant saving in
weight and cost. Lightweight packaging containers are also being made of tin-plated steel foil
that has been laminated to paper or cardboard.
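
The coverage figure quoted above implies a remarkably thin coating. Taking the density of tin as about 7.3 g per cubic centimetre, an assumed value not given in the article, half a kilogram spread over 18.6 sq m works out to a layer only a few micrometres thick:

```python
# Thickness implied by the quoted coverage: half a kilogram of tin over
# more than 18.6 sq m. The density of tin, about 7.3 g/cm^3, is an
# assumed value not given in the article.
mass_g = 500.0
area_cm2 = 18.6 * 1e4          # 18.6 sq m expressed in sq cm
density_g_cm3 = 7.3

thickness_um = mass_g / (density_g_cm3 * area_cm2) * 1e4  # cm -> micrometres
print(round(thickness_um, 1))  # a layer only a few micrometres thick
```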

Other processes of steel fabrication include forging, founding, and drawing the steel through
dies (see Die).


The process of making the tough, malleable alloy known as wrought iron differs markedly from
other forms of steel making. Because this process, known as puddling, required a great deal of
hand labor, production of wrought iron in tonnage quantities was impossible. The development
of new processes using Bessemer converters and open-hearth furnaces allowed the production
of larger quantities of wrought iron.

Wrought iron is no longer produced commercially, however, because it can be effectively
replaced in nearly all applications by low-carbon steel, which is less expensive to produce and
is typically of more uniform quality than wrought iron.

The puddling furnace used in the older process has a low, arched roof and a depressed hearth
on which the crude metal lies, separated by a wall from the combustion chamber in which
bituminous coal is burned. The flame in the combustion chamber surmounts the wall, strikes
the arched roof, and “reverberates” upon the contents of the hearth. After the furnace is lit
and has become moderately heated, the puddler, or furnace operator, “fettles” it by plastering
the hearth and walls with a paste of iron oxide, usually hematite ore. The furnace is then
charged with about 270 kg (about 600 lb) of pig iron and the door is closed. After about 30
min the iron is melted and the puddler adds more iron oxide or mill scale to the charge,
working the oxide into the iron with a bent iron bar called a raddle. The silicon and most of the
manganese in the iron are oxidized and some sulfur and phosphorus are eliminated. The
temperature of the furnace is then raised slightly, and the carbon starts to burn out as carbon-
oxide gases. As the gas is evolved the slag puffs up and the level of the charge rises. As the
carbon is burned away the melting temperature of the alloy increases and the charge becomes
more and more pasty, and finally the bath drops to its former level. As the iron increases in
purity, the puddler stirs the charge with the raddle to ensure uniform composition and proper
cohesion of the particles. The resulting pasty, spongelike mass is separated into lumps, called
balls, of about 80 to 90 kg (about 180 to 200 lb) each. The balls are withdrawn from the
furnace with tongs and are placed directly in a squeezer, a machine in which the greater part
of the intermingled siliceous slag is expelled from the ball and the grains of pure iron are
thoroughly welded together. The iron is then cut into flat pieces that are piled on one another,
heated to welding temperature, and then rolled into a single piece. This rolling process is
sometimes repeated to improve the quality of the product.

The modern technique of making wrought iron uses molten iron from a Bessemer converter
and molten slag, which is usually prepared by melting iron ore, mill scale, and sand in an
open-hearth furnace. The molten slag is maintained in a ladle at a temperature several
hundred degrees below the temperature of the molten iron. When the molten iron, which
carries a large amount of gas in solution, is poured into the ladle containing the molten slag,
the metal solidifies almost instantly, releasing the dissolved gas. The force exerted by the gas
shatters the metal into minute particles that are heavier than the slag and that accumulate in
the bottom of the ladle, agglomerating into a spongy mass similar to the balls produced in a
puddling furnace. After the slag has been poured off the top of the ladle, the ball of iron is
removed and squeezed and rolled like the product of the puddling furnace.


Steels are grouped into five main classifications.

A Carbon Steels

More than 90 percent of all steels are carbon steels. They contain varying amounts of carbon
and not more than 1.65 percent manganese, 0.60 percent silicon, and 0.60 percent copper.
Machines, automobile bodies, most structural steel for buildings, ship hulls, bedsprings, and
bobby pins are among the products made of carbon steels.

B Alloy Steels

These steels have a specified composition, containing certain percentages of vanadium,
molybdenum, or other elements, as well as larger amounts of manganese, silicon, and copper
than do the regular carbon steels. Automobile gears and axles, roller skates, and carving
knives are some of the many things that are made of alloy steels.

C High-Strength Low-Alloy Steels

Called HSLA steels, they are the newest of the five chief families of steels. They cost less than
the regular alloy steels because they contain only small amounts of the expensive alloying
elements. They have been specially processed, however, to have much more strength than
carbon steels of the same weight. For example, freight cars made of HSLA steels can carry
larger loads because their walls are thinner than would be necessary with carbon steel of equal
strength; also, because an HSLA freight car is lighter in weight than the ordinary car, it is less
of a load for the locomotive to pull. Numerous buildings are now being constructed with
frameworks of HSLA steels. Girders can be made thinner without sacrificing their strength, and
additional space is left for offices and apartments.

D Stainless Steels

Stainless steels contain chromium, nickel, and other alloying elements that keep them bright
and rust resistant in spite of moisture or the action of corrosive acids and gases. Some
stainless steels are very hard; some have unusual strength and will retain that strength for
long periods at extremely high and low temperatures. Because of their shining surfaces
architects often use them for decorative purposes. Stainless steels are used for the pipes and
tanks of petroleum refineries and chemical plants, for jet planes, and for space capsules.
Surgical instruments and equipment are made from these steels, and they are also used to
patch or replace broken bones because the steels can withstand the action of body fluids. In
kitchens and in plants where food is prepared, handling equipment is often made of stainless
steel because it does not taint the food and can be easily cleaned.

E Tool Steels

These steels are fabricated into many types of tools or into the cutting and shaping parts of
power-driven machinery for various manufacturing operations. They contain tungsten,
molybdenum, and other alloying elements that give them extra strength, hardness, and
resistance to wear.


The physical properties of various types of steel and of any given steel alloy at varying
temperatures depend primarily on the amount of carbon present and on how it is distributed in
the iron. Before heat treatment most steels are a mixture of three substances: ferrite,
pearlite, and cementite. Ferrite is iron containing small amounts of carbon and other elements
in solution and is soft and ductile. Cementite, a compound of iron containing about 7 percent
carbon, is extremely brittle and hard. Pearlite is an intimate mixture of ferrite and cementite
having a specific composition and characteristic structure, and physical characteristics
intermediate between its two constituents. The toughness and hardness of a steel that is not
heat treated depend on the proportions of these three ingredients. As the carbon content of a
steel increases, the amount of ferrite present decreases and the amount of pearlite increases
until, when the steel has 0.8 percent of carbon, it is entirely composed of pearlite. Steel with
still more carbon is a mixture of pearlite and cementite. Raising the temperature of steel
changes ferrite and pearlite to an allotropic form of iron-carbon alloy known as austenite,
which has the property of dissolving all the free carbon present in the metal. If the steel is
cooled slowly the austenite reverts to ferrite and pearlite, but if cooling is sudden, the
austenite is “frozen” or changes to martensite, which is an extremely hard allotropic
modification that resembles ferrite but contains carbon in solid solution.
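
The ferrite-pearlite balance described above can be sketched as a linear interpolation up to the 0.8 percent eutectoid composition. The linearity is an assumption, a simplified lever rule, rather than an exact reading of the iron-carbon diagram:

```python
# A linear sketch of the ferrite/pearlite balance for slowly cooled steels:
# pearlite grows with carbon content until, at 0.8 percent carbon, the
# steel is entirely pearlite. The linear interpolation (a simplified lever
# rule) is an assumption, not an exact phase-diagram calculation.
EUTECTOID_C = 0.8  # percent carbon at which the steel is all pearlite

def pearlite_fraction(carbon_pct: float) -> float:
    """Approximate pearlite fraction of a slowly cooled hypoeutectoid steel."""
    if not 0.0 <= carbon_pct <= EUTECTOID_C:
        raise ValueError("valid only up to the eutectoid composition")
    return carbon_pct / EUTECTOID_C

print(pearlite_fraction(0.4))  # 0.5: half pearlite, half ferrite in this model
```

Above 0.8 percent carbon the balance described in the text shifts instead to a mixture of pearlite and cementite, which this simple function deliberately refuses to model.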


The basic process of hardening steel by heat treatment consists of heating the metal to a
temperature at which austenite is formed, usually about 760° to 870° C (about 1,400° to
1,600° F) and then cooling, or quenching, it rapidly in water or oil. Such hardening
treatments, which form martensite, set up large internal strains in the metal, and these are
relieved by tempering, or annealing, which consists of reheating the steel to a lower
temperature. Tempering results in a decrease in hardness and strength and an increase in
ductility and toughness.

The primary purpose of the heat-treating process is to control the amount, size, shape, and
distribution of the cementite particles in the ferrite, which in turn determines the physical
properties of the steel.

Many variations of the basic process are practiced. Metallurgists have discovered that the
change from austenite to martensite occurs during the latter part of the cooling period and
that this change is accompanied by a change in volume that may crack the metal if the cooling
is too swift. Three comparatively new processes have been developed to avoid cracking. In
time-quenching the steel is withdrawn from the quenching bath when it has reached the
temperature at which the martensite begins to form, and is then cooled slowly in air. In
martempering the steel is withdrawn from the quench at the same point, and is then placed in
a constant-temperature bath until it attains a uniform temperature throughout its cross
section. The steel is then allowed to cool in air through the temperature range of martensite
formation, which for most steels is the range from about 288° C (about 550° F) to room
temperature. In austempering the steel is quenched in a bath of metal or salt maintained at
the constant temperature at which the desired structural change occurs and is held in this
bath until the change is complete before being subjected to the final cooling.

Other methods of heat treating steel to harden it are used. In case hardening, a finished piece
of steel is given an extremely hard surface by heating it with carbon or nitrogen compounds.
These compounds react with the steel, either raising the carbon content or forming nitrides in
its surface layer. In carburizing, the piece is heated in charcoal or coke, or in carbonaceous
gases such as methane or carbon monoxide. Cyaniding consists of hardening in a bath of
molten cyanide salt to form both carbides and nitrides. In nitriding, steels of special
composition are hardened by heating them in ammonia gas to form alloy nitrides.