
Significant Figures and Uncertainty in Measurements

When performing measurements and calculations, how many digits do we keep in our final
answers? Significant figures are the tool that answers this question: they express the uncertainty
in a measurement and determine how many digits are carried into the final answer.

The general rules for determining significant figures are as follows:

Any nonzero digit is significant: 123, 1.23, and .123 all have 3 significant figures.
A zero between two nonzero digits is significant: 1023, 1.023, and .1023 all have 4 significant figures.
Zeros that occur AFTER the decimal place are significant: 123.0 has 4 significant figures; 123.00 has 5 significant figures.
Zeros to the left of the first nonzero digit are NOT significant: .00123 has 3 significant figures.
Zeros that occur without a decimal are NOT significant: 1230 has 3 significant figures.

Significant Figures in Multiplication

When multiplying and dividing numbers, the number of significant figures in your answer is
equal to that of the input with the fewest significant figures. For example, dividing 123 by
3.14159 gives 39.152149071. Here, 123 has 3 significant figures and 3.14159 has 6, so the
answer can have only 3 significant figures: 39.2.

Note that exact numbers, such as the 10 mm in a centimeter, have an infinite number of
significant figures. We do not count their digits, and they are not used in determining the
number of significant figures in your answer.
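
To make the multiplication/division rule concrete, here is a minimal Python sketch; the helper round_sig is our own illustrative function, not a standard library routine:

```python
import math

def round_sig(x, sig):
    """Round x to the given number of significant figures."""
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))
    return round(x, sig - 1 - exponent)

# 123 has 3 significant figures and 3.14159 has 6, so the quotient
# is reported to 3 significant figures.
quotient = 123 / 3.14159       # 39.15214907...
print(round_sig(quotient, 3))  # 39.2
```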

Significant Figures in Addition and Subtraction

To determine the number of significant figures in an addition or subtraction problem, it is
necessary to look at the number with the fewest digits to the right of the decimal place. That
number of decimal places determines where the answer is rounded. For example, 29.4165 has 4
digits to the right of the decimal place and 234.65 has only 2. Therefore the sum, 264.0665, will
be rounded off to the 2nd digit after the decimal place: 264.07.
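
In code, the addition/subtraction rule only requires rounding to a fixed number of decimal places; a quick sketch of the example above:

```python
# 29.4165 has 4 digits after the decimal point; 234.65 has only 2,
# so the sum is rounded to 2 decimal places.
total = 29.4165 + 234.65  # 264.0665
print(round(total, 2))    # 264.07
```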

Uncertainty of Measurements
How precise the final answer can be is determined by the least precise measuring instrument
used. For example, if an object is measured with a set of calipers with a resolution of 0.001 inches,
the measurement would be recorded to the thousandth of an inch: 2.200 inches. A recorded value
of 2.2 inches shows the object was measured with a device that resolves only 0.1 inches, while
2.200 inches shows it was measured with a device that resolves 0.001 inches. Even though these
measurements are numerically equal, their uncertainties are not.

Significant Figures in Measuring

A simple density calculation is used to illustrate the use of significant figures. An object weighs
16.5 grams and has a volume of 9.3 milliliters. The formula for the calculation is

density = mass / volume

By inserting the appropriate values we’ll illustrate how this works.

In this example, 16.5 g has 3 significant figures and 9.3 mL has 2 significant figures. Using the
rules of multiplying and dividing with significant figures, the answer has the same number of
significant figures as the quantity with the fewest. Dividing gives 16.5 / 9.3 = 1.774..., so the
answer should be reported using 2 significant figures: 1.8 g/mL.

Let’s try another density calculation. We have a rectangular rubber block and want to calculate
the density to determine from what type of elastomer it is made. To calculate the volume of a
rectangular rubber block I need to use the formula Length X Width X Height (LxWxH). I will
substitute this for volume in the density formula.

A scale will be used to measure the mass and calipers to measure the dimensions of the block
and calculate the volume. The scale has a resolution of 0.001 grams and the calipers have a
resolution of 0.01 mm. The results of the measurements are as follows:

Instrument                                  Measured Value
Scale with 0.001 g resolution               Mass = 372.561 g
Digital calipers with 0.01 mm resolution    Length = 10.00 mm
                                            Width = 10.00 mm
                                            Height = 30.00 mm

Note that the length, width, and height values are recorded to 2 decimal places and the mass to 3
decimal places, as these are the resolutions of the instruments used; we therefore want to show the
uncertainty of measurement in the final answer. For example, if the measurement were recorded
as 10 mm, which has 2 significant figures, as opposed to 10.00 mm, which has 4 significant
figures, the measurement would not express the proper uncertainty of measurement.
We want our density value in grams per cubic centimeter because the known densities of rubber
are specified in grams per cubic centimeter, not grams per cubic millimeter; it is therefore
necessary to convert the length, width, and height measurements from millimeters to centimeters:
10.00 mm divided by 10 mm/cm = 1.000 cm. Because 10 mm/cm is an exact number with an
infinite number of significant figures, it is not used in determining the number of significant
figures in the answer. The number of significant figures is determined by the measurements,
which have 4 significant figures, so the converted values keep 4 significant figures: 1.000 cm,
1.000 cm, and 3.000 cm.

Let’s plug the values into the density formula and calculate the result using significant
figures. Using the measured values in cm, the density is calculated:

density = 372.561 g / (1.000 cm x 1.000 cm x 3.000 cm) = 372.561 g / 3.000 cm3 = 124.2 g/cm3

The answer has 4 significant figures because the least number of significant figures in any
measured value is 4. Note that the quantities in the formula carry their units of measure, which
are also carried through to the final answer. For example, in 1.000 cm x 1.000 cm x 3.000 cm, the
three “cm” factors combine as “cm3”. In the final value we have “grams” divided by “cm3”,
expressed as “g/cm3”.
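
Putting the whole worked example together, here is a short illustrative Python sketch (the variable names are our own) that performs the unit conversion and rounds the result to 4 significant figures:

```python
MM_PER_CM = 10  # exact conversion factor: infinite significant figures

mass_g = 372.561               # 0.001 g resolution -> 6 significant figures
length_cm = 10.00 / MM_PER_CM  # 1.000 cm (4 significant figures)
width_cm = 10.00 / MM_PER_CM   # 1.000 cm
height_cm = 30.00 / MM_PER_CM  # 3.000 cm

volume_cm3 = length_cm * width_cm * height_cm  # 3.000 cm^3
density = mass_g / volume_cm3                  # 124.187 g/cm^3

# The least precise measurements (4 significant figures) limit the answer.
print(f"{density:.1f} g/cm3")  # 124.2 g/cm3
```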

It is good practice to include significant figures in your calculations to show the uncertainty of
your measurements, so others know the precision at which you were able to measure. It is also
good practice to always include the unit of measure in your values; the units carry through your
formula and determine the proper unit of measure in your final answer.

Dimensional analysis

In engineering and science, dimensional analysis is the analysis of the relationships between
different physical quantities by identifying their base quantities (such as length, mass, time, and
electric charge) and units of measure (such as miles vs. kilometers, or pounds vs. kilograms vs.
grams) and tracking these dimensions as calculations or comparisons are performed. Converting
from one dimensional unit to another is often somewhat complex. Dimensional analysis, or more
specifically the factor-label method, also known as the unit-factor method, is a widely used
technique for such conversions using the rules of algebra.
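
As an illustration of the factor-label method, here is a small Python sketch; the particular conversion (miles per hour to meters per second) and the variable names are our own choices:

```python
# Convert 65 mi/hr to m/s by multiplying by unit factors so that the
# unwanted units cancel: 65 mi/hr x (1609.344 m / 1 mi) x (1 hr / 3600 s).
speed_mi_per_hr = 65
m_per_mi = 1609.344  # exact, by definition of the international mile
s_per_hr = 3600      # exact

speed_m_per_s = speed_mi_per_hr * m_per_mi / s_per_hr
print(f"{speed_m_per_s:.1f} m/s")  # 29.1 m/s
```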

The concept of physical dimension was introduced by Joseph Fourier in 1822. Physical
quantities that are of the same kind (also called commensurable) have the same dimension
(length, time, mass) and can be directly compared to each other, even if they are originally
expressed in differing units of measure (such as inches and meters, or pounds and newtons). If
physical quantities have different dimensions (such as length vs. mass), they cannot be expressed
in terms of similar units and cannot be compared in quantity (also called incommensurable). For
example, asking whether a kilogram is greater than, equal to, or less than an hour is meaningless.

Any physically meaningful equation (and likewise any inequality and inequation) will have the
same dimensions on its left and right sides, a property known as dimensional homogeneity.
Checking for dimensional homogeneity is a common application of dimensional analysis,
serving as a plausibility check on derived equations and computations. It also serves as a guide
and constraint in deriving equations that may describe a physical system in the absence of a more
rigorous derivation.
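
As a sketch of such a homogeneity check, dimensions can be represented as exponents of the base quantities; the tuple encoding below (mass, length, time) is our own illustrative choice:

```python
# Represent dimensions as (mass, length, time) exponent tuples.
KG = (1, 0, 0)  # mass
M = (0, 1, 0)   # length
S = (0, 0, 1)   # time

def mul(a, b):
    """Multiplying quantities adds their dimension exponents."""
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):
    """Dividing quantities subtracts dimension exponents."""
    return tuple(x - y for x, y in zip(a, b))

# Kinetic energy (1/2)mv^2 and work F*d must have the same dimensions;
# the factor 1/2 is dimensionless and can be ignored here.
velocity = div(M, S)                        # m/s   -> (0, 1, -1)
kinetic = mul(KG, mul(velocity, velocity))  # kg m^2/s^2 -> (1, 2, -2)
force = mul(KG, div(velocity, S))           # kg m/s^2   -> (1, 1, -2)
work = mul(force, M)                        # N m        -> (1, 2, -2)
print(kinetic == work)  # True: the equation is dimensionally homogeneous
```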

Temperature

Temperature is an objective measurement of how hot or cold an object is. It can be measured
with a thermometer or a calorimeter. It is a means of determining the internal energy contained
within the system.

Because humans instantly perceive the amount of heat and cold within an area, it is
understandable that temperature is a feature of reality that we have a fairly intuitive grasp on.
Indeed, temperature is a concept that arises as crucial within a wide variety of scientific
disciplines.

Consider that many of us have our first interaction with a thermometer in the context of
medicine, when a doctor (or our parent) uses one to discern our temperature, as part of
diagnosing our illness.

Heat Versus Temperature

Note that temperature is different from heat, though the two concepts are linked. Temperature is
a measure of the internal energy of the system, while heat is a measure of how energy is
transferred from one system (or body) to another. This is roughly described by the kinetic theory,
at least for gases and fluids. The greater the heat absorbed by a material, the more rapidly the
atoms within the material begin to move, and thus the greater the rise in temperature. Things get
a little more complicated for solids, of course, but that's the basic idea.

Temperature Scales

Several temperature scales exist. In America, the Fahrenheit scale is most commonly
used, though the SI unit Celsius (or centigrade) is used in most of the rest of the world.

The Kelvin scale is often used in physics and is adjusted so that 0 kelvin is absolute
zero: in theory, the coldest possible temperature, at which all kinetic motion ceases.

Measuring Temperature

A traditional thermometer measures temperature by containing a fluid that expands as it gets
hotter and contracts as it gets cooler.

As the temperature changes, the liquid within a contained tube moves along a scale on the
device.

As with much of modern science, we can look back to the ancients for the origins of ideas
about how to measure temperature. Specifically, in the first century CE, the philosopher Hero
of Alexandria wrote in Pneumatics about the relationship between temperature and the
expansion of air. This book was published in Europe in 1575, inspiring the creation of the
earliest thermometers throughout the following century.

Galileo was one of the first scientists recorded to have actually used such a device, though it's
unclear whether he actually built it himself or acquired the idea from someone else. He used a
device, called a thermoscope, to measure the amount of heat and cold, at least as early as 1603.

Throughout the 1600s, various scientists tried to create thermometers that measured temperature
by a change of pressure within a contained measurement device. Robert Fludd built a
thermoscope in 1638 that had a temperature scale built into the physical structure of the device,
resulting in the first thermometer.

Without any centralized system of measurement, each of these scientists developed their own
measurement scales, and none of them really caught on until Daniel Gabriel Fahrenheit built his
in the early 1700s.

He built a thermometer with alcohol in 1709, but it was really his mercury-based thermometer of
1714 that became the gold standard of temperature measurement.

Density

Density is a measure of mass per unit of volume.

The average density of an object equals its total mass divided by its total volume.

An object made from a comparatively dense material (such as iron) will have less volume than
an object of equal mass made from some less dense substance (such as water).

Perhaps the highest density known is reached in neutron star matter.


The singularity at the centre of a black hole, according to general relativity, does not have any
volume, so its density is undefined.

Classification of Matter

The three states of matter are the three distinct physical forms that matter can take in most
environments: solid, liquid, and gas. In extreme environments, other states may be present, such
as plasma, Bose-Einstein condensates, and neutron stars. Further states, such as quark-gluon
plasmas, are also believed to be possible. Much of the atomic matter of the universe is hot
plasma in the form of rarefied interstellar medium and dense stars.

Historically, the states of matter were distinguished based on qualitative differences in their bulk
properties. Solid is the state in which matter maintains a fixed volume and shape; liquid is the
state in which matter adapts to the shape of its container but varies only slightly in volume; and
gas is the state in which matter expands to occupy the volume and shape of its container. Each of
these three classical states of matter can transition directly into either of the other two classical
states.

The states of matter: This diagram shows the nomenclature for the different phase transitions.

Solids

A solid’s particles are packed closely together. The forces between the particles are strong
enough that the particles cannot move freely; they can only vibrate. As a result, a solid has a
stable, definite shape and a definite volume. Solids can only change shape under force, as when
broken or cut.

In crystalline solids, particles are packed in a regularly ordered, repeating pattern. There are
many different crystal structures, and the same substance can have more than one structure. For
example, iron has a body-centered cubic structure at temperatures below 912 °C and a face-
centered cubic structure between 912 and 1394 °C. Ice has fifteen known crystal structures, each
of which exists at a different temperature and pressure.

A solid can transform into a liquid through melting, and a liquid can transform into a solid
through freezing. A solid can also change directly into a gas through a process called
sublimation.

Liquids

A liquid is a fluid that conforms to the shape of its container but that retains a nearly constant
volume independent of pressure. The volume is definite (does not change) if the temperature and
pressure are constant. When a solid is heated above its melting point, it becomes liquid, provided
the pressure is higher than the substance's triple-point pressure. Intermolecular (or interatomic or
interionic) forces are still important, but the molecules have enough energy to move around,
which makes the structure mobile. This means that a liquid is not definite in shape but rather
conforms to the shape of its container. Its volume is usually greater than that of its corresponding
solid (water is a well-known exception to this rule). The highest temperature at which a
particular liquid can exist is called its critical temperature.

A liquid can be converted to a gas through heating at constant pressure to the substance’s boiling
point or through reduction of pressure at constant temperature. This process of a liquid changing
to a gas is called evaporation.

Gases

Gas molecules have either very weak bonds or no bonds at all, so they can move freely and
quickly. Because of this, not only will a gas conform to the shape of its container, it will also
expand to completely fill the container. Gas molecules have enough kinetic energy that the effect
of intermolecular forces is small (or zero, for an ideal gas), and they are spaced very far apart
from each other; the typical distance between neighboring molecules is much greater than the
size of the molecules themselves.

A gas at a temperature below its critical temperature can also be called a vapor. A vapor can be
liquefied through compression without cooling. It can also exist in equilibrium with a liquid (or
solid), in which case the gas pressure equals the vapor pressure of the liquid (or solid).

A supercritical fluid (SCF) is a gas whose temperature and pressure are greater than the critical
temperature and critical pressure. In this state, the distinction between liquid and gas disappears.
A supercritical fluid has the physical properties of a gas, but its high density lends it the
properties of a solvent in some cases. This can be useful in several applications. For example,
supercritical carbon dioxide is used to extract caffeine in the manufacturing of decaffeinated
coffee.

Substances and Mixtures

Substances are composed of pure elements or chemically bonded elements, whereas mixtures are
composed of non-bonded substances.

Chemical Substances

In chemistry, a chemical substance is a form of matter that has constant chemical composition
and characteristic properties. It cannot be separated into components without breaking chemical
bonds. Chemical substances can be solids, liquids, gases, or plasma. Changes in temperature or
pressure can cause substances to shift between the different phases of matter.
An element is a chemical substance that is made up of a particular kind of atom and hence cannot
be broken down or transformed by a chemical reaction into a different element. All atoms of an
element have the same number of protons, though they may have different numbers of neutrons
and electrons.

A pure chemical compound is a chemical substance that is composed of a particular set of
molecules or ions that are chemically bonded. Two or more elements combined into one
substance through a chemical reaction, such as water, form a chemical compound. All
compounds are substances, but not all substances are compounds. A chemical compound can be
either atoms bonded together in molecules or crystals in which atoms, molecules or ions form a
crystalline lattice. Compounds made primarily of carbon and hydrogen atoms are called organic
compounds, and all others are called inorganic compounds. Compounds containing bonds
between carbon and a metal are called organometallic compounds.

Chemical substances are often called ‘pure’ to set them apart from mixtures. A common example
of a chemical substance is pure water; it always has the same properties and the same ratio of
hydrogen to oxygen whether it is isolated from a river or made in a laboratory. Other chemical
substances commonly encountered in pure form are diamond (carbon), gold, table salt (sodium
chloride), and refined sugar (sucrose). Simple or seemingly pure substances found in nature can
in fact be mixtures of chemical substances. For example, tap water may contain small amounts of
dissolved sodium chloride and compounds containing iron, calcium, and many other chemical
substances. Pure distilled water is a substance, but seawater, since it contains ions and complex
molecules, is a mixture.

Chemical Mixtures

A mixture is a material system made up of two or more different substances, which are mixed
but not combined chemically. A mixture refers to the physical combination of two or more
substances in which the identities of the individual substances are retained. Mixtures take the
form of alloys, solutions, suspensions, and colloids.

Heterogeneous Mixtures

A heterogeneous mixture is a mixture of two or more chemical substances (elements or
compounds), where the different components can be visually distinguished and easily separated
by physical means. Examples include:

 mixtures of sand and water
 mixtures of sand and iron filings
 a conglomerate rock
 water and oil
 a salad
 trail mix
 mixtures of gold powder and silver powder

Homogeneous Mixtures

A homogeneous mixture is a mixture of two or more chemical substances (elements or
compounds), where the different components cannot be visually distinguished. Often separating
the components of a homogeneous mixture is more challenging than separating the components
of a heterogeneous mixture.

Distinguishing between homogeneous and heterogeneous mixtures is a matter of the scale of
sampling. On a small enough scale, any mixture can be said to be heterogeneous, because a
sample could be as small as a single molecule. In practical terms, if the property of interest is the
same regardless of how much of the mixture is taken, the mixture is homogeneous.

A mixture’s physical properties, such as its melting point, may differ from those of its individual
components. Some mixtures can be separated into their components by physical (mechanical or
thermal) means.

Elements and Compounds

An element is a material that consists of a single type of atom, while a compound consists of two
or more types of atoms.

Elements

A chemical element is a pure substance that consists of one type of atom. Each atom has an
atomic number, which represents the number of protons that are in the nucleus of a single atom
of that element. The periodic table of elements is ordered by ascending atomic number.

The chemical elements are divided into the metals, the metalloids, and the non-metals. Metals,
typically found on the left side of the periodic table, are:

 often conductive to electricity
 malleable
 shiny
 sometimes magnetic.

Aluminum, iron, copper, gold, mercury and lead are metals.

In contrast, non-metals, found on the right side of the periodic table (to the right of the staircase),
are:

 typically not conductive
 not malleable
 dull (not shiny)
 not magnetic.

Examples of elemental non-metals include carbon and oxygen.


Metalloids have some characteristics of metals and some characteristics of non-metals. Silicon
and arsenic are metalloids.

As of November 2011, 118 elements have been identified (the most recently identified was
ununseptium, in 2010). Of these 118 known elements, only the first 98 are known to occur
naturally on Earth. The elements that do not occur naturally on Earth are the synthetic products
of man-made nuclear reactions. 80 of the 98 naturally-occurring elements are stable; the rest are
radioactive, which means they decay into lighter elements over timescales ranging from fractions
of a second to billions of years.

The periodic table: The periodic table shows 118 elements, including metals (blue), nonmetals
(red), and metalloids (green).

Hydrogen and helium are by far the most abundant elements in the universe. However, iron is the
most abundant element (by mass) in the composition of the Earth, and oxygen is the most
common element in the Earth’s crust.

Although all known chemical matter is composed of these elements, chemical matter itself
constitutes only about 15% of the matter in the universe. The remainder is dark matter, a
mysterious substance that is not composed of chemical elements. Dark matter contains no
protons, neutrons, or electrons.

Compounds

Pure samples of isolated elements are uncommon in nature. While the 98 naturally occurring
elements have all been identified in mineral samples from the Earth’s crust, only a small
minority of them can be found as recognizable, relatively pure minerals. Among the more
common of such “native elements” are copper, silver, gold, and sulfur. Carbon is also commonly
found in the form of coal, graphite, and diamonds. The noble gases (e.g., neon) and noble metals
(e.g., mercury) can also be found in their pure, non-bonded forms in nature. Still, most of these
elements are found in mixtures.

When two distinct elements are chemically combined—i.e., chemical bonds form between their
atoms—the result is called a chemical compound. Most elements on Earth bond with other
elements to form chemical compounds, such as sodium (Na) and chlorine (Cl), which combine
to form table salt (NaCl). Water is another example of a chemical compound. The two or more
component elements of a compound can be separated through chemical reactions.

Chemical compounds have a unique and defined structure, which consists of a fixed ratio of
atoms held together in a defined spatial arrangement by chemical bonds. Chemical compounds
can be:

 molecular compounds held together by covalent bonds
 salts held together by ionic bonds
 intermetallic compounds held together by metallic bonds
 complexes held together by coordinate covalent bonds.

Pure chemical elements are not considered chemical compounds, even if they consist of diatomic
or polyatomic molecules (molecules that contain only multiple atoms of a single element, such as
H2 or S8).

Distillation

Distillation is an important commercial process that is used in the purification of a large variety
of materials. However, before we begin a discussion of distillation, it would probably be
beneficial to define the terms that describe the process and related properties. Many of these are
terms that you are familiar with but the exact definitions may not be known to you. Let us begin
by describing the process by which a substance is transformed from the condensed phase to the
gas phase. For a liquid, this process is called vaporization and for a solid it is called sublimation.
Both processes require heat. This is why even on a hot day at the beach, if there is a strong
breeze blowing, it may feel cool or cold after you come out of the water. The wind facilitates the
evaporation process and you supply some of the heat that is required. All substances regardless
of whether they are liquids or solids are characterized by a vapor pressure. The vapor pressure of
a pure substance is the pressure exerted by the substance against the external pressure which is
usually atmospheric pressure. Vapor pressure is a measure of the tendency of a condensed
substance to escape the condensed phase. The larger the vapor pressure, the greater the tendency
to escape. When the vapor pressure of a liquid substance reaches the external pressure, the
substance is observed to boil. If the external pressure is atmospheric pressure, the temperature at
which a pure substance boils is called the normal boiling point. Solid substances are not
characterized by a phenomenon similar to boiling; they simply vaporize directly into the
atmosphere. Many of you may have noticed that even on a day in which the temperature stays
below freezing, the volume of snow and ice will appear to decrease, particularly from dark
pavements on the streets. This is a consequence of the process of sublimation. Both vaporization
and sublimation are processes that can be used to purify compounds. In order to understand how
to take advantage of these processes in purifying organic materials, we first need to learn how
pure compounds behave when they are vaporized or sublimed.

Let's begin by discussing the vapor pressure of a pure substance and how it varies with
temperature. Vapor pressure is an equilibrium property. If we return to that hot windy day at the
beach and consider the relative humidity in the air, the cooling effect of the wind would be most
effective if the relative humidity was low. If the air contained a great deal of water vapor, its
cooling effect would be greatly diminished and if the relative humidity was 100%, there would
be no cooling effect. Everyone in St. Louis has experienced how long it takes to dry off on a hot
humid day. At equilibrium, the process of vaporization is compensated by an equal amount of
condensation. Incidentally, if vaporization is an endothermic process (i.e. heat is absorbed),
condensation must be an exothermic process (i.e. heat is liberated). Now consider how vapor
pressure varies with temperature. Figure 1 illustrates that vapor pressure is a very sensitive
function of temperature. It does not increase linearly but in fact increases exponentially with
temperature. A useful "rule of thumb" is that the vapor pressure of a substance roughly doubles
for every 10 °C increase in temperature. If we follow the temperature dependence of vapor pressure for a
substance like water left out in an open container, we would find that the equilibrium vapor
pressure of water would increase until it reached 1 atmosphere or 101325 Pa (101.3 kPa, 760 mm
Hg). At this temperature and pressure, the water would begin to boil and would continue to do so
until all of the water distilled or boiled off. It is not possible to achieve a vapor pressure greater
than 1 atmosphere in a container left open to the atmosphere.
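
The "doubles every 10 °C" rule of thumb can be written as P(T) ≈ P_ref · 2^((T − T_ref)/10). Here is a small sketch of that estimate; the function name is our own, the reference value for water is the commonly tabulated one, and the rule is only a rough approximation, not an exact relation:

```python
def vapor_pressure_estimate(p_ref, t_ref, t):
    """Rule-of-thumb estimate: vapor pressure roughly doubles per 10 C."""
    return p_ref * 2 ** ((t - t_ref) / 10)

# Water is about 23.8 mm Hg at 25 C; estimate the vapor pressure at 45 C.
print(vapor_pressure_estimate(23.8, 25, 45))  # 95.2 mm Hg (doubled twice)
```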

Chromatography

Chromatography, a group of methods for separating very small quantities of complex mixtures,
with very high resolution, is one of the most important techniques in environmental analysis. The
ability of the modern analytical chemist to detect specific compounds at ng/g or lower levels in
such complex matrices as natural waters or animal tissues is due in large part to the development
of chromatographic methods.

The science of chromatography began early in the twentieth century, with the Russian botanist
Mikhail Tswett, who used a column packed with calcium carbonate to separate plant pigments.
The method was developed rapidly in the years after World War II, and began to be applied to
environmental problems with the invention of the electron capture detector (ECD) in 1960 by
James Lovelock. This detector, with its specificity and very high sensitivity toward halogenated
organic compounds, was just what was needed to determine traces of pesticides in soils, food and
water and halocarbon gases in the atmosphere. This happened at exactly the time when the effect
of anthropogenic chemicals on many environmental systems was becoming an issue of public
concern. Within a year, it was being applied to pesticide analysis. The pernicious effects of
long-lived, bioaccumulating pesticides, such as DDT, would have been very difficult to detect without
the use of the ECD. The effect of this information on public policy has been far-reaching.

The basis of all types of chromatography is the partition of the sample compounds between a
stationary phase and a mobile phase which flows over or through the stationary phase. Different
combinations of gaseous or liquid phases give rise to the types of chromatography used in
analysis, namely gas chromatography (GC), liquid chromatography (LC), thin layer
chromatography (TLC), and supercritical fluid chromatography (SFC).

Chromatography has increased the utility of several types of spectroscopy, by delivering separate
components of a complex sample, one at a time, to the spectrometer. This combination of the
separating power of chromatography with the identification and quantitation of spectroscopy has
been most important in environmental analysis. It has enabled analysts to cope with
tremendously complex and extremely dilute samples.
PRINCIPLES OF CHROMATOGRAPHY

All chromatographic systems rely on the fact that a substance placed in contact with two
immiscible phases, one moving and one stationary, will equilibrate between them. A
reproducible fraction will partition into each phase, depending on the relative affinity of the
substance for each phase. A substance which has affinity for the moving or mobile phase will be
moved rapidly through the system. A material which has a stronger affinity for the stationary
phase, on the other hand, will spend more time immobilized in that phase, and will take a longer
time to pass through the system. Therefore, it will be separated from the first substance. By
definition, chromatography is a separation technique in which a sample is equilibrated between a
mobile and a stationary phase.

Gas chromatography employs an inert gas as the mobile phase, and either a solid adsorbent or a
nonvolatile liquid coated on a solid support as the stationary phase. The solid or coated support is
packed into a tube, with the gas flowing through it. Separation depends on the relative partial
pressures of the sample components above the stationary phase. Liquid chromatography uses
similar packed tubular columns and usually a pump to force a liquid mobile phase through the
column. Supercritical fluid chromatography occupies a middle ground between gas and liquid
chromatography. The mobile phase is a supercritical fluid, i.e., a fluid above its critical
temperature and pressure. This allows the use of GC type detectors, since the mobile phase has
gas-like properties, but also allows continuous variation in such mobile phase properties as
viscosity and density, by changing temperature and pressure. Finally, chromatography may be
done on a planar surface. The sample is transported over a solid surface such as cellulose or
silica gel, coated on a plate. The sample components are moved over the surface by the mobile
phase which is usually allowed to travel through the adsorbent layer by capillary action.

The reason that all molecules of a certain type tend to exit the system at the same time is that
they are always re-equilibrating between the phases. Over a large number of such equilibrations,
the molecules spend, on average, the same amount of time in each phase. Let us look at one point
in the chromatographic column. When the analyte achieves or approaches equilibrium, the
mobile phase moves on, leaving the stationary phase with too much of the analyte and the mobile
phase with too little. To attempt to reestablish equilibrium, more sample dissolves in the mobile
phase and moves along. Figure 0.1 shows a mixture of three substances as they move through a
chromatographic column.

The movement of analytes in the column can be described mathematically. The basis of
chromatography is the equilibrium of each analyte between the mobile and stationary phase. This
can be expressed by a simple equilibrium equation, where Kx is the partition coefficient:

Kx = [C]s / [C]m (6.1)

that is: the concentration in the stationary phase ([Cs]) is directly related to the concentration in
the mobile phase ([Cm]), at least when the concentrations are low. Chromatographic separations
are best done with a small amount of analyte, which keeps either phase from becoming saturated
with analyte, so that the concentrations in the two phases are directly proportional. Overloading
the column with sample causes one of the phases to become saturated with sample, leading to a
loss of column efficiency, and poorly shaped peak profiles.

The quantities in the equilibrium expression for Kx, [C]s and [C]m are not easy to measure. We
can define a new constant, the capacity factor, k':

k'x = (moles of X in stationary phase) / (moles of X in mobile phase) (6.2)

Since the number of moles can be expressed as the concentration multiplied by the volume,
Equations 6.1 and 6.2 can be combined and reduced to:

k'x = Kx (Vs / Vm) (6.3)

All sample molecules spend the same amount of time in the mobile phase. If they were
completely unretained by the stationary phase they would exit the column in the time it takes for
one volume of mobile phase to pass through the column. This is equal to the void volume of the
column. Molecules pass through the column in the time equal to the passage of one void volume,
Vm, plus the time spent in the stationary phase, expressed by k'x. Therefore, the volume of eluent
which will pass through the column before the sample elutes (the retention volume, Vr) can be
expressed as:

Vr = Vm (1 + k'x) (6.4)

The retention volume, Vr, is related to the column flow Fc, and the retention time, tr. Likewise,
the volume of the mobile phase, Vm, is related to the flow and the time the void volume takes to
pass through the column, to.

Vr = tr Fc (6.5)

Vm = to Fc (6.6)

Substituting these into equation 6.4 and rearranging gives:

k'x = (tr - to) / to (6.7)

Values for all these variables can be obtained from the experimental chromatogram, as shown
in Figure 0.2. The term (tr - to) is called the adjusted retention time and is often expressed as t'r,
so k'x can also be expressed as t'r / to. This is then the basis for separation of any two analytes.
The separation is directly related to the difference in the k' values for the two substances. If the
k' for the sample components is very small, there is so little retention of the compounds that
separation is not possible. If the difference between the k' factors for two compounds is small,
separating them will be difficult. The selectivity, α, of a column for a particular separation,
say of substances A and B, is expressed as a ratio of their retention factors (or adjusted
retention times):

α = k'B / k'A = t'r,B / t'r,A (6.8)
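
To make these retention relationships concrete, here is a small Python sketch applying equation 6.7 and the selectivity ratio; the retention times below are invented for illustration:

```python
def capacity_factor(t_r, t_0):
    """k' = (tr - to) / to, as in equation 6.7."""
    return (t_r - t_0) / t_0

def selectivity(t_r_a, t_r_b, t_0):
    """alpha = k'B / k'A, with B the later-eluting compound."""
    return capacity_factor(t_r_b, t_0) / capacity_factor(t_r_a, t_0)

# Hypothetical chromatogram: void time 1.2 min, peaks at 4.8 and 5.6 min.
t0 = 1.2
print(capacity_factor(4.8, t0))   # 3.0
print(capacity_factor(5.6, t0))   # 3.67
print(selectivity(4.8, 5.6, t0))  # ~1.22
```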

One should notice that the sample bands tend to spread out as they move through the column.
The narrower the initial band of sample, and the less the individual compounds are spread out as
they traverse the column, the more efficient the column is. An efficient column can separate a
greater number of individual compounds in a given time.

Filtration

Filtration is a process used to separate solids from liquids or gases using a filter medium that
allows the fluid to pass, but not the solid. The term "filtration" applies whether the filter is
mechanical, biological, or physical. The fluid that passes through the filter is called the filtrate.
The filter medium may be a surface filter, which is a solid that traps solid particles, or a depth
filter, which is a bed of material that traps the solid.

Filtration is typically an imperfect process. Some fluid remains on the feed side of the filter or
embedded in the filter media and some small solid particulates find their way through the filter.
As a chemistry and engineering technique, there is always some lost product, whether it's the
liquid or solid being collected.

Examples of Filtration

While filtration is an important separation technique in a laboratory, it's also common in
everyday life.

 Brewing coffee involves passing hot water through ground coffee and a filter. The liquid coffee is
the filtrate. Steeping tea is much the same, whether you use a tea bag (paper filter) or tea ball
(usually a metal filter).
 The kidneys are an example of a biological filter. Blood is filtered by the glomerulus. Essential
molecules are reabsorbed back into the blood.
 Air conditioners and many vacuum cleaners use HEPA filters to remove dust and pollen from the
air.

 Many aquariums use filters that contain fibers that capture particulates.
 Belt filters recover precious metals during mining.
 Water in an aquifer is relatively pure because it has been filtered through sand and permeable
rock in the ground.

Filtration Methods

There are different types of filtration. Which method is used depends largely on whether the
solid is a particulate (suspended) or dissolved in the fluid.
General Filtration: The most basic form of filtration is using gravity to filter a mixture. The
mixture is poured from above onto a filter medium (e.g., filter paper) and gravity pulls the liquid
down. The solid is left on the filter, while the liquid flows below it.

Vacuum Filtration: A Büchner flask and hose are used to pull a vacuum to suck the fluid through
the filter (usually with the aid of gravity). This greatly speeds the separation and can be used to
dry the solid. A related technique uses a pump to form a pressure difference on both sides of the
filter. Pump filters do not need to be vertical because gravity is not the source of the pressure
difference on the sides of the filter.

Cold Filtration: Cold filtration is used to quickly cool a solution, prompting the formation of
small crystals. This is a method used when the solid is initially dissolved. A common method is
to place the container with the solution in an ice bath prior to filtration.

Hot Filtration: In hot filtration, the solution, filter, and funnel are heated to minimize crystal
formation during filtration. Stemless funnels are useful because there is less surface area for
crystal growth. This method is used when crystals would clog the funnel or to prevent
crystallization of a second component in a mixture.

Sometimes filter aids are used to improve flow through a filter. Examples of filter aids are silica,
diatomaceous earth, perlite, and cellulose. Filter aids may be placed on the filter prior to filtration
or mixed with the liquid. The aids can help prevent clogging of the filter and can increase
porosity of the "cake" or feed into the filter.

Filtration Versus Sieving

A related separation technique is sieving. Sieving refers to use of a single mesh or perforated
layer to retain large particles, while allowing the passage of smaller ones. In filtration, in
contrast, the filter is a lattice or has multiple layers. Fluids follow channels in the medium to pass
through a filter.

Alternatives to Filtration

In some situations, there are better separation methods than filtration. For example, for very
small samples where it's important to collect the filtrate, the filter medium may soak up too much
of the fluid.

In other cases, too much of the solid becomes trapped in the filter medium. Two other processes
that can be used to separate solids from fluids are decantation and centrifugation. Centrifugation
involves spinning a sample, forcing the heavier solid to the bottom of a container. Decantation
can be used following centrifugation or on its own. In decantation, the fluid is siphoned or
poured off of the solid after it has fallen out of solution.

Atom

An atom is the smallest constituent unit of ordinary matter that has the properties of a chemical
element. Every solid, liquid, gas, and plasma is composed of neutral or ionized atoms. Atoms are
extremely small; typical sizes are around 100 picometers (a ten-billionth of a meter, in the short
scale).

Atoms are small enough that attempting to predict their behavior using classical physics – as if
they were billiard balls, for example – gives noticeably incorrect predictions due to quantum
effects. Through the development of physics, atomic models have incorporated quantum
principles to better explain and predict the behavior.

Every atom is composed of a nucleus and one or more electrons bound to the nucleus. The
nucleus is made of one or more protons and typically a similar number of neutrons. Protons and
neutrons are called nucleons. More than 99.94% of an atom's mass is in the nucleus. The protons
have a positive electric charge, the electrons have a negative electric charge, and the neutrons
have no electric charge. If the number of protons and electrons are equal, that atom is electrically
neutral. If an atom has more or fewer electrons than protons, then it has an overall negative or
positive charge, respectively, and it is called an ion.

The electrons of an atom are attracted to the protons in the atomic nucleus by the electromagnetic
force. The protons and neutrons in the nucleus are attracted to each other by a different force, the
nuclear force, which is usually stronger than the electromagnetic force repelling the positively
charged protons from one another. Under certain circumstances, the repelling electromagnetic
force becomes stronger than the nuclear force, and nucleons can be ejected from the nucleus,
leaving behind a different element: nuclear decay resulting in nuclear transmutation.

The number of protons in the nucleus defines to what chemical element the atom belongs: for
example, all copper atoms contain 29 protons. The number of neutrons defines the isotope of the
element. The number of electrons influences the magnetic properties of an atom. Atoms can
attach to one or more other atoms by chemical bonds to form chemical compounds such as
molecules. The ability of atoms to associate and dissociate is responsible for most of the physical
changes observed in nature and is the subject of the discipline of chemistry.

Molecules and Ions

Although atoms are the smallest unique unit of a particular element, in nature only the noble
gases can be found as isolated atoms. Most matter is in the form of ions, or compounds.

Molecules and chemical formulas

A molecule is composed of two or more chemically bonded atoms. The atoms may be of the
same element, or they may be different.
Many elements are found in nature in molecular form - two or more atoms (of the same type of
element) are bonded together. Oxygen, for example, is most commonly found in its molecular
form "O2" (two oxygen atoms chemically bonded together).

Oxygen can also exist in another molecular form where three atoms are chemically bonded. O3 is
also known as ozone. Although O2 and O3 are both molecular forms of oxygen, they are quite
different in their chemical and physical properties. There are seven elements which commonly
occur as diatomic molecules: H2, N2, O2, F2, Cl2, Br2, and I2.

An example of a commonly occurring compound that is composed of two different types of
atoms is pure water, or "H2O". The chemical formula for water illustrates the method of
describing such compounds in atomic terms: there are two atoms of hydrogen and one atom of
oxygen (the "1" subscript is omitted) in the compound known as "water". There is another
compound of hydrogen and oxygen with the chemical formula H2O2, also known as hydrogen
peroxide. Again, although both compounds are composed of the same types of atoms, they are
chemically quite different: hydrogen peroxide is quite reactive and has been used as a rocket fuel
(it powered Evel Knievel part way over the Snake River canyon).

Most molecular compounds (i.e. involving chemical bonds) contain only non-metallic elements.

Molecular, Empirical, and Structural Formulas

Empirical vs. Molecular formulas

 Molecular formulas refer to the actual number of the different atoms which comprise a
single molecule of a compound.
 Empirical formulas refer to the smallest whole number ratios of atoms in a particular
compound.

Compound             Molecular Formula    Empirical Formula
Water                H2O                  H2O
Hydrogen Peroxide    H2O2                 HO
Ethylene             C2H4                 CH2
Ethane               C2H6                 CH3

Molecular formulas provide more information; however, sometimes a substance is actually a
collection of molecules with different sizes but the same empirical formula. For example, carbon
is commonly found as a collection of three dimensional structures (carbon chemically bonded to
carbon). In this form, it is most easily represented simply by the empirical formula "C" (the
elemental name).
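
Reducing a molecular formula to its empirical formula amounts to dividing the atom counts by their greatest common divisor; here is a minimal sketch (the function empirical and its dict-based input are our own illustrative choices):

```python
from functools import reduce
from math import gcd

def empirical(counts):
    """Divide atom counts by their greatest common divisor.

    counts maps an element symbol to its number of atoms per molecule.
    """
    divisor = reduce(gcd, counts.values())
    return {element: n // divisor for element, n in counts.items()}

print(empirical({"H": 2, "O": 1}))  # {'H': 2, 'O': 1} -> H2O (water)
print(empirical({"H": 2, "O": 2}))  # {'H': 1, 'O': 1} -> HO (hydrogen peroxide)
print(empirical({"C": 2, "H": 6}))  # {'C': 1, 'H': 3} -> CH3 (ethane)
```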

Structural formulas

Sometimes the molecular formulas are drawn out as structural formulas to give some idea of the
actual chemical bonds which unite the atoms.
Structural formulas give an idea about the connections between atoms, but they don't necessarily
give information about the actual geometry of such bonds.

Ions

The nucleus of an atom (containing protons and neutrons) remains unchanged after ordinary
chemical reactions, but atoms can readily gain or lose electrons.

If electrons are lost or gained by a neutral atom, then the result is that a charged particle is
formed - called an ion.

For example, Sodium (Na) has 11 protons and 11 electrons. However, it can easily lose 1
electron. The resulting cation has 11 protons and 10 electrons, for an overall net charge of 1+
(the units are electron charge). The ionic state of an atom or compound is represented by a
superscript to the right of the chemical formula: Na+, Mg2+ (note that in the case of 1+ or 1-, the
'1' is omitted). In contrast to the Na atom, the chlorine atom (Cl) easily gains 1 electron to yield
the chloride ion Cl- (i.e. 17 protons and 18 electrons).

In general, metal atoms tend to lose electrons, and nonmetal atoms tend to gain electrons.

Na+ and Cl- are simple ions, in contrast to polyatomic ions such as NO3- (nitrate ion) and SO42-
(sulfate ion). These are groups of chemically bonded atoms that carry a net positive
or negative charge.
The chemical properties of an ion are greatly different from those of the atom from which it was
derived.

Predicting ionic charges

Many atoms gain or lose electrons such that they end up with the same number of electrons as
the noble gas closest to them in the periodic table.

The noble gases are generally chemically non-reactive; they appear to have a stable
arrangement of electrons.

Other elements must gain or lose electrons to end up with the same arrangement of electrons as
the nearest noble gas, thereby achieving the same kind of electron stability.

Example: Nitrogen

Nitrogen has an atomic number of 7; the neutral Nitrogen atom has 7 protons and 7 electrons. If
Nitrogen gained three electrons it would have 10 electrons, like the Noble gas Neon (10 protons,
10 electrons). However, unlike neon, the resulting nitrogen ion would have a net charge of 3-,
written N3- (7 protons, 10 electrons).

The location of the elements on the Periodic table can help in predicting the expected charge of
ionic forms of the elements.

This is mainly true for the elements on either side of the chart.

Ionic compounds

Ions form when one or more electrons transfer from one neutral atom to another. For example,
when elemental sodium is allowed to react with elemental chlorine an electron transfers from a
neutral sodium to a neutral chlorine. The result is a sodium ion (Na+) and a chlorine ion, chloride
(Cl-):
The oppositely charged ions attract one another and bind together to form NaCl (sodium
chloride), an ionic compound.

An ionic compound contains positively and negatively charged ions

It should be pointed out that the Na+ and Cl- ions are not bonded into discrete molecules,
whereas the atoms in molecular compounds, such as H2O, are chemically bonded.

Ionic compounds are generally combinations of metals and non-metals.

Molecular compounds are generally combinations of non-metals only.

Pure ionic compounds typically have their atoms in an organized three dimensional arrangement
(a crystal). Therefore, we cannot describe them using molecular formulas. We can describe them
using empirical formulas.
If we know the charges of the ions comprising an ionic compound, then we can determine the
empirical formula. The key is knowing that ionic compounds are always electrically neutral
overall.

Therefore, the ratio of ions in an ionic compound is such that the overall charge is
neutral.

In the NaCl example, there will be one positively charged Na+ ion for each negatively charged
Cl- ion.

What about the ionic compound involving Barium ion (Ba2+) and the Chlorine ion (Cl-)?

1 (Ba2+) + 2 (Cl-) = neutral charge

Resulting empirical formula: BaCl2
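
The neutrality requirement can be computed directly: the smallest neutral combination uses the least common multiple of the two charge magnitudes. A minimal sketch (the function name and inputs are our own illustrative choices):

```python
from math import gcd

def ionic_ratio(cation_charge, anion_charge):
    """Smallest whole-number ratio of ions giving a neutral compound.

    Charges are passed as positive magnitudes, e.g. 2 for Ba2+ and 1 for Cl-.
    """
    lcm = cation_charge * anion_charge // gcd(cation_charge, anion_charge)
    return lcm // cation_charge, lcm // anion_charge

print(ionic_ratio(1, 1))  # (1, 1) -> NaCl
print(ionic_ratio(2, 1))  # (1, 2) -> BaCl2
```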


History of chemistry
The history of chemistry represents a time span from ancient history to the present. By 1000 BC,
civilizations used technologies that would eventually form the basis of the various branches of
chemistry. Examples include extracting metals from ores, making pottery and glazes, fermenting
beer and wine, extracting chemicals from plants for medicine and perfume, rendering fat into
soap, making glass, and making alloys like bronze.

The protoscience of chemistry, alchemy, was unsuccessful in explaining the nature of matter and
its transformations. However, by performing experiments and recording the results, alchemists
set the stage for modern chemistry. The distinction began to emerge when a clear differentiation
was made between chemistry and alchemy by Robert Boyle in his work The Sceptical Chymist
(1661). While both alchemy and chemistry are concerned with matter and its transformations,
chemists are seen as applying scientific method to their work.

Chemistry is considered to have become an established science with the work of Antoine
Lavoisier, who developed a law of conservation of mass that demanded careful measurement and
quantitative observations of chemical phenomena. The history of chemistry is intertwined with
the history of thermodynamics, especially through the work of Willard Gibbs.

Chemical law
Chemical laws are those laws of nature relevant to chemistry. The most fundamental concept in
chemistry is the law of conservation of mass, which states that there is no detectable change in
the quantity of matter during an ordinary chemical reaction. Modern physics shows that it is
actually energy that is conserved, and that energy and mass are related; a concept which becomes
important in nuclear chemistry. Conservation of energy leads to the important concepts of
equilibrium, thermodynamics, and kinetics.

Additional laws of chemistry elaborate on the law of conservation of mass. Joseph Proust's law
of definite composition says that pure chemicals are composed of elements in a definite
formulation; we now know that the structural arrangement of these elements is also important.

Dalton's law of multiple proportions says that these chemicals will present themselves in
proportions that are small whole numbers (i.e. 1:2 O:H in water); although in many systems
(notably biomacromolecules and minerals) the ratios tend to require large numbers, and are
frequently represented as a fraction. Such compounds are known as non-stoichiometric
compounds.

More modern laws of chemistry define the relationship between energy and transformations.
 In equilibrium, molecules exist in mixture defined by the transformations possible on the
timescale of the equilibrium, and are in a ratio defined by the intrinsic energy of the
molecules—the lower the intrinsic energy, the more abundant the molecule.
 Transforming one structure to another requires the input of energy to cross an energy
barrier; this can come from the intrinsic energy of the molecules themselves, or from an
external source which will generally accelerate transformations. The higher the energy
barrier, the slower the transformation occurs.
 There is a hypothetical intermediate, or transition structure, that corresponds to the
structure at the top of the energy barrier. The Hammond-Leffler Postulate states that this
structure looks most similar to the product or starting material which has intrinsic energy
closest to that of the energy barrier. Stabilizing this hypothetical intermediate through
chemical interaction is one way to achieve catalysis.
 All chemical processes are reversible (law of microscopic reversibility), although some
processes have such an energy bias that they are essentially irreversible.

Dalton's Atomic Theory

Reintroducing the Atom

Around 1800, the English chemist John Dalton brought back Democritus’ ancient idea of the
atom. You can see a picture of Dalton below. Dalton grew up in a working-class family. As an
adult, he made a living by teaching and just did research in his spare time. Nonetheless, from his
research he developed one of the most important theories in all of science. Based on his research
results, he was able to demonstrate that atoms actually do exist, something that Democritus had
only guessed.

Dalton’s Experiments

Dalton did many experiments that provided evidence for the existence of atoms. For example:
 He investigated pressure and other properties of gases, from which he inferred that gases must
consist of tiny, individual particles that are in constant, random motion.
 He researched the properties of compounds, which are substances that consist of more than
one element. He showed that a given compound is always comprised of the same elements in
the same whole-number ratio and that different compounds consist of different elements or
ratios. This can happen, Dalton reasoned, only if elements are made of separate, discrete
particles that cannot be subdivided.

Atomic Theory

From his research, Dalton developed a theory about atoms. Dalton’s atomic theory consists of
three basic ideas:

 All substances are made of atoms. Atoms are the smallest particles of matter. They cannot be
divided into smaller particles, created, or destroyed.
 All atoms of the same element are alike and have the same mass. Atoms of different elements
are different and have different masses.
 Atoms join together to form compounds, and a given compound always consists of the same
kinds of atoms in the same proportions.

Dalton’s atomic theory was accepted by many scientists almost immediately. Most of it is still
accepted today. However, scientists now know that atoms are not the smallest particles of matter.
Atoms consist of several types of smaller particles, including protons, neutrons, and electrons. 

The Billiard Ball Model

Because Dalton thought atoms were the smallest particles of matter, he envisioned them as solid,
hard spheres, like billiard (pool) balls, so he used wooden balls to model them. Three of his
model atoms are pictured in the Figure below. Do you see the holes in the balls? Dalton added
these so the model atoms could be joined together with hooks and used to model compounds.

Q: When scientists discovered smaller particles inside the atom, they realized that Dalton’s
atomic models were too simple. How do modern atomic models differ from Dalton’s models?
A: Modern atomic models, like the one pictured at the top of this article, usually represent
subatomic particles, including electrons, protons, and neutrons.

Summary

 Around 1800, the English chemist John Dalton reintroduced the idea of the atom, which was
first introduced by the ancient Greek philosopher named Democritus.
 Dalton did many experiments with gases and compounds that provided evidence for the
existence of atoms.
 Dalton developed an atomic theory that is still mostly accepted today. It is one of the most
important theories in all of science.
 Dalton thought individual atoms were solid, hard spheres, so he modeled them with wooden
balls.

Review

1. Who was John Dalton?
2. What evidence did Dalton use to argue for the existence of atoms?
3. State Dalton’s atomic theory.
4. Explain how Dalton modeled atoms and compounds.

Early Experiments to Characterize the Atom

In 1897, the British physicist J. J. Thomson (1856–1940) proved that atoms were not the most
basic form of matter. He demonstrated that cathode rays could be deflected, or bent, by magnetic
or electric fields, which indicated that cathode rays consist of charged particles (Figure
2.2.2). More important, by measuring the extent of the deflection of the cathode rays in magnetic or
electric fields of various strengths, Thomson was able to calculate the mass-to-charge ratio of the
particles. These particles were emitted by the negatively charged cathode and repelled by the
negative terminal of an electric field. Because like charges repel each other and opposite charges
attract, Thomson concluded that the particles had a net negative charge; these particles are now
called electrons. Most relevant to the field of chemistry, Thomson found that the mass-to-charge
ratio of cathode rays is independent of the nature of the metal electrodes or the gas, which
suggested that electrons were fundamental components of all atoms.
Figure 2.2.2: Deflection of Cathode Rays by an Electric Field. As the cathode rays travel toward the right,
they are deflected toward the positive electrode (+), demonstrating that they are negatively
charged.

Subsequently, the American scientist Robert Millikan (1868–1953) carried out a series of
experiments using electrically charged oil droplets, which allowed him to calculate the charge on
a single electron. With this information and Thomson’s mass-to-charge ratio, Millikan
determined the mass of an electron:

$$\frac{\text{mass}}{\text{charge}} \times \text{charge} = \text{mass} \tag{2.2.1}$$
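
To make Equation 2.2.1 concrete, here is a minimal Python sketch that multiplies the electron’s mass-to-charge ratio by the charge on a single electron. The numbers are modern reference values, not Thomson’s or Millikan’s original measurements, so treat this as an illustration of the arithmetic rather than a historical reconstruction.

```python
# Equation 2.2.1 in numbers: (mass / charge) x charge = mass.
mass_to_charge = 1 / 1.75882001076e11  # kg/C, modern value of m/e for the electron
charge = 1.602176634e-19               # C, charge on one electron (what Millikan measured)

electron_mass = mass_to_charge * charge
print(f"electron mass ~ {electron_mass:.3e} kg")  # ~9.109e-31 kg
```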

It was at this point that two separate lines of investigation began to converge, both aimed at
determining how and why matter emits energy. The video below shows how J. J. Thomson used
such a tube to measure the charge-to-mass ratio of an electron.

Radioactivity

The second line of investigation began in 1896, when the French physicist Henri Becquerel
(1852–1908) discovered that certain minerals, such as uranium salts, emitted a new form of
energy. Becquerel’s work was greatly extended by Marie Curie (1867–1934) and her husband,
Pierre (1854–1906); all three shared the Nobel Prize in Physics in 1903. Marie Curie coined the
term radioactivity (from the Latin radius, meaning “ray”) to describe the emission of energy rays
by matter. She found that one particular uranium ore, pitchblende, was substantially more
radioactive than most, which suggested that it contained one or more highly radioactive
impurities. Starting with several tons of pitchblende, the Curies isolated two new radioactive
elements after months of work: polonium, which was named for Marie’s native Poland, and
radium, which was named for its intense radioactivity. Pierre Curie carried a vial of radium in his
coat pocket to demonstrate its greenish glow, a habit that caused him to become ill from
radiation poisoning well before he was run over by a horse-drawn wagon and killed instantly in
1906. Marie Curie, in turn, died of what was almost certainly radiation poisoning.
Figure 2.2.3: Radium bromide illuminated by its own radioactive glow. This 1922 photo was taken in the
dark in the Curie laboratory.

Building on the Curies’ work, the British physicist Ernest Rutherford (1871–1937) performed
decisive experiments that led to the modern view of the structure of the atom. While working in
Thomson’s laboratory shortly after Thomson discovered the electron, Rutherford showed that
compounds of uranium and other elements emitted at least two distinct types of radiation. One
was readily absorbed by matter and seemed to consist of particles that had a positive charge and
were massive compared to electrons. Because it was the first kind of radiation to be discovered,
Rutherford named this type of radiation α particles. Rutherford also showed that the particles in the
second type of radiation, β particles, had the same charge and mass-to-charge ratio as Thomson’s
electrons; they are now known to be high-speed electrons. A third type of radiation, γ rays, was
discovered somewhat later and found to be similar to a lower-energy form of radiation called x-
rays, now used to produce images of bones and teeth.
Figure 2.2.4: Effect of an Electric Field on α Particles, β Particles, and γ Rays. A negative electrode deflects
negatively charged β particles, whereas a positive electrode deflects positively charged α
particles. Uncharged γ rays are unaffected by an electric field. (Relative deflections are not
shown to scale.)

These three kinds of radiation—α particles, β particles, and γ rays—are readily distinguished by
the way they are deflected by an electric field and by the degree to which they penetrate matter.
As Figure 2.2.4 illustrates, α particles and β particles are deflected in opposite directions; α particles are
deflected to a much lesser extent because of their higher mass-to-charge ratio. In contrast, γ rays have
no charge, so they are not deflected by electric or magnetic fields. Figure 2.2.5 shows that α particles
have the least penetrating power and are stopped by a sheet of paper,
whereas β particles can pass through thin sheets of metal but are absorbed by lead foil or even
thick glass. In contrast, γ-rays can readily penetrate matter; thick blocks of lead or concrete are
needed to stop them.
Figure 2.2.5: Relative Penetrating Power of the Three Types of Radiation. A sheet of paper stops
comparatively massive α particles, whereas β particles easily penetrate paper but are stopped by
a thin piece of lead foil. Uncharged γ rays penetrate the paper and lead foil; a much thicker
piece of lead or concrete is needed to absorb them.

The Atomic Model

Once scientists concluded that all matter contains negatively charged electrons, it became clear
that atoms, which are electrically neutral, must also contain positive charges to balance the
negative ones. Thomson proposed that the electrons were embedded in a uniform sphere that
contained both the positive charge and most of the mass of the atom, much like raisins in plum
pudding or chocolate chips in a cookie (Figure 2.2.6).

Figure 2.2.6: Thomson’s Plum Pudding or Chocolate Chip Cookie Model of the Atom. In this model, the
electrons are embedded in a uniform sphere of positive charge.

In a single famous experiment, however, Rutherford showed unambiguously that Thomson’s
model of the atom was incorrect. Rutherford aimed a stream of α particles at a very thin gold foil
target (Figure 2.2.7a) and examined how the α particles were scattered by the foil. Gold was chosen because it could be
easily hammered into extremely thin sheets, minimizing the number of atoms in the target. If Thomson’s
model of the atom were correct, the positively-charged α particles should crash through the uniformly
distributed mass of the gold target like cannonballs through the side of a wooden house. They might be
moving a little slower when they emerged, but they should pass essentially straight through the target
(Figure 2.2.7b). To Rutherford’s amazement, a small fraction of the α particles were deflected at large
angles, and some were reflected directly back at the source (Figure 2.2.7c). According to Rutherford,
“It was almost as incredible as if you fired a 15-inch shell at a piece
of tissue paper and it came back and hit you.”

Figure 2.2.7: A Summary of Rutherford’s Experiments. (a) A representation of the apparatus Rutherford
used to detect deflections in a stream of α particles aimed at a thin gold foil target. The particles
were produced by a sample of radium. (b) If Thomson’s model of the atom were correct, the α
particles should have passed straight through the gold foil. (c) However, a small number of α
particles were deflected in various directions, including right back at the source. This could be
true only if the positive charge were much more massive than the α particle. It suggested that the
mass of the gold atom is concentrated in a very small region of space, which Rutherford called the
nucleus.
Rutherford’s results were not consistent with a model in which the mass and positive charge are
distributed uniformly throughout the volume of an atom. Instead, they strongly suggested that
both the mass and positive charge are concentrated in a tiny fraction of the volume of an atom,
which Rutherford called the nucleus. It made sense that a small fraction of the α particles
collided with the dense, positively charged nuclei in either a glancing fashion, resulting in large
deflections, or almost head-on, causing them to be reflected straight back at the source.

Although Rutherford could not explain why repulsions between the positive charges in nuclei
that contained more than one positive charge did not cause the nucleus to disintegrate, he
reasoned that repulsions between negatively charged electrons would cause the electrons to be
uniformly distributed throughout the atom’s volume. Today it is known that strong nuclear forces,
which are much stronger than electrostatic interactions, hold the protons and the neutrons
together in the nucleus. For this and other insights, Rutherford was awarded the Nobel Prize in
Chemistry in 1908. Unfortunately, Rutherford would have preferred to receive the Nobel Prize in
Physics because he considered physics superior to chemistry. In his opinion, “All science is
either physics or stamp collecting.”

Figure 2.2.8: A Summary of the Historical Development of Models of the Components and Structure of the
Atom. The dates in parentheses are the years in which the key experiments were performed.

The historical development of the different models of the atom’s structure is summarized
in Figure 2.2.8. Rutherford established that the nucleus of the hydrogen atom was a positively charged particle,
for which he coined the name proton in 1920. He also suggested that the nuclei of elements other
than hydrogen must contain electrically neutral particles with approximately the same mass as
the proton. The neutron, however, was not discovered until 1932, when James Chadwick (1891–
1974, a student of Rutherford; Nobel Prize in Physics, 1935) discovered it. As a result of
Rutherford’s work, it became clear that an α particle contains two protons and two neutrons, and is
therefore the nucleus of a helium atom.

Figure 2.2.9: The Evolution of Atomic Theory, as Illustrated by Models of the Oxygen Atom. Bohr’s model
and the current model are described in  Chapter 6, "The Structure of Atoms."

Rutherford’s model of the atom is essentially the same as the modern model, except that it is now
known that electrons are not uniformly distributed throughout an atom’s volume. Instead, they
are distributed according to a set of principles described by quantum mechanics. Figure 2.2.9
shows how the model of the atom has evolved over time from the indivisible unit of Dalton to
the modern view taught today.

Summary

Atoms are the ultimate building blocks of all matter. The modern atomic theory establishes the
concepts of atoms and how they compose matter. Atoms, the smallest particles of an element that
exhibit the properties of that element, consist of negatively charged electrons around a
central nucleus composed of more massive positively charged protons and electrically
neutral neutrons. Radioactivity is the emission of energetic particles and rays (radiation) by some
substances. Three important kinds of radiation are α particles (helium nuclei), β particles
(electrons traveling at high speed), and γ rays (similar to x-rays but higher in energy).

Electron

The electron is a subatomic particle, symbol e− or β−, whose electric charge is negative one
elementary charge.[8] Electrons belong to the first
generation of the lepton particle family,[9] and are generally thought to be elementary particles
because they have no known components or substructure.[1] The electron has a mass that is
approximately 1/1836 that of the proton.[10] Quantum mechanical properties of the electron
include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the
reduced Planck constant, ħ. As it is a fermion, no two electrons can occupy the same quantum
state, in accordance with the Pauli exclusion principle.[9] Like all elementary particles, electrons
exhibit properties of both particles and waves: they can collide with other particles and can be
diffracted like light. The wave properties of electrons are easier to observe with experiments than
those of other particles like neutrons and protons because electrons have a lower mass and hence
a longer de Broglie wavelength for a given energy.
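
The mass dependence of the de Broglie wavelength is easy to quantify. The sketch below assumes the nonrelativistic relation λ = h/√(2mE) and an arbitrarily chosen kinetic energy of 10 eV; at that energy the electron’s wavelength comes out roughly 43 times longer than the proton’s, which is why electron diffraction is the easier experiment.

```python
import math

h = 6.62607015e-34       # Planck constant, J*s
eV = 1.602176634e-19     # joules per electronvolt
m_e = 9.1093837015e-31   # electron mass, kg
m_p = 1.67262192369e-27  # proton mass, kg

def de_broglie(mass_kg, kinetic_energy_eV):
    """Nonrelativistic de Broglie wavelength: lambda = h / sqrt(2 m E)."""
    momentum = math.sqrt(2 * mass_kg * kinetic_energy_eV * eV)  # kg*m/s
    return h / momentum

E = 10.0  # eV, an arbitrary illustrative energy
print(f"electron: {de_broglie(m_e, E):.3e} m")  # ~3.9e-10 m
print(f"proton:   {de_broglie(m_p, E):.3e} m")  # ~9.1e-12 m
```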

Electrons play an essential role in numerous physical phenomena, such as electricity, magnetism,
chemistry and thermal conductivity, and they also participate in gravitational, electromagnetic
and weak interactions.[11] Since an electron has charge, it has a surrounding electric field, and if
that electron is moving relative to an observer it will generate a magnetic field. Electromagnetic
fields produced from other sources will affect the motion of an electron according to the Lorentz
force law. Electrons radiate or absorb energy in the form of photons when they are accelerated.
Laboratory instruments are capable of trapping individual electrons as well as electron plasma by
the use of electromagnetic fields. Special telescopes can detect electron plasma in outer space.
Electrons are involved in many applications such as electronics, welding, cathode ray tubes,
electron microscopes, radiation therapy, lasers, gaseous ionization detectors and particle
accelerators.
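
As a small illustration of the Lorentz force law mentioned above, the sketch below evaluates F = q(E + v × B) for a moving electron. The field strengths and velocity are arbitrary values chosen only to exercise the formula.

```python
import numpy as np

q = -1.602176634e-19             # C, electron charge (negative)
E = np.array([0.0, 1.0e3, 0.0])  # V/m, illustrative electric field
B = np.array([0.0, 0.0, 0.01])   # T, illustrative magnetic field
v = np.array([1.0e6, 0.0, 0.0])  # m/s, electron velocity

# Lorentz force law: F = q (E + v x B)
F = q * (E + np.cross(v, B))
print(F)  # force vector in newtons acting on the electron
```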

Interactions involving electrons with other subatomic particles are of interest in fields such as
chemistry and nuclear physics. The Coulomb attraction between the positive protons
within atomic nuclei and the negative electrons outside them binds the two together into
atoms. Ionization, or differences in the proportions of negative electrons versus positive
nuclei changes the binding energy of an atomic system. The exchange or sharing of the electrons
between two or more atoms is the main cause of chemical bonding.[12] In 1838, British natural
philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric
charge to explain the chemical properties of atoms.[3] Irish physicist George Johnstone Stoney
named this charge 'electron' in 1891, and J. J. Thomson and his team of British physicists
identified it as a particle in 1897.[5][13][14] Electrons can also participate in nuclear reactions, such
as nucleosynthesis in stars, where they are known as beta particles. Electrons can be created
through beta decay of radioactive isotopes and in high-energy collisions, for instance when
cosmic rays enter the atmosphere. The antiparticle of the electron is called the positron; it is
identical to the electron except that it carries electrical and other charges of the opposite sign.
When an electron collides with a positron, both particles can be totally annihilated, producing
gamma ray photons.
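
The energy of those gamma-ray photons follows directly from the electron’s rest mass. A minimal sketch, assuming annihilation at rest and using E = mc² with modern constants, recovers the familiar figure of about 511 keV per photon.

```python
m_e = 9.1093837015e-31  # electron (and positron) rest mass, kg
c = 299792458.0         # speed of light, m/s
eV = 1.602176634e-19    # joules per electronvolt

# Annihilation at rest converts the rest energy of both particles,
# typically into two photons of equal energy.
total_energy_J = 2 * m_e * c**2
per_photon_keV = (total_energy_J / 2) / eV / 1e3
print(f"~{per_photon_keV:.1f} keV per photon")  # ~511.0 keV
```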

Radioactive decay

Radioactive decay (also known as nuclear decay or radioactivity) is the process by which an
unstable atomic nucleus loses energy (in terms of mass in its rest frame) by emitting radiation,
such as an alpha particle, beta particle with neutrino or only a neutrino in the case of electron
capture, gamma ray, or electron in the case of internal conversion. A material containing such
unstable nuclei is considered radioactive. Certain highly excited short-lived nuclear states can
decay through neutron emission, or more rarely, proton emission.

Radioactive decay is a stochastic (i.e. random) process at the level of single atoms, in that,
according to quantum theory, it is impossible to predict when a particular atom will decay,[1][2][3]
regardless of how long the atom has existed. However, for a collection of atoms, the collection's
expected decay rate is characterized in terms of their measured decay constants or half-lives.
This is the basis of radiometric dating. The half-lives of radioactive atoms have no known upper
limit, spanning a time range of over 55 orders of magnitude, from nearly instantaneous to far
longer than the age of the universe.
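
The half-life bookkeeping behind radiometric dating can be sketched in a few lines. The example below assumes the standard exponential-decay law N/N0 = exp(−λt) with λ = ln 2 / t½, using carbon-14’s half-life of about 5,730 years as the illustrative case.

```python
import math

def remaining_fraction(t, half_life):
    """Fraction of a large sample not yet decayed after time t,
    from N/N0 = exp(-lambda * t) with lambda = ln(2) / half_life."""
    decay_constant = math.log(2) / half_life
    return math.exp(-decay_constant * t)

T_HALF_C14 = 5730.0  # years, approximate half-life of carbon-14
for years in (0, 5730, 11460, 20000):
    print(f"after {years:>5} years: {remaining_fraction(years, T_HALF_C14):.3f} remains")
```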

A radioactive nucleus with zero spin can have no defined orientation, and hence emits the total
momentum of its decay products isotropically (all directions and without bias). If there are
multiple particles produced during a single decay, as in beta decay, their relative angular
distribution, or spin directions may not be isotropic. Decay products from a nucleus with spin
may be distributed non-isotropically with respect to that spin direction, either because of an
external influence such as an electromagnetic field, or because the nucleus was produced in a
dynamic process that constrained the direction of its spin. Such a parent process could be a
previous decay, or a nuclear reaction.[4][5][6][note 1]

The decaying nucleus is called the parent radionuclide (or parent radioisotope[note 2]), and the
process produces at least one daughter nuclide. Except for gamma decay or internal conversion
from a nuclear excited state, the decay is a nuclear transmutation resulting in a daughter
containing a different number of protons or neutrons (or both). When the number of protons
changes, an atom of a different chemical element is created.

The first decay processes to be discovered were alpha decay, beta decay, and gamma decay.
Alpha decay occurs when the nucleus ejects an alpha particle (helium nucleus). This is the most
common process of emitting nucleons, but highly excited nuclei can eject single nucleons, or in
the case of cluster decay, specific light nuclei of other elements. Beta decay occurs in two ways:
(i) beta-minus decay, when the nucleus emits an electron and an antineutrino in a process that
changes a neutron to a proton, or (ii) beta-plus decay, when the nucleus emits a positron and a
neutrino in a process that changes a proton to a neutron. Highly excited neutron-rich nuclei,
formed as the product of other types of decay, occasionally lose energy by way of neutron
emission, resulting in a change from one isotope to another of the same element. The nucleus
may capture an orbiting electron, causing a proton to convert into a neutron in a process called
electron capture. All of these processes result in a well-defined nuclear transmutation.
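
The bookkeeping behind these transmutations is simple: each decay mode changes the proton number Z and the mass number A in a fixed way. The sketch below encodes the modes described above; the dictionary keys are illustrative labels, not standard nomenclature.

```python
# (Z, A) -> (Z', A') for the decay modes described in the text.
DECAY_MODES = {
    "alpha":            lambda Z, A: (Z - 2, A - 4),  # ejects a helium nucleus
    "beta_minus":       lambda Z, A: (Z + 1, A),      # neutron -> proton
    "beta_plus":        lambda Z, A: (Z - 1, A),      # proton -> neutron
    "electron_capture": lambda Z, A: (Z - 1, A),      # proton + e- -> neutron
    "neutron_emission": lambda Z, A: (Z, A - 1),      # same element, lighter isotope
}

# Example: uranium-238 (Z = 92, A = 238) alpha-decays to thorium-234.
print(DECAY_MODES["alpha"](92, 238))  # (90, 234)
```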

By contrast, there are radioactive decay processes that do not result in a nuclear transmutation.
The energy of an excited nucleus may be emitted as a gamma ray in a process called gamma
decay, or that energy may be lost when the nucleus interacts with an orbital electron causing its
ejection from the atom, in a process called internal conversion.

Another type of radioactive decay results in products that vary, appearing as two or more
"fragments" of the original nucleus with a range of possible masses. This decay, called
spontaneous fission, happens when a large unstable nucleus spontaneously splits into two (or
occasionally three) smaller daughter nuclei, and generally leads to the emission of gamma rays,
neutrons, or other particles from those products.

For a summary table showing the number of stable and radioactive nuclides in each category, see
radionuclide. There are 29 naturally occurring chemical elements on Earth that are radioactive;
they comprise 34 radionuclides that date from before the formation of the solar
system and are known as primordial nuclides. Well-known examples are uranium and thorium,
but also included are naturally occurring long-lived radioisotopes, such as potassium-40. Another
50 or so shorter-lived radionuclides, such as radium and radon, found on Earth, are the products
of decay chains that began with the primordial nuclides, or are the product of ongoing
cosmogenic processes, such as the production of carbon-14 from nitrogen-14 in the atmosphere
by cosmic rays. Radionuclides may also be produced artificially in particle accelerators or
nuclear reactors, resulting in 650 of these with half-lives of over an hour, and several thousand
more with even shorter half-lives.
PROJECT IN GENERAL CHEMISTRY
Submitted by: Edhel Jay Martin Caducoy
