


Department of Mechanical Engineering
Additive Manufacturing

Introduction: Importance of Nanotechnology, Emergence of Nanotechnology, Bottom-up and
Top-down approaches, Challenges in Nanotechnology
Nano-materials Synthesis and Processing: Methods for creating Nanostructures; Processes for
producing ultrafine powders - Mechanical grinding; Wet Chemical Synthesis of Nano materials -
sol-gel process; Gas Phase synthesis of Nano materials - Furnace, Flame assisted ultrasonic spray
pyrolysis; Gas Condensation Processing (GPC), Chemical Vapour Condensation (CVC).
Optical Microscopy: principles, Imaging Modes, Applications, Limitations.
Scanning Electron Microscopy (SEM): principles, Imaging Modes, Applications, Limitations.
Transmission Electron Microscopy (TEM): principles, Imaging Modes, Applications, Limitations.
X-Ray Diffraction (XRD): principles, Imaging Modes, Applications, Limitations.
Scanning Probe Microscopy (SPM): principles, Imaging Modes, Applications, Limitations.
Atomic Force Microscopy (AFM): basic principles, instrumentation, operational modes,
Applications, Limitations.
Electron Probe Micro Analyzer (EPMA): Introduction, Sample preparation, Working
procedure, Applications, Limitations.

Nanotechnology is a topic that spans a range of science and engineering disciplines. It takes
place within the range of 1-100 nanometres; a nanometre is about a billionth of a metre.
Nanoscale materials show unusual physical and chemical characteristics because their surface
area increases relative to their volume as particles get smaller, and because they become
subject to quantum effects. This means they can behave in different ways and do not follow the
same laws of physics that larger objects do. The idea of nanotechnology first came from the
physicist Richard Feynman, who in a 1959 lecture imagined that the entire Encyclopaedia
Britannica could be written on the head of a pin. Carbon nanotubes, tiny tubes of carbon atoms
that are very strong yet very light, were first reported in 1991. It was improvements in
microscopy in the 1980s that allowed researchers to see single atoms and then manipulate them
on a surface.
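The inverse scaling of surface-area-to-volume ratio with particle size described above can be checked with a short sketch (the function name and the example radii are illustrative):

```python
import math

def sa_to_vol_ratio(radius_nm):
    """Surface-area-to-volume ratio of a sphere, in 1/nm."""
    area = 4 * math.pi * radius_nm ** 2
    volume = (4 / 3) * math.pi * radius_nm ** 3
    return area / volume  # algebraically this simplifies to 3 / radius_nm

# As particles shrink, the ratio grows inversely with radius:
for r in [1000, 100, 10, 1]:  # radii in nm
    print(f"r = {r:>4} nm  ->  SA/V = {sa_to_vol_ratio(r):.3f} per nm")
```

Shrinking the radius from 1000 nm to 1 nm raises the ratio a thousandfold, which is why so many more atoms sit at the surface of a nanoparticle.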

Nanotechnology:
 Involves nano materials with at least one dimension that measures between
approximately 1 and 100 nm
 Involves nano materials that exhibit unique properties as a result of their nano scale
 Is the manipulation of these nano materials to develop new technologies and applications,
or to improve on existing ones
 Is used in a wide range of applications, from electronics to medicine to energy and more.
 In short, nanotechnology is the creation of useful or functional materials, devices and systems
through control of matter on the nanometer length scale and exploitation of the novel
phenomena and properties which arise at that scale.

Importance of Nano technology

 Importance of nanotechnology depends on the fact that it is possible to tailor the
structures of materials at extremely small scales to achieve specific properties, thus
greatly extending the materials science tool kit.
 Using nanotechnology, materials can effectively be made stronger, lighter, more durable,
more reactive, or better electrical conductors.
 Nano scale additives or surface treatments of fabrics can provide lightweight ballistic
energy deflection in personal body armor, or can help them resist wrinkling, staining, and
bacterial growth.
 Clear nano scale films on eyeglasses, computer and camera displays, windows, and other
surfaces can make them water- and residue-repellent, antireflective, self-cleaning,
resistant to ultraviolet or infrared light, anti fog, antimicrobial, scratch-resistant, or
electrically conductive.
 Nanotechnology has greatly contributed to major advances in computing and electronics,
leading to faster, smaller, and more portable systems that can manage and store larger and
larger amounts of information.
 Nano medicine, the application of nanotechnology in medicine, draws on the natural
scale of biological phenomena to produce precise solutions for disease prevention,
diagnosis, and treatment.
 Commercial applications have adapted gold nano particles as probes for the detection of
targeted sequences of nucleic acids, and gold nano particles are also being clinically
investigated as potential treatments for cancer and other diseases.
 Better imaging and diagnostic tools enabled by nanotechnology are paving the way for
earlier diagnosis, more individualized treatment options, and better therapeutic success.
 Nanotechnology is finding application in traditional energy sources and is greatly
enhancing alternative energy approaches to help meet the world’s increasing energy
demands.
 Nanotechnology is improving the efficiency of fuel production from raw petroleum
materials through better catalysis. It is also enabling reduced fuel consumption in
vehicles and power plants through higher-efficiency combustion and decreased friction.
 Nanotechnology could help meet the need for affordable, clean drinking water through
rapid, low-cost detection and treatment of impurities in water.
 Engineers have developed a thin film membrane with nano pores for energy-efficient
desalination. This molybdenum disulphide (MoS2) membrane filtered two to five times
more water than current conventional filters.

Emergence of Nanotechnology
The emergence of nanotechnology has led to the design, synthesis, and manipulation of particles
in order to create a new opportunity for the utilization of smaller and more regular structures for
various applications. In recent years, nano-sized metal oxide particles have gotten much attention
in various fields of application due to their unique optical, electrical, magnetic, catalytic and
biomedical properties as well as their high surface to volume ratio and specific affinity for the
adsorption of inorganic pollutants and degradation of organic pollutants in aqueous systems.

Bottom up and Top down approaches

There are two general approaches to the synthesis of nano materials and the fabrication of
nanostructures:

Bottom up approach: These approaches include the miniaturization of material components (down
to the atomic level) with a further self-assembly process leading to the formation of nanostructures.
During self-assembly the physical forces operating at the nano scale are used to combine basic units
into larger stable structures.
Typical examples are quantum dot formation during epitaxial growth and formation of nano
particles from colloidal dispersion.
Nano materials are synthesized by assembling the atoms/molecules together.
Instead of taking material away to make structures, the bottom-up approach selectively adds
atoms to create structures.
Eg) Chemical vapour deposition, Sol-gel
Top down approach: These approaches use larger (macroscopic) initial structures, which can be
externally controlled in the processing of nanostructures.
Typical examples are etching through a mask, ball milling, and application of severe plastic
deformation.
Nano materials are synthesized by breaking down bulk solids into nano sizes.
Top-down processing has been, and will remain, the dominant process in semiconductor
manufacturing.
Eg) Ball milling, Plasma etching, Lithography

Challenges in Nanotechnology
 The most significant challenges in nanotechnology concern materials and their properties at
the nano scale.
 Nanotechnology is now widely applied in drug development, yet some nano particles could
be toxic.
 Nano particles are small enough to cross the blood-brain barrier, the membrane that
protects the brain from poisonous chemicals in the bloodstream.
Other open challenges include:
 Instruments to assess environmental exposure to nano materials.
 Methods to evaluate the toxicity of nano materials.
 Models for predicting the potential impact of new, engineered nano materials.
 Ways of evaluating the impact of nano materials across their lifecycle.
 Strategic programs to enable risk-focused research.

Nano-materials Synthesis and Processing

Methods for creating Nanostructures
Methods for fabricating nano materials can be generally subdivided into two groups:
Top down methods and bottom up methods.
In the first case nano materials are derived from a bulk substrate and obtained by progressive
removal of material, until the desired nano material is obtained. A simple way to illustrate a top
down method is to think of carving a statue out of a large block of marble. Printing methods also
belong to this category.
Bottom up methods work in the opposite direction, the nano material, such as a nano coating, is
obtained starting from the atomic or molecular precursors and gradually assembling it until the
desired structure is formed.
In both methods two requisites are fundamental:
control of the fabrication conditions (e.g. energy of the electron beam) and control of the
environment conditions (presence of dust, contaminants, etc). For these reasons,
nanotechnologies use highly sophisticated fabrication tools that are mostly operated in a vacuum
in clean-room laboratories.

Processes for producing ultrafine powder: Mechanical grinding

Mechanical grinding is a popular, simple, inexpensive and extremely scalable method to
synthesize all classes of nano materials.
A mechanical attrition mechanism is used to obtain nano crystalline structures from either
single-phase powders or amorphous materials.

Either refractory balls, steel balls or plastic balls can be used, depending on the material to be milled.
When the balls rotate at a particular rpm, the necessary energy is transferred to the powder,
which in turn reduces the powder of coarse grain-sized structure to ultrafine nano particles.

The energy transferred to the powder from the balls depends on many factors, such as:
 Rotational speed of the balls
 Size of the balls
 Number of balls
 Milling time
 Ratio of ball to powder mass
 Milling medium/atmosphere
Cryogenic liquids can be used to increase the brittleness of the product, but one has to take
necessary steps to prevent oxidation during the milling process.
Advantages:
 It produces very fine powder (particle size less than or equal to 10 microns).
 It is suitable for milling toxic materials since it can be used in a completely enclosed form.
 It has a wide range of applications.
 It can be used for continuous operation.
 It is used in milling highly abrasive materials.
Disadvantages:
 Contamination of the product may occur as a result of wear and tear, which occurs
principally from the balls and partially from the casing.
 High machine noise level, especially if the hollow cylinder is made of metal, but much less
if rubber is used.
 Relatively long milling times.
 It is difficult to clean the machine after use.

Wet Chemical Synthesis of Nano-materials: sol-gel process

 Sol-gel technology is a well established colloidal chemistry technology, which offers
possibility to produce various materials with novel, predefined properties in a simple
process and at relatively low process cost.

 The sol is the name of a colloidal solution made of solid particles, a few hundred nm in
diameter, suspended in a liquid phase. The gel can be considered as a solid
macromolecule immersed in a solvent.
 The sol-gel process consists of the chemical transformation of a liquid (the sol) into a gel
state, with subsequent post-treatment and transition into a solid oxide material.

The sol-gel process usually consists of 4 steps:

1. The desired colloidal particles are dispersed in a liquid to form a sol.
2. The deposition of the sol solution produces coatings on the substrates by spraying, dipping
or spin coating.
3. The particles in the sol are polymerized through the removal of the stabilizing components
and produce a gel in a state of a continuous network.
4. The final heat treatments pyrolyze the remaining organic or inorganic components and form
an amorphous or crystalline coating.

 Can produce thin bond-coating to provide excellent adhesion between the metallic
substrate and the top coat.
 Can produce thick coating to provide corrosion protection performance.
 Can easily shape materials into complex geometries in a gel state.
 Can have low temperature sintering capability, usually 200-600°C.
 Can provide a simple, economic and effective method to produce high quality coatings.

Gas Phase synthesis of Nano materials: Furnace

The simplest method to produce nano particles is by heating the desired material in a
heat-resistant crucible. This method is appropriate only for materials
that have a high vapour pressure at the heating temperatures, which can be as high as 2000°C.
Energy is normally introduced into the precursor by arc heating, electron beam heating or Joule
heating. The atoms are evaporated into an atmosphere, which is either inert (e.g. He) or reactive
(so as to form a compound). To carry out reactive synthesis, materials with very low vapour
pressure have to be fed into the furnace in the form of a suitable precursor such as
organometallics, which decompose in the furnace to produce a condensable material. The hot
atoms of the evaporated matter lose energy by collision with the atoms of the cold gas and
undergo condensation into small clusters via homogeneous nucleation. In case a compound is
being synthesized, these precursors react in the gas phase and form a compound with the material
that is separately injected in the reaction chamber. The clusters would continue to grow if they
remain in the supersaturated region. To control their size, they need to be rapidly removed from
the supersaturated environment by a carrier gas. The cluster size and its distribution are
controlled by only three parameters:
 Rate of evaporation (energy input)
 Rate of condensation (energy removal), and
 Rate of gas flow (cluster removal).

Fig: Schematic representation of the gas phase process for synthesis of single phase nano materials from a heated crucible

Flame assisted ultrasonic spray pyrolysis

In this process, precursors are nebulized and then unwanted components are burnt in a flame to
get the required material; e.g. ZrO2 has been obtained by this method from a precursor of
Zr(CH3CH2CH2O)4. Flame hydrolysis is used for the manufacture of fused silica. In the process,
silicon tetrachloride is heated in an oxy-hydrogen flame to give a highly dispersed silica. The
resulting white amorphous powder consists of spherical particles with sizes in the range 7-40 nm.
The combustion flame synthesis, in which the burning of a gas mixture, e.g. acetylene and
oxygen or hydrogen and oxygen, supplies the energy to initiate the pyrolysis of precursor
compounds, is widely used for the industrial production of powders in large quantities, such as
carbon black, fumed silica and titanium dioxide. However, since the gas pressure during the
reaction is high, highly agglomerated powders are produced which is disadvantageous for
subsequent processing. The basic idea of low pressure combustion flame synthesis is to extend
the pressure range to the pressures used in gas phase synthesis and thus to reduce or avoid the
agglomeration. Low pressure flames have been extensively used by aerosol scientists to study
particle formation in the flame.
A key for the formation of nanoparticles with narrow size distributions is the exact control of the
flame in order to obtain a flat flame front. Under these conditions the thermal history, i.e. time
and temperature, of each particle formed is identical and narrow distributions result. However,
due to the oxidative atmosphere in the flame, this synthesis process is limited to the formation of
oxides in the reactor zone.

Fig: Flame assisted ultrasonic spray pyrolysis

Gas Condensation Processing (GPC)

Gas condensation was the first technique used to synthesize nano crystalline metals and alloys.
In this technique a metallic or inorganic material, e.g. a suboxide, is vaporized using thermal
evaporation sources. Clusters form in the vicinity of the source by homogeneous nucleation in the
gas phase. The cluster or particle size depends critically on the residence time of the particles in
the growth regime and can be influenced by the gas pressure. With increasing gas pressure,
vapor pressure and mass of the inert gas, the average particle size of the nano particles increases.
A rotating cylindrical device cooled with liquid nitrogen is employed for particle
collection. The nano particles, in the size range from 2-50 nm, are extracted from the gas flow and
deposited loosely on the surface of the collection device as a powder of low density. The
quantities of metals produced are below 1 g/day, while quantities of oxides can be as high as
20 g/day for simple oxides. The method is extremely slow.

Schematic representation of typical set-up for gas condensation synthesis of nano materials
Chemical Vapour Condensation(CVC).

A schematic of a typical CVC reactor

Chemical vapor condensation (CVC) was developed in Germany in 1994. It involves pyrolysis
of vapors of metal organic precursors in a reduced-pressure atmosphere. Particles of ZrO2, Y2O3
and nano whiskers have been produced by the CVC method. A metalorganic precursor is
introduced into the hot zone of the reactor using a mass flow controller. For instance,
hexamethyldisilazane (CH3)3SiNHSi(CH3)3 was used to produce SiCxNyOz powder by the CVC
technique. The reactor allows synthesis of mixtures of nanoparticles of two phases or doped
nanoparticles by supplying two precursors at the front end of the reactor, and of coated
nanoparticles, e.g. n-ZrO2 coated with n-Al2O3, by supplying a second precursor in a second
stage of the reactor. The process yields quantities in excess of 20 g/hr. The yield can be further
improved by enlarging the diameter of the hot wall reactor and the mass flow of fluid through the reactor.

Scanning Electron Microscopy (SEM)

Fig: Scanning Electron Microscopy (SEM)

What is Scanning Electron Microscopy (SEM)
The scanning electron microscope (SEM) uses a focused beam of high-energy electrons
to generate a variety of signals at the surface of solid specimens. The signals that derive from
electron-sample interactions reveal information about the sample including external morphology
(texture), chemical composition, and crystalline structure and orientation of materials making up
the sample. In most applications, data are collected over a selected area of the surface of the
sample, and a 2-dimensional image is generated that displays spatial variations in these
properties. Areas ranging from approximately 1 cm to 5 microns in width can be imaged in a
scanning mode using conventional SEM techniques (magnification ranging from 20X to
approximately 30,000X, spatial resolution of 50 to 100 nm). The SEM is also capable of
performing analyses of selected point locations on the sample; this approach is especially useful
in qualitatively or semi-quantitatively determining chemical compositions (using EDS),
crystalline structure, and crystal orientations (using EBSD). The design and function of the SEM
is very similar to the EPMA, and considerable overlap in capabilities exists between the two
instruments.

Fundamental Principles of Scanning Electron Microscopy (SEM)

Accelerated electrons in an SEM carry significant amounts of kinetic energy, and this
energy is dissipated as a variety of signals produced by electron-sample interactions when the
incident electrons are decelerated in the solid sample. These signals include secondary electrons
(that produce SEM images), backscattered electrons (BSE), diffracted backscattered electrons
(EBSD that are used to determine crystal structures and orientations of minerals), photons
(characteristic X-rays that are used for elemental analysis and continuum X-rays), visible light
(cathodoluminescence--CL), and heat. Secondary electrons and backscattered electrons are
commonly used for imaging samples: secondary electrons are most valuable for showing
morphology and topography on samples and backscattered electrons are most valuable for
illustrating contrasts in composition in multiphase samples (i.e. for rapid phase discrimination).
X-ray generation is produced by inelastic collisions of the incident electrons with electrons in
discrete orbitals (shells) of atoms in the sample. As the excited electrons return to lower energy
states, they yield X-rays that are of a fixed wavelength (that is related to the difference in energy
levels of electrons in different shells for a given element). Thus, characteristic X-rays are
produced for each element in a mineral that is "excited" by the electron beam. SEM analysis is
considered to be "non-destructive"; that is, x-rays generated by electron interactions do not lead
to volume loss of the sample, so it is possible to analyze the same materials repeatedly.
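The fixed-wavelength characteristic X-rays mentioned above map directly to photon energies via the standard relation E(keV) ≈ 12.398 / λ(Å); the function name below is illustrative, and the Cu Kα wavelength used is the commonly tabulated value:

```python
# hc expressed in keV·Å is approximately 12.398, so E(keV) = 12.398 / λ(Å).
def xray_energy_keV(wavelength_angstrom):
    """Photon energy (keV) of an X-ray of the given wavelength (Å)."""
    return 12.398 / wavelength_angstrom

# Cu Kα (λ ≈ 1.5418 Å) should come out near the tabulated ~8.04 keV.
print(f"Cu Ka energy: {xray_energy_keV(1.5418):.2f} keV")
```

The same relation is what an EDS system uses in reverse: it measures photon energies and assigns each peak to the element whose shell-transition energy matches.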

Scanning Electron Microscopy (SEM) Instrumentation - How Does It Work?

Essential components of all SEMs include the following:
 Electron Source ("Gun")
 Electron Lenses
 Sample Stage
 Detectors for all signals of interest
 Display / Data output devices
 Infrastructure Requirements:
o Power Supply
o Vacuum System
o Cooling system
o Vibration-free floor
o Room free of ambient magnetic and electric fields
SEMs always have at least one detector (usually a secondary electron detector), and most have
additional detectors. The specific capabilities of a particular instrument are critically dependent
on which detectors it accommodates.

The SEM is routinely used to generate high-resolution images of shapes of objects (SEI) and to
show spatial variations in chemical compositions:
1) acquiring elemental maps or spot chemical analyses using EDS,
2) discrimination of phases based on mean atomic number (commonly related to relative density)
using BSE, and
3) compositional maps based on differences in trace element "activators" (typically transition
metal and Rare Earth elements) using CL.
4) The SEM is also widely used to identify phases based on qualitative chemical analysis and/or
crystalline structure. Precise measurement of very small features and objects down to 50 nm in
size is also accomplished using the SEM.
5) Backscattered electron images (BSE) can be used for rapid discrimination of phases in
multiphase samples.

6) SEMs equipped with diffracted backscattered electron detectors (EBSD) can be used to
examine microfabric and crystallographic orientation in many materials.

Strengths:
1. There is arguably no other instrument with the breadth of applications in the study of solid
materials that compares with the SEM.
2. Although this discussion is most concerned with geological applications, it is important to note
that these applications are a very small subset of the scientific and industrial applications that
exist for this instrumentation.
3. Most SEMs are comparatively easy to operate, with user-friendly "intuitive" interfaces.
4. For many applications, data acquisition is rapid (less than 5 minutes/image for SEI, BSE, and
spot EDS analyses). Modern SEMs generate data in digital formats, which are highly portable.

Limitations:
1. Samples must be solid and they must fit into the microscope chamber.
2. Maximum size in horizontal dimensions is usually on the order of 10 cm, vertical dimensions
are generally much more limited and rarely exceed 40 mm.
3. For most instruments samples must be stable in a vacuum on the order of 10⁻⁵ to 10⁻⁶ torr.
Samples likely to outgas at low pressures (rocks saturated with hydrocarbons, "wet" samples
such as coal, organic materials or swelling clays, and samples likely to decrepitate at low
pressure) are unsuitable for examination in conventional SEM's.
4. However, "low vacuum" and "environmental" SEMs also exist, and many of these types of
samples can be successfully examined in these specialized instruments.
5. Most SEMs use a solid state x-ray detector (EDS), and while these detectors are very fast and
easy to utilize, they have relatively poor energy resolution and sensitivity to elements present in
low abundances when compared to wavelength dispersive x-ray detectors (WDS) on most
electron probe microanalyzers (EPMA).
6. An electrically conductive coating must be applied to electrically insulating samples for study
in conventional SEM's, unless the instrument is capable of operation in a low vacuum mode.

Sample Preparation in SEM

Sample preparation can be minimal or elaborate for SEM analysis, depending on the nature of
the samples and the data required. Minimal preparation includes acquisition of a sample that will
fit into the SEM chamber and some accommodation to prevent charge build-up on electrically
insulating samples. Most electrically insulating samples are coated with a thin layer of
conducting material, commonly carbon, gold, or some other metal or alloy. The choice of
material for conductive coatings depends on the data to be acquired: carbon is most desirable if
elemental analysis is a priority, while metal coatings are most effective for high resolution
electron imaging applications. Alternatively, an electrically insulating sample can be examined
without a conductive coating in an instrument capable of "low vacuum" operation.

Transmission Electron Microscopy (TEM): The first electron microscope was built in 1931 by the
German physicist Ernst Ruska, who was awarded the Nobel Prize in 1986 for its invention. The
first commercial TEM appeared in 1939. A TEM achieves resolution of about 1 nm or better, with
typical accelerating voltages of 100-400 kV (some instruments reach 1-3 MV).

The design of a transmission electron microscope (TEM) is analogous to that of an optical
microscope. In a TEM high-energy (>100 kV) electrons are used instead of photons and
electromagnetic lenses instead of glass lenses. The electron beam passes an electron- transparent

sample and a magnified image is formed using a set of lenses. This image is projected onto a
fluorescent screen or a CCD camera. Whereas the use of visible light limits the lateral resolution
in an optical microscope to a few tenths of a micrometer, the much smaller wavelength of
electrons allows for nanometre and even sub-nanometre resolution in a TEM.
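The "much smaller wavelength" of the electrons follows from the de Broglie relation with a relativistic correction, since 100-400 kV electrons are appreciably relativistic. A minimal sketch (the function name is illustrative; the constants are CODATA values):

```python
import math

H = 6.62607015e-34          # Planck constant, J·s
M0 = 9.1093837015e-31       # electron rest mass, kg
E_CHARGE = 1.602176634e-19  # elementary charge, C
C = 2.99792458e8            # speed of light, m/s

def electron_wavelength_pm(voltage_kV):
    """Relativistically corrected de Broglie wavelength, in picometres."""
    V = voltage_kV * 1e3
    # momentum from energy-momentum relation for an electron accelerated through V
    p = math.sqrt(2 * M0 * E_CHARGE * V * (1 + E_CHARGE * V / (2 * M0 * C ** 2)))
    return H / p * 1e12

for kV in [100, 200, 300]:
    print(f"{kV} kV -> {electron_wavelength_pm(kV):.2f} pm")
```

At 200 kV the wavelength is about 2.5 pm, roughly five orders of magnitude shorter than visible light, which is why lens aberrations rather than diffraction limit TEM resolution.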

Instrument components

Fig: Components of TEM


Condenser system (lenses & apertures): controls illumination on the specimen.
Objective lens system: the image-forming lens, which limits resolution; its aperture controls imaging.
Projector lens system: magnifies the image or diffraction pattern onto the final screen.
Intermediate lens: transmits or magnifies the enlarged image.

Working (imaging):
Image contrast is obtained by interaction of the electron beam with the sample. In the
resulting TEM image denser areas and areas containing heavier elements appear darker due to
scattering of the electrons in the sample. In addition, scattering from crystal planes introduces
diffraction contrast. This contrast depends on the orientation of a crystalline area in the sample
with respect to the electron beam. As a result, in a TEM image of a sample consisting of
randomly oriented crystals, each crystal will have its own grey level. In this way one can
distinguish between different materials, as well as image individual crystals and crystal defects.
Because of the high resolution of the TEM, atomic arrangements in crystalline structures can be
imaged in great detail.

Sample preparation in TEM

For TEM observations, thin samples are required due to the strong absorption of the electrons in
the material. A high acceleration voltage reduces the absorption effects but can cause radiation
damage (estimated at 170 kV for Al). At these acceleration voltages, a maximum thickness of
60 nm is required. TEM foil specimens were prepared by mechanical dimpling down to 20 μm,
followed by argon ion milling at an accelerating voltage of 5 kV and 10° incidence angle, with a
liquid nitrogen cooling stage to avoid sample heating and microstructural changes associated
with the annealing effect.

 Sampling --- ~0.3 mm³ of material: the higher the resolution, the smaller the analyzed
volume becomes. Drawing conclusions from a single observation or even a single sample is
dangerous and can lead to completely false interpretations.
 Interpreting transmission images---2D images of 3D specimens, viewed in transmission,
no depth-sensitivity.
 Electron beam damage and safety---particularly in polymer and ceramics: The high
energy of the electron beam utilized in electron microscopy causes damage by ionization,
radiolysis, and heating
 Specimen preparation --- "thin" means below 100 nm.


CRYO-TEM: Using dedicated equipment, it is possible to freeze 0.1 μm thick water films
and study these films at -170°C in the TEM. This enables imaging of the natural shape of
organic bilayer structures. Also, agglomeration processes in a dispersion can be studied. In
addition, the application of cryogenic conditions facilitates studies of beam-sensitive
specimens.
HAADF: Another way of obtaining compositional as well as structural information using
TEM is High Angle Annular Dark Field (HAADF) imaging. For this application a dedicated
detector is used that only collects electrons that are elastically scattered over large angles by
the specimen. The intensity that is detected depends on the average atomic number Z of the
specimen.

ENERGY FILTERED TEM (EFTEM): A special filter on the TEM allows for selection of a
very narrow window of energies in the EELS spectrum. Using the corresponding electrons
for imaging, EFTEM is performed. As a result, a qualitative elemental map is obtained.
EFTEM is the only chemical analysis procedure in the TEM that does not use a scanning
beam. As a consequence, it is much faster.

Difference between SEM & TEM

SEM:
1. SEM is based on scattered electrons.
2. The scattered electrons in SEM produce the image of the sample after the microscope
collects and counts the scattered electrons.
3. SEM focuses on the sample’s surface and its composition.
4. SEM shows the sample bit by bit.
5. SEM provides a three-dimensional image.
6. SEM only offers 2 million X as a maximum level of magnification.
7. SEM has a resolution of 0.4 nanometers.

TEM:
1. TEM is based on transmitted electrons.
2. In TEM, electrons are directly pointed toward the sample.
3. TEM seeks to see what is inside or beyond the surface.
4. TEM shows the sample as a whole.
5. TEM delivers a two-dimensional picture.
6. TEM has up to a 50 million X magnification.
7. The resolution of TEM is 0.5 angstroms.

X- Ray Diffraction (XRD) :

What is X-ray Powder Diffraction (XRD)

X-ray powder diffraction (XRD) is a rapid analytical technique primarily used for phase
identification of a crystalline material and can provide information on unit cell dimensions. The
analyzed material is finely ground, homogenized, and average bulk composition is determined.

Fundamental Principles of X-ray Powder Diffraction (XRD)

Max von Laue, in 1912, discovered that crystalline substances act as three-dimensional
diffraction gratings for X-ray wavelengths similar to the spacing of planes in a crystal lattice. X-
ray diffraction is now a common technique for the study of crystal structures and atomic spacing.
X-ray diffraction is based on constructive interference of monochromatic X-rays and a
crystalline sample. These X-rays are generated by a cathode ray tube, filtered to produce
monochromatic radiation, collimated to concentrate the beam, and directed toward the sample. The
interaction of the incident rays with the sample produces constructive interference (and a
diffracted ray) when conditions satisfy Bragg's Law (nλ=2d sin θ). This law relates the
wavelength of electromagnetic radiation to the diffraction angle and the lattice spacing in a
crystalline sample. These diffracted X-rays are then detected, processed and counted. By
scanning the sample through a range of 2θ angles, all possible diffraction directions of the lattice
should be attained due to the random orientation of the powdered material. Conversion of the
diffraction peaks to d-spacings allows identification of the mineral because each mineral has a
set of unique d-spacings. Typically, this is achieved by comparison of d-spacings with standard
reference patterns.
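The conversion from a measured diffraction peak to a d-spacing is a direct rearrangement of Bragg's Law, nλ = 2d sin θ. A minimal sketch (the function name and the example 2θ value are illustrative; the default wavelength is the Cu Kα value quoted later in this section):

```python
import math

def d_spacing(two_theta_deg, wavelength_angstrom=1.5418, n=1):
    """Solve Bragg's law n·λ = 2·d·sinθ for the lattice spacing d (Å).

    The diffractometer reports the angle as 2θ, so halve it first.
    """
    theta = math.radians(two_theta_deg / 2)
    return n * wavelength_angstrom / (2 * math.sin(theta))

# hypothetical peak at 2θ = 38.2° measured with Cu Kα radiation
print(f"d = {d_spacing(38.2):.3f} Å")  # ≈ 2.356 Å
```

Repeating this conversion for every peak in a scan yields the set of d-spacings that is matched against standard reference patterns for phase identification.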

All diffraction methods are based on generation of X-rays in an X-ray tube. These X-rays are
directed at the sample, and the diffracted rays are collected. A key component of all diffraction is
the angle between the incident and diffracted rays. Powder and single crystal diffraction vary in
instrumentation beyond this.
X-ray Powder Diffraction (XRD) Instrumentation - How Does It Work?

X-ray diffractometers consist of three basic elements: an X-ray tube, a sample holder, and
an X-ray detector. X-rays are generated in a cathode ray tube by heating a filament to produce
electrons, accelerating the electrons toward a target by applying a voltage, and bombarding the
target material with electrons. When electrons have sufficient energy to dislodge inner shell
electrons of the target material, characteristic X-ray spectra are produced. These spectra consist
of several components, the most common being Kα and Kβ. Kα consists, in part, of Kα1 and Kα2.
Kα1 has a slightly shorter wavelength and twice the intensity of Kα2.
The specific wavelengths are characteristic of the target material (Cu, Fe, Mo, Cr).
Filtering, by foils or crystal monochromators, is required to produce the monochromatic X-rays
needed for diffraction. Kα1 and Kα2 are sufficiently close in wavelength that a weighted
average of the two is used. Copper is the most common target material for single-crystal
diffraction, with CuKα radiation = 1.5418 Å. These X-rays are collimated and directed onto the
sample. As the sample and detector are rotated, the intensity of the reflected X-rays is recorded.
When the geometry of the incident X-rays impinging the sample satisfies the Bragg
Equation, constructive interference occurs and produces a peak in intensity. A detector records and
processes this X-ray signal and converts the signal to a count rate which is then output to a
device such as a printer or computer monitor.
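The weighted average of Kα1 and Kα2 follows directly from their 2:1 intensity ratio. Using the standard tabulated Cu Kα1 and Kα2 wavelengths (assumed values of 1.54056 Å and 1.54439 Å, not given in this text), it reproduces the 1.5418 Å figure quoted above:

```python
# Cu K-alpha1 and K-alpha2 wavelengths in angstroms (standard tabulated values)
K_ALPHA1 = 1.54056
K_ALPHA2 = 1.54439

def weighted_k_alpha(lam1=K_ALPHA1, lam2=K_ALPHA2, ratio=2.0):
    """Intensity-weighted average wavelength: K-alpha1 carries twice the
    intensity of K-alpha2, so it receives twice the weight."""
    return (ratio * lam1 + lam2) / (ratio + 1.0)

print(f"CuK-alpha = {weighted_k_alpha():.4f} A")  # -> 1.5418 A
```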
The geometry of an X-ray diffractometer is such that the sample rotates in the path of the
collimated X-ray beam at an angle θ while the X-ray detector is mounted on an arm to collect the
diffracted X-rays and rotates at an angle of 2θ. The instrument used to maintain the angle and
rotate the sample is termed a goniometer. For typical powder patterns, data is collected at 2θ
from ~5° to 70°, angles that are preset in the X-ray scan.

Applications

X-ray powder diffraction is most widely used for the identification of unknown crystalline
materials (e.g. minerals, inorganic compounds). Determination of unknown solids is critical to
studies in geology, environmental science, material science, engineering and biology.
Other applications include:
 characterization of crystalline materials
 identification of fine-grained minerals such as clays and mixed layer clays that are
difficult to determine optically
 determination of unit cell dimensions
 measurement of sample purity
With specialized techniques, XRD can be used to:
 determine crystal structures using Rietveld refinement
 determine modal amounts of minerals (quantitative analysis)
 characterize thin film samples by:
o determining lattice mismatch between film and substrate and inferring stress
and strain
o determining dislocation density and quality of the film by rocking curve
measurements
o measuring superlattices in multilayered epitaxial structures
o determining the thickness, roughness and density of the film using glancing
incidence X-ray reflectivity measurements
 make textural measurements, such as the orientation of grains, in a polycrystalline sample

Strengths

 Powerful and rapid (< 20 min) technique for identification of an unknown mineral
 In most cases, it provides an unambiguous mineral determination
 Minimal sample preparation is required
 XRD units are widely available
 Data interpretation is relatively straightforward

Limitations

 Homogeneous and single-phase material is best for identification of an unknown
 Must have access to a standard reference file of inorganic compounds (d-spacings, hkls)
 Requires tenths of a gram of material which must be ground into a powder
 For mixed materials, detection limit is ~ 2% of sample
 For unit cell determinations, indexing of patterns for non-isometric crystal systems is
complicated
 Peak overlay may occur, and worsens for high-angle 'reflections'
Sample Preparation
 Obtain a few tenths of a gram (or more) of the material, as pure as possible
 Grind the sample to a fine powder, typically in a fluid to minimize inducing extra strain
(surface energy) that can offset peak positions, and to randomize orientation. Powder less
than ~10 μm(or 200-mesh) in size is preferred
 Place into a sample holder or onto the sample surface:
o smear uniformly onto a glass slide, assuring a flat upper surface
o pack into a sample container
o sprinkle on double sticky tape

Typically the substrate is amorphous to avoid interference

 Care must be taken to create a flat upper surface and to achieve a random distribution of
lattice orientations unless creating an oriented smear.
 For analysis of clays which require a single orientation, specialized techniques for
preparation of clay samples are given by the USGS (United States Geological Survey).
For unit cell determinations, a small amount of a standard with known peak positions (that do
not interfere with the sample) can be added and used to correct peak positions.

Scanning Probe Microscopy (SPM) - principles, Imaging Modes, Applications, Limitations.

Principles: Scanning probe microscopes (SPMs) are a family of tools used to make images of
nanoscale surfaces and structures, including atoms. They use a physical probe to scan back and
forth over the surface of a sample. During this scanning process, a computer gathers data that are
used to generate an image of the surface. In addition to visualizing nanoscale structures, some
kinds of SPMs can be used to manipulate individual atoms and move them to make specific
patterns. SPMs are different from optical microscopes because the user doesn’t “see” the surface
directly. Instead, the tool “feels” the surface and creates an image to represent it.

Working: SPMs are a very powerful family of microscopes, sometimes with a resolution of less
than a nanometer. (A nanometer is a billionth of a meter.) An SPM has a probe tip mounted on
the end of a cantilever. The tip can be as sharp as a single atom. It can be moved precisely and
accurately back and forth across the surface, even atom by atom. When the tip is near the sample
surface, the cantilever is deflected by a force. SPMs can measure deflections caused by many
kinds of forces, including mechanical contact, electrostatic forces, magnetic forces, chemical
bonding, van der Waals forces, and capillary forces. The distance of the deflection is measured
by a laser that is reflected off the top of the cantilever and into an array of photodiodes (similar
to the devices used in digital cameras). SPMs can detect differences in height that are a fraction
of a nanometer, about the diameter of a single atom. The tip is moved across the sample many
times. This is why these are called “scanning” microscopes. A computer combines the data to
create an image. The images are inherently colorless because they are measuring properties other
than the reflection of light. However, the images are often colorized, with different colors
representing different properties (for example, height) along the surface. Scientists use SPMs in a
number of different ways, depending on the information they’re trying to gather from a sample.
The two primary modes are contact mode and tapping mode. In contact mode, the force between
the tip and the surface is kept constant. This allows a scientist to quickly image a surface. In
tapping mode, the cantilever oscillates, intermittently touching the surface. Tapping mode is
especially useful when a scientist is imaging a soft surface. There are several types of SPMs.
Atomic force microscopes (AFMs) measure the interaction forces between the cantilever tip
and the sample. Magnetic force microscopes (MFMs) measure magnetic forces. And scanning
tunneling microscopes (STMs) measure the electrical current flowing between the cantilever tip
and the sample.

Advantages of SPM Technology

Scanning probe microscopy provides researchers with a larger variety of specimen observation
environments using the same microscope and specimen, reducing the time required to prepare
and study specimens. Specialized probes and ongoing improvements and modifications to
scanning probe instruments continue to provide faster, more efficient, and more revealing
specimen images with minimal effort and modification.

Disadvantages of SPM Technology

Unfortunately, one of the downsides of scanning probe microscopes is that images are produced
in black and white or grayscale, which can in some circumstances exaggerate a specimen's actual
shape or size. Computers are used to compensate for these exaggerations and produce real-time
color images that provide researchers with real-time information, including interactions within
cellular structures, harmonic responses, and magnetic energy.

Atomic Force Microscopy (AFM) -

Brief history of AFM: Atomic force microscopy (AFM) was developed to investigate electrically
non-conductive materials, like proteins. In 1986, Binnig and Quate demonstrated for the first
time the ideas of AFM, which used an ultra-small probe tip at the end of a cantilever. In 1987,
Wickramasinghe et al. developed an AFM setup with a vibrating cantilever technique, which used
the light-lever mechanism.


The AFM consists of a cantilever with a sharp tip (probe) at its end that is used to scan
the specimen surface. The cantilever is typically silicon or silicon nitride with a tip radius of
curvature on the order of nanometers.

When the tip is brought into proximity of a sample surface, forces between the tip and the
sample lead to a deflection of the cantilever according to Hooke's law. Depending on the
situation, forces that are measured in AFM include mechanical contact force, van der Waals
forces, capillary forces, chemical bonding, electrostatic forces, magnetic forces (see magnetic
force microscope, MFM), Casimir forces, solvation forces, etc.
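The cantilever deflection described here follows Hooke's law, F = kx. A minimal sketch, with an assumed typical contact-mode spring constant rather than a value from the text:

```python
def cantilever_force_nN(k_newton_per_m, deflection_nm):
    """Hooke's law F = k * x for an AFM cantilever.

    With k in N/m and the deflection in nm, the product comes out
    directly in nanonewtons (N/m * 1e-9 m = 1e-9 N).
    """
    return k_newton_per_m * deflection_nm

# A soft contact-mode cantilever (k ~ 0.1 N/m) deflected by 2 nm
print(cantilever_force_nN(0.1, 2.0), "nN")  # -> 0.2 nN
```

This is why contact mode favors low-spring-constant cantilevers, as noted later: a small k gives a measurable deflection while keeping the interaction force low.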

Along with force, additional quantities may simultaneously be measured through the use
of specialized types of probes (see scanning thermal microscopy, scanning joule expansion
microscopy, photothermal microspectroscopy, etc.).

The AFM can be operated in a number of modes, depending on the application. In
general, possible imaging modes are divided into static (also called contact) modes and a variety
of dynamic (non-contact or "tapping") modes where the cantilever is vibrated or oscillated at a
given frequency.

Fig: AFM Components

Figure: Typical configuration of AFM.


An AFM typically consists of the following features. Cantilever (1): a small spring-like
cantilever supported on the support (2); a piezoelectric element (3) can oscillate the
cantilever (1) at its eigenfrequency. Sharp tip (4): fixed to the free end of the
cantilever (1). Detector (5): configured to detect the deflection and motion of the cantilever (1).
Sample (6): the specimen to be measured by AFM, mounted on the sample stage (8). xyz-drive (7):
permits the sample (6) and sample stage (8) to be displaced in the x, y, and z directions with
respect to the tip apex (4). Controllers and plotter (not shown). Here, numbers in parentheses
such as "(1)" refer to the labeled features (see Fig.). The datum coordinate system (0)
indicates the x-y-z directions.

The small spring-like cantilever (1) is carried by the support (2). Optionally, a
piezoelectric element (typically made of a ceramic material) (3) oscillates the cantilever (1). The
sharp tip (4) is fixed to the free end of the cantilever (1). The detector (5) records the deflection
and motion of the cantilever (1). The sample (6) is mounted on the sample stage (8). An xyz
drive (7) permits to displace the sample (6) and the sample stage (8) in x, y, and z directions with
respect to the tip apex (4). Although Fig. 3 shows the drive attached to the sample, the drive can
also be attached to the tip, or independent drives can be attached to both, since it is the relative
displacement of the sample and tip that needs to be controlled. Controllers and plotter are not
shown in the figure.

According to the configuration described above, the interaction between tip and sample,
which can be an atomic scale phenomenon, is transduced into changes of the motion of
cantilever which is a macro scale phenomenon. Several different aspects of the cantilever motion
can be used to quantify the interaction between the tip and sample, most commonly the value of
the deflection, the amplitude of an imposed oscillation of the cantilever, or the shift in resonance
frequency of the cantilever.

Imaging modes

AFM operation is usually described as one of three modes, according to the nature of the
tip motion: contact mode, also called static mode (as opposed to the other two modes, which are
called dynamic modes); tapping mode, also called intermittent contact, AC mode, or vibrating
mode, or, after the detection mechanism, amplitude modulation AFM; non-contact mode, or,
again after the detection mechanism, frequency modulation AFM. It should be noted that despite
the nomenclature, repulsive contact can occur or be avoided both in amplitude modulation AFM
and frequency modulation AFM, depending on the settings.

Contact mode

In contact mode, the tip is "dragged" across the surface of the sample and the contours of
the surface are measured either using the deflection of the cantilever directly or, more
commonly, using the feedback signal required to keep the cantilever at a constant position.
Because the measurement of a static signal is prone to noise and drift, low stiffness cantilevers
(i.e. cantilevers with a low spring constant, k) are used to achieve a large enough deflection
signal while keeping the interaction force low.

Close to the surface of the sample, attractive forces can be quite strong, causing the tip to
"snap-in" to the surface. Thus, contact mode AFM is almost always done at a depth where the
overall force is repulsive, that is, in firm "contact" with the solid surface.

Tapping mode

In ambient conditions, most samples develop a liquid meniscus layer. Because of this,
keeping the probe tip close enough to the sample for short-range forces to become detectable
while preventing the tip from sticking to the surface presents a major problem for contact mode
in ambient conditions. Dynamic contact mode (also called intermittent contact, AC mode or
tapping mode) was developed to bypass this problem. Nowadays, tapping mode is the most
frequently used AFM mode when operating in ambient conditions or in liquids.

In tapping mode, the cantilever is driven to oscillate up and down at or near its resonance
frequency. This oscillation is commonly achieved with a small piezo element in the cantilever
holder, but other possibilities include an AC magnetic field (with magnetic cantilevers),
piezoelectric cantilevers, or periodic heating with a modulated laser beam. The amplitude of this
oscillation usually varies from several nm to 200 nm. In tapping mode, the frequency and
amplitude of the driving signal are kept constant, leading to a constant amplitude of the
cantilever oscillation as long as there is no drift or interaction with the surface.

The interaction of forces acting on the cantilever when the tip comes close to the surface,
Van der Waals forces, dipole-dipole interactions, electrostatic forces, etc. cause the amplitude of
the cantilever's oscillation to change (usually decrease) as the tip gets closer to the sample. This
amplitude is used as the parameter that goes into the electronic servo that controls the height of
the cantilever above the sample. The servo adjusts the height to maintain a set cantilever
oscillation amplitude as the cantilever is scanned over the sample. A tapping AFM image is
therefore produced by imaging the force of the intermittent contacts of the tip with the sample.

When operating in tapping mode, the phase of the cantilever's oscillation with respect to
the driving signal can be recorded as well. This signal channel contains information about the
energy dissipated by the cantilever in each oscillation cycle. Samples that contain regions of
varying stiffness or with different adhesion properties can give a contrast in this channel that is
not visible in the topographic image. Extracting the sample's material properties in a quantitative
manner from phase images, however, is often not feasible.

Non-contact mode

In non-contact atomic force microscopy mode, the tip of the cantilever does not contact
the sample surface. The cantilever is instead oscillated at either its resonant frequency (frequency
modulation) or just above (amplitude modulation) where the amplitude of oscillation is typically
a few nanometers (<10 nm) down to a few picometers. The van der Waals forces, which are
strongest from 1 nm to 10 nm above the surface, or any other long-range force that extends
above the surface acts to decrease the resonance frequency of the cantilever.

This decrease in resonant frequency combined with the feedback loop system maintains a
constant oscillation amplitude or frequency by adjusting the average tip-to-sample distance.
Measuring the tip-to-sample distance at each (x,y) data point allows the scanning software to
construct a topographic image of the sample surface.

Schemes for dynamic mode operation include frequency modulation where a phase-
locked loop is used to track the cantilever's resonance frequency and the more common
amplitude modulation with a servo loop in place to keep the cantilever excitation to a defined
amplitude. In frequency modulation, changes in the oscillation frequency provide information
about tip-sample interactions. Frequency can be measured with very high sensitivity and thus the
frequency modulation mode allows for the use of very stiff cantilevers. Stiff cantilevers provide
stability very close to the surface and, as a result, this technique was the first AFM technique to
provide true atomic resolution in ultra-high vacuum conditions.
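The decrease in resonance frequency described above is often quantified with the standard small-amplitude approximation Δf ≈ -(f0/2k)·(∂F/∂z). This formula and the numbers below are textbook assumptions, not values taken from this text:

```python
def frequency_shift_hz(f0_hz, k_n_per_m, dF_dz_n_per_m):
    """Small-amplitude FM-AFM approximation: delta_f = -(f0 / 2k) * dF/dz.

    For an attractive long-range force such as van der Waals, dF/dz > 0
    as the tip approaches, so the resonance frequency decreases.
    """
    return -f0_hz / (2.0 * k_n_per_m) * dF_dz_n_per_m

# A stiff cantilever (f0 = 300 kHz, k = 40 N/m) sensing a 0.01 N/m force gradient
print(frequency_shift_hz(300e3, 40.0, 0.01), "Hz")  # -> -37.5 Hz
```

The formula also shows why stiff cantilevers remain usable in frequency modulation: even though a large k shrinks the shift, frequency can be measured with very high sensitivity, so small gradients are still detectable.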


Advantages

AFM has several advantages over the scanning electron microscope (SEM). Unlike the
electron microscope, which provides a two-dimensional projection or a two-dimensional image
of a sample, the AFM provides a three-dimensional surface profile. In addition, samples viewed
by AFM do not require any special treatments (such as metal/carbon coatings) that would
irreversibly change or damage the sample, and AFM does not typically suffer from charging artifacts
in the final image. While an electron microscope needs an expensive vacuum environment for
proper operation, most AFM modes can work perfectly well in ambient air or even a liquid
environment. This makes it possible to study biological macromolecules and even living
organisms. In principle, AFM can provide higher resolution than SEM.

It has been shown to give true atomic resolution in ultra-high vacuum (UHV) and, more
recently, in liquid environments. High resolution AFM is comparable in resolution to scanning
tunneling microscopy and transmission electron microscopy. AFM can also be combined with a
variety of optical microscopy and spectroscopy techniques, such as fluorescence microscopy or
infrared spectroscopy, giving rise to scanning near-field optical microscopy and nano-FTIR,
further expanding its applicability. Combined AFM-optical instruments have been applied
primarily in the biological sciences but have recently attracted strong interest in photovoltaics
and energy-storage research, polymer sciences, nanotechnology, and even medical research.


Disadvantages

A disadvantage of AFM compared with the scanning electron microscope (SEM) is the
single scan image size. In one pass, the SEM can image an area on the order of square
millimeters with a depth of field on the order of millimeters, whereas the AFM can only image a
maximum scanning area of about 150×150 micrometers and a maximum height on the order of
10-20 micrometers. One method of improving the scanned area size for AFM is by using parallel
probes in a fashion similar to that of millipede data storage.

The scanning speed of an AFM is also a limitation. Traditionally, an AFM cannot scan
images as fast as an SEM, requiring several minutes for a typical scan, while an SEM is capable
of scanning at near real-time, although at relatively low quality. The relatively slow rate of
scanning during AFM imaging often leads to thermal drift in the image, making the AFM less
suited for measuring accurate distances between topographical features on the image. However,
several fast-acting designs were suggested to increase microscope scanning productivity
including what is being termed videoAFM (reasonable quality images are being obtained with
videoAFM at video rate: faster than the average SEM). To eliminate image distortions induced
by thermal drift, several methods have been introduced.

AFM images can also be affected by nonlinearity, hysteresis, and creep of the
piezoelectric material and cross-talk between the x, y, z axes that may require software
enhancement and filtering. Such filtering could "flatten" out real topographical features.
However, newer AFMs utilize real-time correction software (for example, feature-oriented
scanning) or closed-loop scanners, which practically eliminate these problems. Some AFMs also
use separated orthogonal scanners.

As with any other imaging technique, there is the possibility of image artifacts, which
could be induced by an unsuitable tip, a poor operating environment, or even by the sample
itself. These image artifacts are unavoidable; however, their occurrence and effect on results
can be reduced through various methods. A too-coarse tip can result, for example, from
inappropriate handling or from de facto collisions with the sample caused by scanning too fast
or over an unreasonably rough surface, which causes actual wearing of the tip.

Due to the nature of AFM probes, they cannot normally measure steep walls or
overhangs. Specially made cantilevers and AFMs can be used to modulate the probe sideways as
well as up and down (as with dynamic contact and non-contact modes) to measure sidewalls, at
the cost of more expensive cantilevers, lower lateral resolution and additional artifacts.

Other applications in various fields of study

The latest efforts in integrating nanotechnology and biological research have been
successful and show much promise for the future. Since nanoparticles are a potential vehicle of
drug delivery, the biological responses of cells to these nanoparticles are continuously being
explored to optimize their efficacy and how their design could be improved.

Pyrgiotakis et al. were able to study the interaction between CeO2 and Fe2O3 engineered
nanoparticles and cells by attaching the engineered nanoparticles to the AFM tip.

Studies have taken advantage of AFM to obtain further information on the behavior of
live cells in biological media. Real-time atomic force spectroscopy (or nanoscopy) and dynamic
atomic force spectroscopy have been used to study live cells and membrane proteins and their
dynamic behavior at high resolution, on the nanoscale. Imaging and obtaining information on the
topography and the properties of the cells has also given insight into chemical processes and
mechanisms that occur through cell-cell interaction and interactions with other signaling
molecules (ex. ligands).

Evans and Calderwood used single cell force microscopy to study cell adhesion forces,
bond kinetics/dynamic bond strength and its role in chemical processes such as cell signaling.

Scheuring, Lévy, and Rigaud reviewed studies in which AFM was used to explore the crystal
structure of membrane proteins of photosynthetic bacteria.

Alsteen et al. have used AFM-based nanoscopy to perform a real-time analysis of the
interaction between live mycobacteria and antimycobacterial drugs (specifically isoniazid,
ethionamide, ethambutol, and streptomycin), which serves as an example of the more in-depth
analysis of pathogen-drug interactions that can be done through AFM.

Electron Probe Micro Analyzer (EPMA)

What is Electron probe micro-analyzer (EPMA)

An electron probe micro-analyzer is a microbeam instrument used primarily for the in situ
non-destructive chemical analysis of minute solid samples. EPMA is also informally called an
electron microprobe, or just probe. It is fundamentally the same as an SEM, with the added
capability of chemical analysis. The primary importance of an EPMA is the ability to acquire
precise, quantitative elemental analyses at very small "spot" sizes (as little as 1-2 microns),
primarily by wavelength-dispersive spectroscopy (WDS).

The spatial scale of analysis, combined with the ability to create detailed images of the
sample, makes it possible to analyze geological materials in situ and to resolve complex
chemical variation within single phases (in geology, mostly glasses and minerals). The electron
optics of an SEM or EPMA allow much higher resolution images to be obtained than can be seen
using visible-light optics, so features that are irresolvable under a light microscope can be readily
imaged to study detailed microtextures or provide the fine-scale context of an individual spot
analysis.

A variety of detectors can be used for:

 imaging modes such as secondary-electron imaging (SEI), back-scattered electron
imaging (BSE), and cathodoluminescence imaging (CL),

 acquiring 2D element maps,


 acquiring compositional information by energy-dispersive spectroscopy (EDS) and

wavelength-dispersive spectroscopy (WDS),

 analyzing crystal-lattice preferred orientations (EBSD).

Fundamental Principles of Electron probe micro-analyzer (EPMA)

An electron microprobe operates under the principle that if a solid material is bombarded
by an accelerated and focused electron beam, the incident electron beam has sufficient energy to
liberate both matter and energy from the sample. These electron-sample interactions mainly
liberate heat, but they also yield both derivative electrons and x-rays. Of most common interest
in the analysis of geological materials are secondary and back-scattered electrons, which are
useful for imaging a surface or obtaining an average composition of the material. X-ray
generation is produced by inelastic collisions of the incident electrons with electrons in the inner
shells of atoms in the sample; when an inner-shell electron is ejected from its orbit, leaving a
vacancy, a higher-shell electron falls into this vacancy and must shed some energy (as an X-ray)
to do so.

These quantized x-rays are characteristic of the element. EPMA analysis is considered to
be "non-destructive"; that is, x-rays generated by electron interactions do not lead to volume loss
of the sample, so it is possible to re-analyze the same materials more than once.
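The characteristic, quantized X-ray energies mentioned above can be estimated from Moseley's law. The sketch below uses the simple hydrogen-like screening model, an approximation not derived in this text, and reproduces the ~8 keV Cu Kα line to within about 1%:

```python
RYDBERG_EV = 13.6  # hydrogen ground-state binding energy, eV

def k_alpha_energy_ev(z):
    """Moseley's-law estimate of the K-alpha energy for atomic number z.

    Models the n=2 -> n=1 transition with the nucleus screened by the one
    remaining K-shell electron: E ~ 13.6 eV * (z - 1)^2 * (1 - 1/2^2).
    """
    return RYDBERG_EV * (z - 1) ** 2 * (1.0 - 0.25)

# Copper target (z = 29): estimate ~8.0 keV (measured Cu K-alpha is ~8.05 keV)
print(f"Cu K-alpha ~ {k_alpha_energy_ev(29) / 1000:.2f} keV")
```

Because each element's characteristic lines fall at distinct energies, detecting these X-rays (by EDS or WDS) identifies which elements are present in the sample.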

Electron probe micro-analyzer (EPMA) Instrumentation -

Fig: Electron probe micro-analyzer (EPMA)

EPMA consists of four major components, from top to bottom:

1. An electron source, commonly a W-filament cathode referred to as a "gun."


2. A series of electromagnetic lenses located in the column of the instrument, used to
condense and focus the electron beam emanating from the source; this comprises the
electron optics and operates in an analogous way to light optics.
3. A sample chamber, with movable sample stage (X-Y-Z), that is under a vacuum to
prevent gas and vapor molecules from interfering with the electron beam on its way to
the sample; a light microscope allows for direct optical observation of the sample.
4. A variety of detectors arranged around the sample chamber that are used to collect x-rays
and electrons emitted from the sample.

Applications

 Quantitative EPMA analysis is the most commonly used method for chemical analysis of
geological materials at small scales.
 In most cases, EPMA is chosen where individual phases need to be analyzed
(e.g., igneous and metamorphic minerals), or where the material is of small size or
valuable for other reasons (e.g., experimental run product, sedimentary cement, volcanic
glass, matrix of a meteorite, archeological artifacts such as ceramic glazes and tools).
 In some cases, it is possible to determine a U-Th age of a mineral such as monazite
without measuring isotopic ratios.
 EPMA is also widely used for analysis of synthetic materials such as optical wafers, thin
films, microcircuits, semi-conductors, and superconducting ceramics.

Sample Preparation
Unlike an SEM, which can give images of 3D objects, analysis of solid materials by EPMA
requires preparation of flat, polished sections. A brief protocol is provided here:
1. Nearly any solid material can be analyzed. In most cases, samples are prepared as
standard-size 27 x 46 mm rectangular sections, or in 1-inch round disks.
2. Rectangular sections of rock or similar materials are most often prepared as 30-micron-
thick sections without cover slips. Alternatively, 1-inch cores can be polished. Chips or
grains can be mounted in epoxy disks, and then polished half way through to expose a
cross-section of the material.
3. The most critical step prior to analysis is giving the sample a fine polish so that surface
imperfections do not interfere with electron-sample interactions. This is particularly
important for samples containing minerals with different hardnesses; polishing should
yield a flat surface of uniform smoothness.
4. Most silicate minerals are electrical insulators. Directing an electron beam at the sample
can lead to electrical charging of the sample, which must be dissipated. Prior to analysis,
samples are typically coated with a thin film of a conducting material (carbon, gold and
aluminum are most common) by means of evaporative deposition. Once samples are
placed in a holder, the coated sample surface must be put in electrical contact with the
holder (typically done with a conductive paint or tape).
5. Choice of coating depends on the type of analysis to be done; for example, most EPMA
chemical analysis is done on samples coated by C, which is thin and light enough that
interference with the electron beam and emitted X-rays is minimal.
6. Samples are loaded into the sample chamber via a vacuum interlock and mounted on the
sample stage. The sample chamber is then pumped to obtain a high vacuum.
7. To begin a microprobe session, suitable analytical conditions must be selected, such as
accelerating voltage and electron beam current, and the electron beam must be properly
focused.
8. If quantitative analyses are planned, the instrument first must be standardized for the
elements desired.

Strengths

 An electron probe is essentially the same instrument as an SEM, but differs in that it is
equipped with a range of crystal spectrometers that enable quantitative chemical analysis
(WDS) at high sensitivity.
 An electron probe is the primary tool for chemical analysis of solid materials at small
spatial scales (as small as 1-2 micron diameter); hence, the user can analyze even minute
single phases (e.g., minerals) in a material (e.g., rock) with "spot" analyses.
 Spot chemical analyses can be obtained in situ, which allows the user to detect even small
compositional variations within textural context or within chemically zoned materials.
 Electron probes commonly have an array of imaging detectors (SEI, BSE, and CL) that
allow the investigator to generate images of the surface and internal compositional
structures that help with analyses.

Limitations

 Although electron probes have the ability to analyze for almost all elements, they are
unable to detect the lightest elements (H, He and Li); as a result, for example, the "water"
in hydrous minerals cannot be analyzed.
 Some elements generate x-rays with overlapping peak positions (by both energy and
wavelength) that must be separated.
 Microprobe analyses are reported as oxides of elements, not as cations; therefore, cation
proportions and mineral formulae must be recalculated following stoichiometric rules.
 Probe analysis also cannot distinguish between the different valence states of Fe, so the
ferric/ferrous ratio cannot be determined and must be evaluated by other techniques.
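The recalculation of oxide wt% analyses into cation proportions mentioned in the limitations above can be sketched as follows. The molar masses and the ideal-forsterite analysis are illustrative values chosen for this sketch, not data from the text:

```python
# Molar mass (g/mol), cations and oxygens per formula unit for each oxide
OXIDES = {
    "SiO2": (60.08, 1, 2),
    "MgO":  (40.30, 1, 1),
    "FeO":  (71.84, 1, 1),
}

def cations_per_formula(wt_pct, oxygens_per_formula):
    """Convert oxide wt% to cation proportions normalized to a fixed number
    of oxygens (the usual stoichiometric recalculation, e.g. 4 O for olivine)."""
    moles = {ox: wt / OXIDES[ox][0] for ox, wt in wt_pct.items()}
    total_oxygen = sum(n * OXIDES[ox][2] for ox, n in moles.items())
    scale = oxygens_per_formula / total_oxygen
    return {ox: n * OXIDES[ox][1] * scale for ox, n in moles.items()}

# Ideal forsterite (Mg2SiO4) is ~42.7 wt% SiO2 and ~57.3 wt% MgO
formula = cations_per_formula({"SiO2": 42.7, "MgO": 57.3}, oxygens_per_formula=4)
for ox, n in formula.items():
    print(f"{ox}: {n:.2f} cations")  # Si ~1.00, Mg ~2.00
```

Note that this normalization assumes all Fe is ferrous (FeO); as stated above, the ferric/ferrous ratio cannot be determined by the probe itself.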