Capabilities
EPXMA and EELS are complementary techniques for the study of biological specimens. Of the two, EPXMA is much simpler: the beam is focused onto the area of interest, producing a spectrum in which the presence or absence of an element of interest is easily determined. All the elements present in the analyzed area are detected in a single analytical run, which eliminates the need for sequential screening for different elements and can also reveal the presence of unsuspected elements. It must be realized, however, that the technique detects the elements themselves only; no information about the chemical state of an element can be inferred.
Although EELS analysis and the associated techniques of EFTEM and ESI have
been considered more difficult to perform than EPXMA, developments both in
instrumentation and software have made these techniques easier to use, resulting in an increase in publications featuring these methodologies, especially EFTEM and ESI.
EELS is generally carried out with a field emission source so that probe diameters
of the order of 1 nm can be achieved, making this a very high-resolution method
of analysis. Due to the higher collection efficiency, EELS is also more sensitive
than EPXMA. Recently it has been shown that it is possible to detect single atoms
of iron and calcium in isolated biological macromolecules. As stated previously, the
two techniques are complementary. The elements Na and K, which are important
in many biological systems, are poorly detected in EELS but easily detected with
EPXMA, whereas the superior sensitivity of EELS makes it ideal for the detection of
calcium, which is poorly detected in EPXMA because of the overlap of the Ca Kα peak by the K Kβ peak.
The SEM can also be used for elemental analysis in the EDAX (energy-dispersive X-ray analysis) mode. In the SEM-EDAX technique, the characteristic X-rays emitted from the electron-bombarded surface in the SEM are sorted according to their energies in a solid-state detector to give a qualitative elemental analysis. This technique allows both the surface morphology and the composition of the same area to be determined.
2.1 Fundamentals
Energy-dispersive electron probe X-ray microanalysis (ED-EPMA) appeared when
the use of a solid-state Si(Li) EDX detector in conjunction with an electron micro-
scope was described for the first time in 1968 by Fitzgerald and co-workers [5].
After a while, ED-EPMA based on SEM/EDX became one of the most common,
non-destructive techniques which could provide information on morphology of
(sub)micron individual particles from their secondary or backscattered electron
images, together with information on their chemical compositions from electron-in-
duced, characteristic X-rays of chemical elements in the particles. When ED-EPMA
measurements are performed in an automated, computer-controlled way, then a
significant number of single aerosol particles can be probed in a reasonable analysis
time [6]. After ultrathin-window EDX detectors became commercially available in the 1990s,
the detection of low-Z elements such as C, N, O and F together with heavier elements
(Z ≥ 11) became feasible in ED-EPMA. However, accurate quantification of the low-Z
elements had been a problem for a long time as the characteristic X-ray lines of the
low-Z elements undergo extremely strong attenuation while propagating through
the particle volume, and the conventional correction procedures such as ZAF (atomic number (Z), X-ray absorption (A) and secondary fluorescence (F) correction) and φ(ρz) (the X-ray production and absorption as a function of the mass depth ρz in the specimen, where ρ is the sample density and z is the depth of electron penetration into the sample) methods were not sufficient for an accurate estimation of the matrix effect for
the low-Z elements. In 1999, a quantitative ED-EPMA technique based on a Monte
Carlo simulation coupled with successive approximations, called low-Z particle
EPMA, was developed [7]. In the quantification procedure based on the Monte Carlo
calculation, electron trajectories are simulated, and generated characteristic and
Bremsstrahlung X-rays are calculated for spherical, hemispherical and hexahedral
particles sitting on a flat surface, so that the matrix effect of X-ray lines even from
the low-Z elements can be accurately corrected and the concentrations of chemical
elements of single particles can be determined [7,8]. The quantification procedure
provided results accurate within 12% relative deviations between the calculated and
nominal elemental concentrations when the method was applied to various types
of standard particles such as NaCl, Al2O3, CaSO4·2H2O, Fe2O3, CaCO3 and KNO3
[9]. The low-Z particle EPMA has since played important roles in characterisation
of atmospheric aerosols as many environmentally relevant particles contain low-Z
elements in the form of nitrates, sulphates, oxides or mixtures, including a carbon
matrix, and quantitative information is critical for clear chemical speciation of
individual particles [10–13].
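The successive-approximation step at the heart of such a quantification scheme can be illustrated with a minimal sketch (Python; the function names and the linear stand-in for the intensity calculation are illustrative assumptions, not the actual code of Refs. [7,8]). Measured X-ray intensities are compared with intensities predicted for a trial composition, and the concentrations are corrected and renormalised until the two agree; in the real procedure the prediction step is the Monte Carlo simulation of electron trajectories and X-ray generation for the assumed particle shape.

```python
import numpy as np

def quantify_successive_approx(measured, simulate, c0, tol=1e-4, max_iter=50):
    """Iteratively refine elemental concentrations until the simulated
    X-ray intensities match the measured ones (toy successive-approximation
    loop; `simulate` stands in for the Monte Carlo intensity calculation)."""
    c = np.asarray(c0, dtype=float)
    c /= c.sum()
    for _ in range(max_iter):
        predicted = simulate(c)             # intensities expected for current guess
        c_new = c * (measured / predicted)  # per-element correction factors
        c_new /= c_new.sum()                # renormalise to a total of 100%
        if np.max(np.abs(c_new - c)) < tol:
            return c_new
        c = c_new
    return c

# Hypothetical usage with a purely linear stand-in "simulation"
# (a real Monte Carlo code would also model the matrix effect).
sensitivities = np.array([1.0, 0.6, 0.3])   # per-element response factors
true_c = np.array([0.5, 0.3, 0.2])
measured = sensitivities * true_c
print(quantify_successive_approx(measured, lambda c: sensitivities * c,
                                 c0=np.ones(3) / 3))   # converges to true_c
```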
Although the quantification procedure based on the Monte Carlo calculation was
originally developed for SEM/EDX applications with a lower accelerating voltage
(10–30 kV), it could also be an excellent tool for the quantitative single-particle
analysis using TEM/EDX at a higher accelerating voltage (200 kV) [14]. In a Monte
Carlo calculation newly modified for TEM/EDX applications, electron trajectories
for a particle sitting on the thin formvar/carbon film (coated on TEM grids) are
simulated and X-rays generated both from the particle and the film can be calculated.
The accuracy of the quantification procedure, including the correction for C and
O X-ray intensities from the formvar/carbon film, was evaluated by comparing
calculated elemental atomic concentrations with nominal ones for 12 types of
standard particles. The relative differences (denoted as Δ) between the calculated
and nominal elemental atomic concentrations are within 18%, ranging from 4.4% (ΔK in KCl) to 17.4% (ΔC in CaCO3), for electron-beam-refractory particles such as NaCl, KCl, SiO2, Fe2O3, Na2CO3, CaCO3, Na2SO4, K2SO4 and CaSO4,
which is sufficiently accurate for the reliable identification of chemical species in
these particles (Figure 1). Intense, high-energy electron beams can cause oxygen depletion in nitrates and sulphates, the nitrates decomposing with loss of oxygen (to NO2 and related species) and the sulphates being reduced to sulphites, respectively [6]. Hence, for electron-beam-sensitive salt
particles, such as NaNO3, Ca(NO3)2·4H2O and (NH4)2SO4, the high electron energy
and current employed in TEM/EDX measurements lead to large deviations (>30%)
in quantification.
Figure 1. Transmission electron microscopy images and X-ray spectra of typical KCl,
Na2CO3, NaNO3 and (NH4)2SO4 standard particles.
Reprinted with permission from Ref. [14], Figure 5, p. 8. Copyright (2010) John Wiley
and Sons.
The problem with many of these analytical tools is that they sample only a small area of the substrate, so localized problems, such as surface inclusions that generate pinholes in the deposited films, may be restricted to a small region and easily missed.
Auger electrons are not emitted by helium and hydrogen and the sensitivity increas-
es with atomic number. The detection sensitivity ranges from about 10 at% (atomic
per cent) for lithium to 0.01 at% for uranium. Auger electron spectroscopy can detect
the presence of specific atoms but to quantify the amount requires calibration stan-
dards that are close to the composition of the sample. With calibration, composition
can be established to ±10%. Where there is a mixture of several materials, some of the Auger peaks can overlap, but by analyzing the whole spectrum it can be deconvoluted into the individual component spectra.
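One common way of performing such a deconvolution is to model the measured spectrum as a linear combination of reference spectra of the pure components and to solve for the mixing coefficients by least squares. The following is a minimal sketch in Python; the Gaussian reference spectra and the measured mixture are synthetic stand-ins for real Auger data.

```python
import numpy as np

# Synthetic reference spectra of two pure components on a common
# kinetic-energy axis (stand-ins for measured reference standards).
energy = np.linspace(0.0, 100.0, 500)
ref_a = np.exp(-((energy - 40.0) / 4.0) ** 2)
ref_b = np.exp(-((energy - 48.0) / 5.0) ** 2)   # overlaps ref_a

# "Measured" spectrum: an unknown mixture of the two plus noise.
rng = np.random.default_rng(0)
measured = 0.7 * ref_a + 0.3 * ref_b + rng.normal(0.0, 0.01, energy.size)

# Least-squares unmixing: solve measured ≈ A @ coeffs for the coefficients.
A = np.column_stack([ref_a, ref_b])
coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
print("estimated contributions:", coeffs)        # close to [0.7, 0.3]
```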
Electron beams can be focused to small diameters so AES can be used to identify
the atomic content of very small (submicron) particles as well as extended surfaces.
The secondary electrons emitted by the probing electron bombardment can be used
to visualize the surface in the same manner as scanning electron microscopy (SEM).
Thus, the probing beam can be scanned over the surface to give an SEM micrograph
of the surface and also an Auger compositional analysis of the surface.
From the Laws of Conservation of Energy and the Conservation of Momentum, the
energy, Et, transferred by the physical collision between hard spheres is given by:

E_t = \frac{4 M_i M_t}{(M_i + M_t)^2} E_i \cos^2\theta    (2.2)

where
i = incident particle
t = target particle
E = energy
M = mass
θ = angle of incidence, measured from the line joining the centers of mass
The maximum energy is transferred when cos θ = 1 (zero degrees) and when Mi = Mt.
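As a quick numerical illustration of Eq. (2.2), the transferred energy can be evaluated for a few example collisions (a sketch; masses are in atomic mass units and the values are only illustrative):

```python
import math

def energy_transferred(e_incident, m_incident, m_target, theta_deg):
    """Energy transferred in a hard-sphere collision, Eq. (2.2)."""
    theta = math.radians(theta_deg)
    factor = 4.0 * m_incident * m_target / (m_incident + m_target) ** 2
    return e_incident * factor * math.cos(theta) ** 2

# 1 keV He (4 u) striking a Si atom (28 u) head-on (theta = 0):
print(energy_transferred(1000.0, 4.0, 28.0, 0.0))   # ≈ 437.5 eV
# Equal masses transfer the full energy in a head-on collision:
print(energy_transferred(1000.0, 28.0, 28.0, 0.0))  # 1000.0 eV
```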
Most commercial ISS equipment only analyzes for charged particles, and particles
that are neutralized on reflection are lost. The energy of the scattered ion is typically
analyzed by an electrostatic sector analyzer or a cylindrical mirror analyzer. Ions for
bombardment are provided by an ion source. Depth profiling can be done using
sputter profiling techniques.
Quantification Approaches
Common strategies to quantify species by atomic spectrometry involve obtaining optimum atomic peaks and regressing them (either the heights or the areas) against the concentrations of the calibration solutions. However, classical (univariate) least squares regression requires that the analytical signal not be affected by concomitants or other interfering effects. In other words, the signal must be as specific as possible, apart from, at most, a constant background. Traditional calibration (including calibration with external standards and the internal standard approach) will not be reviewed here; we only stress that, in order to apply it, the signal must be free from interference.
In recent years it has become recognized that univariate calibration is often more of a limitation than an advantage, and different types of multivariate calibration have been considered instead. This is the case for ICP (with both optical and mass detection), LIBS and several techniques for the direct analysis of solid samples (X-ray fluorescence, EPXMA (electron probe X-ray microanalysis) and laser ablation ICP-MS). Three recent general reviews (see Table 8) collected more than 110 applications dealing with the combination of several multivariate regression methods with several atomic techniques.
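To make the idea concrete, the sketch below builds a PLS calibration on synthetic spectra containing an overlapping concomitant peak, using scikit-learn; the data set, the number of latent variables and the cross-validation settings are illustrative assumptions, not taken from the cited reviews.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)

# Synthetic calibration set: 30 "spectra" of 200 channels whose intensities
# depend on the analyte concentration plus an overlapping concomitant.
n_samples, n_channels = 30, 200
conc_analyte = rng.uniform(0.0, 10.0, n_samples)
conc_interf = rng.uniform(0.0, 5.0, n_samples)
channels = np.arange(n_channels)
peak_analyte = np.exp(-((channels - 80) / 6.0) ** 2)
peak_interf = np.exp(-((channels - 90) / 8.0) ** 2)    # overlaps the analyte peak
spectra = (np.outer(conc_analyte, peak_analyte)
           + np.outer(conc_interf, peak_interf)
           + rng.normal(0.0, 0.02, (n_samples, n_channels)))

# PLS model relating the full spectra to the analyte concentration.
pls = PLSRegression(n_components=3)
predicted = cross_val_predict(pls, spectra, conc_analyte, cv=5).ravel()
rmsecv = np.sqrt(np.mean((predicted - conc_analyte) ** 2))
print(f"RMSECV = {rmsecv:.3f}")
```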
An issue that requires further study is the need for simple procedures to calculate figures of merit (performance characteristics) in multivariate regression. IUPAC presented an approach based on the so-called net analyte signal.11 When ISO and the European Union set new definitions for what were formerly called limits of detection (now termed the decision limit and the capability of detection),12,13 studies were carried out to apply those concepts to multivariate regression as simply as possible, which yielded a straightforward approach to address this issue in a pragmatic, holistic way.14,15 It is based on an ordinary regression between the reference values of the calibration solutions and the values predicted by the multivariate model, together with consideration of the type I and type II errors.
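A simplified sketch of that kind of calculation is shown below: the model predictions are regressed on the reference concentrations, and a decision limit and a capability of detection are estimated from the residual standard deviation and the t-values for the chosen type I (α) and type II (β) error probabilities. This is only an illustrative approximation of the idea, not the exact procedure of Refs. 14 and 15.

```python
import numpy as np
from scipy import stats

def detection_limits(reference, predicted, alpha=0.05, beta=0.05):
    """Approximate decision limit and capability of detection from the
    regression of multivariate-model predictions on reference values."""
    ref = np.asarray(reference, float)
    pred = np.asarray(predicted, float)
    n = ref.size
    slope, intercept, *_ = stats.linregress(ref, pred)
    residuals = pred - (intercept + slope * ref)
    s_res = np.sqrt(np.sum(residuals ** 2) / (n - 2))     # residual std. dev.
    # Standard deviation of a prediction extrapolated to zero concentration.
    s0 = s_res * np.sqrt(1.0 + 1.0 / n
                         + ref.mean() ** 2 / np.sum((ref - ref.mean()) ** 2))
    t_alpha = stats.t.ppf(1.0 - alpha, n - 2)   # controls false positives
    t_beta = stats.t.ppf(1.0 - beta, n - 2)     # controls false negatives
    cc_alpha = t_alpha * s0 / slope             # decision limit
    cc_beta = cc_alpha + t_beta * s0 / slope    # capability of detection
    return cc_alpha, cc_beta

# Hypothetical reference values and cross-validated model predictions:
ref = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
pred = np.array([0.1, 0.9, 2.2, 3.8, 6.1, 8.2, 9.8])
print(detection_limits(ref, pred))
```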
A question arises: what about the concomitants? The answer requires another paradigm shift. Traditional studies of concomitants rely on a one-at-a-time basis: each potential interferent is studied by means of a series of solutions in which the concentration of the suspected concomitant is varied while the others are kept constant, and conclusions are drawn from their measurement. This approach does not consider interactions between the concomitants, so it seems more reasonable to deploy a formal experimental design in which all the concomitants are varied simultaneously. Furthermore, if the standard solutions employed in the experimental design contain different amounts of the analyte, they can also be used as calibrators. Because they contain “all” the relevant concomitants, the matrix variability inherent in future samples will be present in the calibration model.
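As a minimal sketch of this strategy (the concomitants, their levels and the analyte concentrations below are purely illustrative), a two-level full factorial design over the concomitants can be crossed with several analyte levels, so that the same solutions serve both for the interference study and as calibrators:

```python
from itertools import product

# Illustrative concomitants with low/high concentration levels (mg/L).
concomitants = {"Na": (0.0, 50.0), "Mg": (0.0, 20.0), "Fe": (0.0, 5.0)}
analyte_levels = [0.0, 2.0, 5.0, 10.0]        # analyte concentrations (mg/L)

# Full factorial over the concomitant levels, crossed with the analyte levels.
design = []
for levels in product(*concomitants.values()):
    for c_analyte in analyte_levels:
        run = dict(zip(concomitants, levels))
        run["analyte"] = c_analyte
        design.append(run)

print(f"{len(design)} solutions to prepare")   # 2**3 * 4 = 32
for run in design[:3]:
    print(run)
```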
Regarding AAS, only two references have been found since 2007: a comparison of results obtained using PLS with FAAS against those from a new method based on the zero-crossing point continuous wavelet transformation (an unusual multivariate regression method), and the determination of Cu in lubricating oils by ETAAS (see Table 8). For ICP, with both optical and mass detection, some applications were reported. Most applications of PLS were reported for LIBS quantitation, although XRF and TOF-SIMS (which also yield very complex spectra) benefited from PLS models as well (see Table 8).
ANNs involve a great deal of computational work, as many trials are usually needed to obtain a reliable model. A great deal of laboratory work is also required to acquire spectra from a reasonable number of standards. Furthermore, a chemical interpretation of the model is not currently possible. In addition, it has been reported fairly frequently that ANNs do not always outperform classical methods, typically PLS. Error back-propagation ANNs (BP-ANNs)16 are nowadays the common choice. The term means that the ANN develops the model iteratively by propagating the error back, in a two-step cycle. First, the atomic spectrum enters the input layer of the ANN and is transmitted forward through the neural “connections” (synapses) until an output is obtained. That output is compared with the target concentration and an error is derived for each sample. Here the second phase starts: the errors are transmitted back from the output to each neuron in the hidden layer(s) of the ANN, so that each neuron receives only a portion of the total error signal, in accordance with the relative contribution that neuron made to the output.17
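A minimal sketch of such a calibration is given below, using scikit-learn's MLPRegressor (a feed-forward network trained by error back-propagation); the synthetic spectra and the network settings are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Synthetic training set: 60 "spectra" of 150 channels whose peak height
# grows with the analyte concentration.
n_samples, n_channels = 60, 150
conc = rng.uniform(0.0, 10.0, n_samples)
channels = np.arange(n_channels)
peak = np.exp(-((channels - 60) / 5.0) ** 2)
spectra = np.outer(conc, peak) + rng.normal(0.0, 0.05, (n_samples, n_channels))

# Feed-forward network with one hidden layer, trained by back-propagation.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                 max_iter=5000, random_state=0),
)
model.fit(spectra, conc)

# Predict the concentration of a new, noise-free test spectrum.
test = (4.0 * peak).reshape(1, -1)
print(model.predict(test))   # should be close to 4.0
```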
Only two applications were found in combination with FAAS. ICP-OES is a potentially interesting field for ANNs and, accordingly, they have been employed there recently. As with PLS, LIBS constitutes an outstanding field of application for ANNs. The corresponding references are cited in Table 8.
As mentioned above, SVMs do not require a random initialisation and rely on two
main working stages18–20: (i) project the original data (i.e., the spectra) into a space
of higher dimension by calculating additional dimensions; this is termed “nonlinear
mapping” or “feature mapping”; and (ii) construct an optimal separating surface
(separating hyperplane or boundary) which maximizes the margin between the groups
of samples. The mathematical functions by which the original spectra are projected into the higher-dimensional space (the feature space) are termed “kernel functions,” the most common of which are the linear and the radial basis function (RBF) kernels. The good news is that it is not necessary for the analyst to know the mathematical functions behind the kernel in advance: once the type of kernel is selected (e.g., linear, RBF or polynomial), the nonlinear mapping functions are set automatically.
When SVMs are applied to regression they are frequently termed SVM-R or SVR. They search for an optimal hyperplane such that as many samples as possible lie within a tolerance around it; this yields a sort of “tube” that encloses the calibration points. The notion resembles the classical least squares fit in univariate regression, which finds a regression line that is, overall, equidistant from the experimental data pairs. A very important task is to avoid overfitting, which may occur easily. Recent reports on SVM applied to atomic spectrometry were devoted to LIBS implementations (Table 8).
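A minimal sketch of an SVR calibration with an RBF kernel, using scikit-learn, is given below; the synthetic data and the C, epsilon and gamma settings are illustrative assumptions, and in practice they would be tuned by cross-validation precisely to avoid the overfitting mentioned above.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Synthetic calibration spectra (80 samples x 120 channels) with a mildly
# nonlinear response of the peak to the analyte concentration.
n_samples, n_channels = 80, 120
conc = rng.uniform(0.0, 10.0, n_samples)
channels = np.arange(n_channels)
peak = np.exp(-((channels - 50) / 5.0) ** 2)
response = conc + 0.03 * conc ** 2                     # mild nonlinearity
spectra = np.outer(response, peak) + rng.normal(0.0, 0.05,
                                                (n_samples, n_channels))

# epsilon-SVR with an RBF kernel; C and epsilon set the penalty for points
# falling outside the "tube" and the width of the tube, respectively.
model = make_pipeline(
    StandardScaler(),
    SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma="scale"),
)

# Cross-validation guards against overfitting of the calibration model.
scores = cross_val_score(model, spectra, conc, cv=5,
                         scoring="neg_root_mean_squared_error")
print("RMSECV per fold:", -scores)
```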