
Analytical Chemistry

Introduction:
Chemistry occupies a prominent position among the branches of science. As many eminent scientists have put it, “If physics is the philosophy of science and mathematics is the language of science, chemistry is the life of science.” This is evident from the fact that even a life-supporting process like photosynthesis is a chemical reaction, while equally striking are the contributions chemistry has made in all walks of life, especially in the areas of energy, medicine, food preservation, clothing, construction materials and many other materials of human comfort. Analytical chemistry, in turn, occupies a central position among the branches of chemistry and related fields.
Chemical Analysis and Types

Analytical chemistry deals with the various types of chemical analysis, classified according to various criteria. One such classification, based on sample size, distinguishes macro, semimicro, micro and ultramicro analysis:

Macro (>0.1 g), semimicro (0.01 to 0.1 g), micro (0.0001 to 0.01 g) and ultramicro (<0.0001 g).

Depending on the relative amounts of the constituents in a sample another classification also is in wide
use. The following table depicts the range of concentration of constituents in a sample and the
corresponding terms.

Percentage of the constituent in a sample    Term for the constituent
1% to 100%                                   Major
0.01% to 1%                                  Minor
1 ppb to 100 ppm                             Trace
Less than 1 ppb                              Ultra trace

Concentrations of major constituents are generally determined by volumetric and gravimetric analysis, while determination of minor, trace and ultra trace constituents requires instruments of increasing sophistication.
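As a quick illustration of this classification (not part of the original notes), a short Python sketch that assigns the term from the percentage of the constituent; the unit conversions 100 ppm = 0.01% and 1 ppb = 1e-7% follow from the definitions of ppm and ppb:

    def constituent_class(fraction_percent):
        # Classify a constituent by its percentage in the sample
        # (100 ppm = 0.01 %, 1 ppb = 1e-7 %)
        if fraction_percent >= 1:
            return "major"
        if fraction_percent >= 0.01:
            return "minor"
        if fraction_percent >= 1e-7:
            return "trace"
        return "ultra trace"

    print(constituent_class(12.5))   # major
    print(constituent_class(0.05))   # minor
    print(constituent_class(3e-4))   # trace (3 ppm)
    print(constituent_class(5e-8))   # ultra trace (0.5 ppb)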

Another classification distinguishes wet chemical and dry chemical analysis. You have already been exposed to the terms qualitative and quantitative chemical analysis.

Whatever the method or mode of analysis, the data generated must be reliable. Reliability of the data can be assured by replicate analysis, proper calibration of the instruments, use of standard reference materials and statistical tests. Nevertheless, errors cannot be completely avoided, owing to inherent uncertainties in the measurements and to factors beyond the control of the analyst. However, errors can be minimized by adopting suitable strategies.

Errors in Chemical Analysis:


Since “uncertainty” is inherent in nature, as Heisenberg’s principle reminds us, errors in chemical analysis are quite natural. However, it is possible to minimize errors to a great extent by adopting suitable methods of analysis, using sensitive instruments, and testing the reliability of the data by comparison with data obtained from Standard Reference Materials. In addition, the probable error in a measurement can often be evaluated, so that the limits within which the true value lies can be stated at a given level of probability.

Reliability of data can be checked by

1) Performing experiments designed to reveal the errors
2) Comparing the values obtained with those from standards
3) Calibrating the instruments
4) Statistical tests

But none of these is perfect.

How much error can one tolerate in an analysis? It depends on the situation!

Important terms: (Mean, Median, Precision)

Replicate analysis is done to find the variation in the measurements, from which the mean and the median are obtained.

Mean – the arithmetic average of the data set: x̄ = Σxᵢ / N, where N is the number of measurements.

Median – write the data in increasing order and take the middle value for an odd number of data points. For an even number of data points, take the average of the two middle values. In ideal cases the mean and the median are equal; when the data set is small, they often differ.

The median is normally used to detect an “outlier”: the mean is affected by an outlier, while the median is not.
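To make this concrete, a minimal Python sketch (the data are made up for illustration) showing how an outlier pulls the mean but leaves the median untouched:

    import statistics

    # Replicate measurements with one suspect value (hypothetical data)
    data = [20.1, 20.3, 20.2, 20.4, 25.9]

    mean = statistics.mean(data)      # pulled upward by the outlier 25.9
    median = statistics.median(data)  # middle value, unaffected by the outlier

    print(f"mean   = {mean:.2f}")    # 21.38
    print(f"median = {median:.2f}")  # 20.30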

Precision: the reproducibility of the measurements. It is determined simply by repeating the measurement on replicate samples.

Precision is described by terms such as “standard deviation”, “variance” and “coefficient of variation”. All of these are functions of the deviation from the mean.

Accuracy: indicates closeness to the “true value” or accepted value. It is expressed in terms of absolute or relative error.

Types of errors:
Absolute error E in a measurement is the difference between the measured value and the true (accepted) value:

    E = Xi − Xt

The sign is retained in the absolute error.

Relative error: a more useful expression, given as a percentage, in parts per thousand, in ppm, etc.:

    Er = (Xi − Xt) / Xt × 100%

Accuracy can be judged only if we know the true value.
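The two expressions side by side in a small Python sketch (the values of Xi and Xt are hypothetical):

    def absolute_error(measured, true):
        # The sign is retained: a negative error means the result is low
        return measured - true

    def relative_error_percent(measured, true):
        return (measured - true) / true * 100

    x_t = 20.00  # accepted (true) value, hypothetical
    x_i = 19.78  # measured value, hypothetical
    print(f"E  = {absolute_error(x_i, x_t):+.2f}")            # E  = -0.22
    print(f"Er = {relative_error_percent(x_i, x_t):+.1f} %")  # Er = -1.1 %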

Two types of errors can be distinguished:

1. Random (indeterminate) error – it makes the data scatter symmetrically around a
   mean value. This type of error is reflected in the precision.
2. Systematic (determinate) error – the mean of the data differs from the accepted or true
   value. It makes the measurements in a series of replicate analyses either all too high or
   all too low, e.g. a volatile analyte lost while heating the sample.

Gross errors: these differ from the above. They occur only occasionally, are often large, and are frequently the product of human error. Gross errors lead to “outliers” – results that appear markedly different from the other data in a set of replicate measurements.

Statistical tests can be performed to see whether a result is an “outlier”.

Systematic errors: a systematic error has a definite value and an assignable cause, and it has the same magnitude for replicate measurements. The resulting “bias” affects all the data in a set in the same way and carries a sign.

Sources of systematic errors:

1. Instrumental errors
2. Method errors
3. Personal errors

Instrumental errors: arise from faulty calibration and unstable operating conditions. E.g. the volumes delivered by pipettes, burettes, volumetric flasks etc. may not be correct for various reasons, and variations in line voltage affect the functioning of electronic instruments, quite apart from errors due to faulty calibration.

Method errors: arise from the non-ideal chemical or physical behavior of the analyte system, for example the incompleteness or slowness of a reaction, the non-specificity of a reagent, or side reactions. The extra amount of titrant required to produce the color change of the indicator in a titration itself introduces an error.

Personal errors: due to carelessness, inattention or limitations of the analyst. Personal judgment is required in most measurements and can be affected by number bias. Insensitivity to color changes, slowness in starting a timer or sending a signal, etc. can also lead to errors of this category.

The effect of systematic errors on experimental results

Systematic errors are either constant or proportional. With a proportional error, the absolute error varies with sample size while the relative error remains the same; with a constant error, the absolute error is fixed while the relative error grows as the sample size decreases.

Constant error: becomes serious as the sample size decreases (see the numerical sketch below), e.g. the excess reagent required to produce the color change in a titration.

Proportional error: error due to an interference, e.g. iron(III) contamination in the iodometric determination of Cu(II).
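A tiny Python illustration of why a constant error grows in relative importance as the sample shrinks; the 0.50 mg figure is an arbitrary assumed constant error:

    # A fixed (constant) absolute error and its effect on the relative error
    # as the sample size shrinks -- illustrative numbers only
    constant_error_mg = 0.50  # e.g. analyte equivalent of the indicator blank

    for sample_mg in (500, 250, 50, 10):
        relative_pct = constant_error_mg / sample_mg * 100
        print(f"sample {sample_mg:>3} mg -> relative error {relative_pct:.1f}%")
    # 0.1%, 0.2%, 1.0%, 5.0%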

Steps for eliminating systematic errors:

Instrumental errors can be eliminated or minimized by periodic calibration of the instrument and by operating the instrument strictly according to the manufacturer’s guidelines.
Personal errors can be minimized by care, self-discipline and proper choice of analytical methods.

Method errors can be detected and corrected in several ways:

A) By using Standard Reference Materials (SRMs). The concentrations of the constituents in an SRM are certified:

1. by analysis with a previously validated method;
2. by two or more independent analytical methods;
3. by analysis by a network of competent laboratories.

B) If an SRM is not available, another independent method can be used, together with a statistical test to ascertain that the difference between the two sets of results is not due to random error.

C) By blank determinations. (A blank contains the reagents and solvents but no analyte. Sometimes many of the constituents of the sample are added to simulate the sample matrix.)

D) By varying the sample size, a constant error can be easily detected.

(Spreadsheet programs such as Excel can be used to work out the mean and the deviations from the mean.)
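The same calculation in a few lines of Python, for readers not using Excel (the replicate values are hypothetical):

    # Deviations from the mean: the quantities from which all the
    # precision measures below are built
    data = [19.4, 19.5, 19.6, 19.8, 20.1, 20.3]  # hypothetical replicates

    mean = sum(data) / len(data)
    deviations = [x - mean for x in data]

    print(f"mean = {mean:.3f}")
    for x, d in zip(data, deviations):
        print(f"{x:.1f}  deviation = {d:+.3f}")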

Random errors
Random errors cannot be totally eliminated: they are caused by innumerable small, uncontrollable variables and are therefore inevitable. Individually they are very small, but their accumulated effect makes the values of the replicates fluctuate around the mean of the set. When such individual errors combine in all possible ways, plotting the relative frequency of each outcome against the possible combinations yields a Gaussian curve.

Take the calibration experiment of a pipette as an example to illustrate the Gaussian curve (pp. 107-109 of the textbook).

Statistical Methods
Statistical methods are used to evaluate random errors, since we assume that random errors follow a Gaussian distribution. (There are experimental results that follow a binomial distribution as well.) We often use the Gaussian distribution as an approximation to the distribution of results, and the approximation becomes better in the limit of a large number of experiments.

We normally seek information about a ‘population’ by studying a ‘sample’. A population may be finite (real) or hypothetical (conceptual). A Gaussian curve can be represented by an equation containing two parameters, the population mean μ and the population standard deviation σ.

To estimate them we use the sample mean x̄ and the sample standard deviation s as the statistics.

Sample mean: x̄ = Σxᵢ / N, where N is the number of measurements made in the sample.

Population mean: μ = Σxᵢ / N, where N is the number of measurements in the population (strictly, the limit as N → ∞).

The difference between x̄ and μ decreases rapidly as N increases; when N reaches a value of 20-30, the difference is negligible (the simulation below illustrates this).

Population standard deviation: a measure of the precision of the population data, given by

    σ = √[ Σ(xᵢ − μ)² / N ]

where N is the number of data points.
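A quick simulation of that convergence claim; the population parameters μ = 50.0 and σ = 2.0 are made up, and the random seed is fixed only for reproducibility:

    import random, statistics

    random.seed(1)
    mu, sigma = 50.0, 2.0  # hypothetical population parameters

    # Sample means drift closer to the population mean as N grows
    for n in (3, 10, 30, 100):
        sample = [random.gauss(mu, sigma) for _ in range(n)]
        x_bar = statistics.mean(sample)
        print(f"N = {n:>3}: x_bar = {x_bar:.3f}, |x_bar - mu| = {abs(x_bar - mu):.3f}")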

Fig. 6-4(a) illustrates two populations in which curve B has twice the standard deviation of curve A; hence curve A represents the higher precision. Fig. 6-4(b) plots relative frequency against z = (x − μ)/σ, which yields a single Gaussian curve representing all populations; this is called the Gaussian error curve. The equation representing it is

    y = exp[ −(x − μ)² / (2σ²) ] / (σ√(2π))

Note that the expression contains the square of σ. The quantity σ² is an important quantity called the “variance”.
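Evaluated directly, for instance in Python (a sketch, shown here for the standard curve μ = 0, σ = 1):

    import math

    def gauss(x, mu, sigma):
        # Gaussian error curve evaluated at x for a population (mu, sigma)
        return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

    # The curve peaks at x = mu (i.e. z = 0)
    print(gauss(0.0, 0.0, 1.0))  # 0.39894..., the maximum of the standard curve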

The error curve has the following properties:

1. The mean is at the central point of maximum frequency.
2. Symmetric distribution of positive and negative deviations about this maximum.
3. Exponential decrease in frequency as the magnitude of the deviation increases.

The area under the Gaussian curve forms a predictive tool: 68.3% of the area falls within one standard deviation of the mean, 95.4% within two standard deviations and 99.7% within three standard deviations. Put another way, in a single measurement the chance that the random uncertainty is no more than one standard deviation is 68.3%.
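These percentages can be verified numerically with the error function from Python's standard library (a sketch; erf is the standard mathematical error function):

    import math

    def area_within(k):
        # Fraction of a Gaussian population lying within +/- k standard deviations
        return math.erf(k / math.sqrt(2))

    for k in (1, 2, 3):
        print(f"within {k} sigma: {area_within(k) * 100:.1f}%")
    # within 1 sigma: 68.3%
    # within 2 sigma: 95.4%
    # within 3 sigma: 99.7%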
Sample standard deviation: it is a measure of precision and is given by the expression

    s = √[ Σ(xᵢ − x̄)² / (N − 1) ]

where N − 1 is the number of degrees of freedom. The degrees of freedom is the number of independent results used in the computation of the standard deviation.

An alternate expression is

    s = √[ ( Σxᵢ² − (Σxᵢ)² / N ) / (N − 1) ]

The following precautions have to be taken while calculating standard deviations:

1. Never round a standard deviation calculation until the very end.
2. Use the first equation when the data carry five or more digits; never use the second in such cases. Note that computers and calculators generally use the second expression, so be aware of the inherent round-off errors.

The quantity s² is an estimate of σ² (the population variance).
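Both forms in Python for comparison (a sketch; the replicate data are hypothetical, and with few digits the two forms agree, while with many digits the second can lose accuracy):

    import math

    def stdev_deviations(data):
        # First form: built from squared deviations about the mean
        m = sum(data) / len(data)
        return math.sqrt(sum((x - m) ** 2 for x in data) / (len(data) - 1))

    def stdev_computational(data):
        # Alternate form: prone to round-off when the values carry many digits
        n = len(data)
        return math.sqrt((sum(x * x for x in data) - sum(data) ** 2 / n) / (n - 1))

    data = [19.783, 19.781, 19.786, 19.780, 19.784]  # hypothetical replicates
    print(f"{stdev_deviations(data):.6f}")     # 0.002387
    print(f"{stdev_computational(data):.6f}")  # agrees here; differs when round-off bites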

Dixon's Q test

In statistics, Dixon's Q test, or simply the Q test, is used for identification and rejection of outliers. Per
Dean and Dixon, and others, this test should be used sparingly and never more than once in a data set.
To apply a Q test for bad data, arrange the data in order of increasing values and calculate Q as defined:

    Q_calc = gap / range

where the gap is the absolute difference between the outlier in question and the value closest to it, and the range is the difference between the largest and smallest values in the set. If Q_calculated > Q_table, reject the questionable point.
Example:

For the data:

    0.189, 0.167, 0.187, 0.183, 0.186, 0.182, 0.181, 0.184, 0.181, 0.177

arranged in increasing order:

    0.167, 0.177, 0.181, 0.181, 0.182, 0.183, 0.184, 0.186, 0.187, 0.189

the suspected outlier is 0.167. Calculate Q:

    Q = gap / range = (0.177 − 0.167) / (0.189 − 0.167) = 0.010 / 0.022 = 0.455

With 10 observations, Q_calculated (0.455) > Q_table at 90% confidence (0.412), so the point may be rejected with 90% confidence. However, at 95% confidence Q_calculated (0.455) < Q_table (0.466); therefore 0.167 is retained at the 95% confidence level even though it is rejected at 90%.

This table summarizes the limit values of the test.

Number of values:    3      4      5      6      7      8      9     10
Q90%:              0.941  0.765  0.642  0.560  0.507  0.468  0.437  0.412
Q95%:              0.970  0.829  0.710  0.625  0.568  0.526  0.493  0.466
Q99%:              0.994  0.926  0.821  0.740  0.680  0.634  0.598  0.568
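A minimal Python sketch of the test at the 90% level, using the critical values from the table above (the function name q_test and the treatment of either end of the sorted data are my own conventions, not from the original notes):

    # Dixon's Q test for the most extreme value in a small data set (3-10 values)
    Q90 = {3: 0.941, 4: 0.765, 5: 0.642, 6: 0.560, 7: 0.507,
           8: 0.468, 9: 0.437, 10: 0.412}

    def q_test(data, critical=Q90):
        values = sorted(data)
        gap_low = values[1] - values[0]     # gap if the smallest value is the suspect
        gap_high = values[-1] - values[-2]  # gap if the largest value is the suspect
        data_range = values[-1] - values[0]
        if gap_low >= gap_high:
            suspect, q = values[0], gap_low / data_range
        else:
            suspect, q = values[-1], gap_high / data_range
        return suspect, q, q > critical[len(values)]

    data = [0.189, 0.167, 0.187, 0.183, 0.186,
            0.182, 0.181, 0.184, 0.181, 0.177]
    print(q_test(data))  # (0.167, 0.4545..., True) -> reject at 90% confidence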
