BCHET-141: Sampling and Error in Chemical Analysis


Guess Paper-I

Q. Describe and explain the importance of the concept of “sampling” in analytical methods of
analysis.
Ans. This activity comprises two fairly distinct study topics: sampling and the statistical analysis of data.
Under “Sampling”, you will be introduced to the concept and challenges of sampling as a means of
acquiring a representative laboratory sample from the original bulk specimen. At the end of the
subtopic on “sampling”, you will not only appreciate that the sampling method adopted by an analyst
is an integral part of any analytical method, but will also discover that it is usually the most
challenging part of the analysis process.
Another very important stage in any analytical method is the evaluation of results, where
statistical tests (based on quantities that describe a distribution of, say, experimentally measured data)
are carried out to determine the confidence in our acquired data. In the latter part of this activity, you
will be introduced to the challenges encountered by an analytical chemist when determining the
uncertainty associated with every measurement during a chemical analysis process, in a bid to
determine the most probable result. You will be introduced to ways of describing and reducing, if
necessary, this uncertainty in measurements through statistical techniques.
A typical analytical method comprises seven important stages, namely: plan of analysis
(determining the sample to be analysed, the analyte, and the level of accuracy needed); sampling;
sample preparation (sample dissolution, workup, reaction, etc.); isolation of the analyte (e.g.,
separation, purification, etc.); measurement of the analyte; standardization of the method (instrumental
methods need to be standardized in order to give reliable results); and evaluation of results (statistical
tests to establish the most probable data). Of these stages, sampling is often the most challenging for the
analytical chemist, because it demands the acquisition of a laboratory sample that is representative of the
bulk specimen for analysis.
Therefore, sampling is an integral and significant part of any chemical analysis and requires special
attention. Furthermore, we know that analytical work in general results in the generation of
numerical data and that operations such as weighing, diluting, etc., are common to almost every
analytical procedure. The results of such operations, together with instrumental outputs, are often
combined mathematically to obtain a result or a series of results. How these results are reported is
important in determining their significance.
It is important that analytical results be reported in a clear, unbiased manner that is truly reflective of
the very operations that go into the result. Data need to be reported with the proper number of
significant digits and rounded off correctly. In short, at the end of, say, a chemical analysis procedure,
the analyst is often confronted with the question of the reliability of the measurements or data acquired, hence
the significance of the stage of evaluation of results, where statistical tests are done to determine
confidence limits for the acquired data.
In this present activity, procedures and the quantities that describe a distribution of data will be
covered and the sources of possible error in experimental measurements will be explored.
Sampling errors: Biased or non-representative sampling and contamination of samples during or
after their collection are two sources of sampling error that can lead to significant analytical errors. Now, while
selecting an appropriate method helps ensure that an analysis is accurate, it does not guarantee
that the result of the analysis will be sufficient to solve the problem under investigation or
that a proposed answer will be correct. These latter concerns are addressed by carefully collecting the
samples to be analyzed; hence the importance of studying “proper sampling strategies”. It is important to
note that the final result in the determination of, say, the copper content of an ore sample would
typically be one or more numbers indicating the concentration(s) of the analyte(s) in the sample.
Uncertainty in measurements: However, there is always some uncertainty associated with each
operation or measurement in an analysis and thus there is always some uncertainty in the final result.
Knowing the uncertainty is as important as knowing the final result. Having data that are so
uncertain as to be useless is no better than having no data at all. Thus, there is a need to determine
some way of describing and reducing, if necessary, this uncertainty. Hence the importance of the
study of the subtopic of Statistics, which assists us in determining the most probable result and
provides us the quantities that best describe a distribution of data. This subtopic of Statistics will form
a significant part of this learning activity.
Q. Explain the key concepts of sampling.
Ans. Key concepts of sampling:
● Accuracy: refers to how closely the measured value of a quantity corresponds to its “true”
value.
● Determinate errors: these are systematic errors, often referred to as “bias”. In theory, these
could be eliminated by careful technique.
● Error analysis: study of uncertainties in physical measurements.
● Indeterminate errors: these are errors caused by the need to make estimates in the last figure
of a measurement, by noise present in instruments, etc. Such errors can be reduced, but never
entirely eliminated.
● Mean (m): defined mathematically as the sum of the values, divided by the number of
measurements.
● Median: is the central point in a data set. Half of all the values in a set will lie above the
median, half will lie below the median. If the set contains an odd number of data points,
the median will be the central point of that set. If the set contains an even number of points,
the median will be the average of the two central points. In populations where errors are
evenly distributed about the mean, the mean and median will have the same value.
● Precision: expresses the degree of reproducibility, or agreement between repeated
measurements.
● Range: is sometimes referred to as the spread and is simply the difference between the largest
and the smallest values in a data set.
● Random Error: error that varies from one measurement to another in an unpredictable
manner in a set of measurements.
● Sample: a substance or portion of a substance about which analytical information is sought.
● Sampling: operations involved in procuring a reasonable amount of material that is
representative of the whole bulk specimen. This is usually the most challenging part of
chemical analysis.
● Sampling error: error due to sampling process(es).
● Significant figures: the minimum number of digits that one can use to represent a value
without loss of accuracy. It is basically the number of digits that one is certain about.
● Standard deviation (s): this is one measure of how closely the individual results or
measurements agree with each other. It is a statistically useful description of the scatter of the
values determined in a series of runs.
● Variance (s²): this is simply the square of the standard deviation. It is another way of
describing precision. (It should not be confused with the coefficient of variation, which is the
relative standard deviation.)
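To make these definitions concrete, the short sketch below computes them for a hypothetical set of replicate measurements (the numbers are illustrative only, not data from the text):

    # Descriptive statistics for a small set of replicate measurements.
    import statistics

    data = [10.12, 10.15, 10.09, 10.11, 10.18]   # hypothetical replicate results

    mean = statistics.mean(data)                  # sum of values / number of values
    median = statistics.median(data)              # central value of the sorted data
    data_range = max(data) - min(data)            # spread between largest and smallest value
    s = statistics.stdev(data)                    # sample standard deviation
    variance = statistics.variance(data)          # square of the standard deviation

    print(f"mean = {mean:.3f}, median = {median:.3f}, range = {data_range:.3f}")
    print(f"s = {s:.4f}, s^2 = {variance:.6f}")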
Q. Explain the term Normal distribution.
Ans. For many different kinds of measurements, if the only errors which are involved are random
(indeterminate) errors, those measurements obey a normal, or Gaussian, distribution. Let's take that
statement apart to see what it is getting at. If the only errors are random ones, the measurements
should be distributed about the mean in such a manner that more measurements lie close to the mean
than lie far away from it. (Of course the question of what is meant by close and far depends on how
large the random errors in a given experiment are.) If you were to plot the distribution of a large
number of measurements, you would find that they look like what you may know as a "bell shaped
curve". This curve, the normal or Gaussian curve, is given mathematically by the expression:

2
(BCHET-141)

As an example, consider two Gaussian curves, both with a mean of 11.237, but one with a standard
deviation of 0.034 and the other of 0.068: the curve with the larger standard deviation is the wider, flatter
one. The mean is the x value corresponding to the maximum of the curve.
Notice that as the curve gets wider, it also gets shorter. The Gaussian curve represents the whole
sample, so as a greater fraction of the data lies further from the mean (larger standard deviation), a
smaller fraction can lie close to the mean.
A histogram, a particular form of bar graph, shows how many data in a data set fall in the various
subdivisions of the total range of the data. This, in turn, gives us a better idea of how the data are
distributed.

The standard deviation is a measure of the sharpness of the curve; as σ increases the curve becomes
flatter and as σ decreases the curve becomes sharper. Approximately 68% of all values lie within ±σ of
the mean, 95.4% within ±2σ, and 99.7% within ±3σ.
It can be proven that if a measurement is subject to many small sources of random error and
negligible systematic error, the values will be distributed around the mean, which is the best estimate
of the true value. Additionally, as the number of trials increases, the range of uncertainty decreases as
the square root of the number of data points, N. The quantity σ/√N is called the standard
deviation of the mean (SDOM), or the standard error, associated with N measurements as stipulated
above.
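These percentages, and the way the standard error shrinks with the number of measurements, can be verified numerically. The short sketch below is only an illustration (the mean, standard deviation and N are hypothetical values, not data from the text):

    # Fraction of a normal distribution lying within +/- k sigma of the mean,
    # and the standard deviation of the mean (SDOM) for N measurements.
    from math import erf, sqrt

    mu, sigma, N = 11.237, 0.034, 25          # hypothetical mean, sigma and number of trials

    for k in (1, 2, 3):
        fraction = erf(k / sqrt(2))           # P(|x - mu| < k*sigma) for a Gaussian
        print(f"within +/-{k} sigma: {fraction * 100:.1f} %")

    print(f"SDOM = sigma/sqrt(N) = {sigma / sqrt(N):.4f}")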
Thus far we have discussed the normal or Gaussian distribution in general terms and have made an
effort to convey the meaning of the probability distribution function as it applies to experimental
measurement. At this point it is appropriate to put these concepts into more exact terms.
Gaussian Distribution – Quantitative.
We begin by defining the quantities used to describe a measurement and show how they are related
to one another and to the Gaussian distribution. In the course of what follows it will be assumed that
systematic errors have been reduced to the point that they are smaller than the random errors and
thus may be ignored. Systematic errors, because of their hidden nature, frequently go unnoticed. An
experimental result that is wrong for no apparent reason may be due to a systematic error. Finding a
systematic error is aided by checking the calibration of all instruments and by a careful examination of
the experimental technique employed. Our primary focus will be on random errors and how to
interpret their influence on the experimental results obtained. Systematic errors will be secondary, but
you should consider them in your error analysis.
Random errors are small errors that move in both directions and have a tendency to cancel one
another, which makes them tractable using statistical analysis. Experience and intuition suggest that
a better value of an unknown quantity can be obtained by measuring it several times and taking the
average value as an estimate of the actual or so-called true value. Systematic errors may not be
analyzed statistically. Systematic and random errors have been defined in an intuitive way and both
must be considered in a well-designed experiment. To further complicate the situation, systematic
and random errors occur together, and the job of the experimenter is to recognize them and reduce the
systematic errors to the point that they are consistent with the degree of precision the experimenter is
attempting to obtain.
Q. Explain the term F-Test.
Ans. An “F Test” is a catch-all term for any test that uses the F-distribution. In most cases, when
people talk about the F-Test, what they are actually talking about is The F-Test to Compare Two
Variances. However, the f-statistic is used in a variety of tests including regression analysis, the Chow
test and the Scheffe Test (a post-hoc ANOVA test).
General Steps for an F Test
If you’re running an F Test, you should use Excel, SPSS, Minitab or some other kind of technology to
run the test. Why? Calculating the F test by hand, including variances, is tedious and time-
consuming. Therefore you’ll probably make some errors along the way.
If you’re running an F Test using technology (for example, an F Test two sample for variances in
Excel), the only steps you really need to do are Step 1 and 4 (dealing with the null hypothesis).
Technology will calculate Steps 2 and 3 for you.
● State the null hypothesis and the alternate hypothesis.
● Calculate the F value. The F value is calculated using the formula
F = [(SSE1 – SSE2)/m] / [SSE2/(n – k)], where SSE = residual sum of squares, m = number of
restrictions, and k = number of independent variables.
● Find the F statistic (the critical value for this test). The F statistic formula is:
F statistic = variance of the group means / mean of the within-group variances.
● You can find the critical value in the F-table.
● Support or Reject the Null Hypothesis.
Assumptions: Several assumptions are made for the test. Your population must be approximately
normally distributed (i.e. fit the shape of a bell curve) in order to use the test. Plus, the samples must
be independent events. In addition, you’ll want to bear in mind a few important points:
● The larger variance should always go in the numerator (the top number) to force the test into a right-
tailed test. Right-tailed tests are easier to calculate.
● For two-tailed tests, divide alpha by 2 before finding the right critical value.
● If you are given standard deviations, they must be squared to get the variances.
● If your degrees of freedom aren’t listed in the F Table, use the larger critical value. This helps
to avoid the possibility of Type I errors.
Warning: F tests can get really tedious to calculate by hand, especially if you have to calculate the
variances. You're much better off using technology (such as Excel).
These are the general steps to follow; a specific worked example is given below.
Step 1: If you are given standard deviations, go to Step 2. If you are given variances to compare, go to
Step 3.
Step 2: Square both standard deviations to get the variances. For example, if σ1 = 9.6 and σ2 = 10.9,
then the variances (s1 and s2) would be 9.6² = 92.16 and 10.9² = 118.81.
Step 3: Take the largest variance, and divide it by the smallest variance to get the f-value. For
example, if your two variances were s1 = 2.5 and s2 = 9.4, divide 9.4 / 2.5 = 3.76.
Why? Placing the largest variance on top will force the F-test into a right tailed test, which is much
easier to calculate than a left-tailed test.
Step 4: Find your degrees of freedom. Degrees of freedom is your sample size minus 1. As you have
two samples (variance 1 and variance 2), you’ll have two degrees of freedom: one for the numerator
and one for the denominator.
Step 5: Look at the f-value you calculated in Step 3 in the f-table. Note that there are several tables, so
you’ll need to locate the right table for your alpha level. Unsure how to read an f-table? Read What is
an f-table?
Step 6: Compare your calculated value (Step 3) with the table f-value in Step 5. If the f-table value is
smaller than the calculated value, you can reject the null hypothesis.
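As recommended above, the calculation is easiest with technology. The sketch below is a minimal illustration of the two-variance F test using SciPy (the sample sizes and significance level are assumed for the example; SciPy's F distribution replaces the printed F-table):

    # Two-variance F test: larger variance in the numerator gives a right-tailed test.
    from scipy import stats

    var1, var2 = 2.5, 9.4          # the two sample variances from Step 3 (hypothetical)
    n1, n2 = 11, 11                # assumed sample sizes
    alpha = 0.05                   # assumed significance level

    f_calc = max(var1, var2) / min(var1, var2)
    dfn = (n2 if var2 > var1 else n1) - 1      # degrees of freedom of the numerator sample
    dfd = (n1 if var2 > var1 else n2) - 1      # degrees of freedom of the denominator sample

    f_crit = stats.f.ppf(1 - alpha, dfn, dfd)  # right-tail critical value (the "table" value)
    p_value = stats.f.sf(f_calc, dfn, dfd)

    print(f"F = {f_calc:.2f}, critical F = {f_crit:.2f}, p = {p_value:.3f}")
    print("Reject H0" if f_calc > f_crit else "Fail to reject H0")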
Q. What is the principle of solvent extraction, and what factors affect it?
Ans. Solvent extraction is a separation method in which a solution (usually aqueous) is brought into
contact with a second solvent (usually organic), immiscible with the first, so that the solute passes into
the second solvent.
This occurs through the partitioning process, which involves the distribution of a solute between two
immiscible liquid phases. This technique is called solvent extraction and is also known as liquid-liquid
extraction.
Principle of solvent extraction:
When the solute (liquid or solid) is added to a heterogeneous system of two immiscible liquids (in both
of which the solute is soluble), the solute distributes itself between the two liquids. This distribution is
governed by the Nernst distribution law.
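As a brief numerical illustration (a minimal sketch; the distribution coefficient and phase volumes are hypothetical, and the standard consequence of the Nernst distribution law, fraction remaining = [V_aq/(V_aq + K_D·V_org)]^n, is assumed):

    # Fraction of solute left in the aqueous phase after n successive extractions
    # with fresh organic solvent, assuming a constant distribution coefficient KD.
    def fraction_remaining(KD, V_aq, V_org, n):
        return (V_aq / (V_aq + KD * V_org)) ** n

    KD = 5.0        # hypothetical distribution coefficient, [A]org/[A]aq
    V_aq = 100.0    # volume of the aqueous phase, mL
    V_org = 50.0    # volume of fresh organic solvent used per extraction, mL

    for n in (1, 2, 3):
        left = fraction_remaining(KD, V_aq, V_org, n)
        print(f"{n} extraction(s): {left:.3f} of the solute remains in the aqueous phase")

Repeating the extraction with fresh portions of solvent rapidly reduces the fraction of solute left in the aqueous phase, which is why extractions are usually performed in several steps.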
Factors affecting solvent extraction:
1. Salting-out agents: The extraction of metals may be enhanced by adding high concentrations of
inorganic salts to the aqueous phase. This is known as the salting-out effect.
2. pH value: The liquid-extraction process is largely influenced by pH. By working at different pH
values, two metals can be separated.
3. Oxidation state: The selectivity of the extraction may sometimes be increased by modifying the
oxidation state of the metal.
For example, extraction of Fe from a chloride solution can be prevented by reducing Fe(III) to Fe(II), which
does not extract.
4. Masking agents: These are metal-complexing agents that prevent particular metals from
taking part in their usual reactions; in this way they remove their interference without requiring
an actual separation.
For example, aluminium can be extracted in the presence of iron with 8-quinolinol by masking the iron
with an alkali cyanide to form the stable ferrocyanide ion. Cyanide, tartrate and EDTA are commonly used
as masking agents.
5. Modifiers: Modifiers are additives to the organic phase that increase the solubility of the extractant
in the diluent. High-molecular-weight alcohols are examples of modifiers.
6. Synergistic agents: Synergistic agents are added to the organic phase in order to enhance the
extraction. They form complexes, which are taken up by the extractant. Neutral organophosphorus
compounds added to an acidic organophosphorus compound exert a synergistic effect.

Q. Explain the term chelation.
Ans. Chelation involves the formation or presence of two or more separate coordinate bonds between a
polydentate (multiply bonded) ligand and a single central atom. These ligands are called chelants,
chelators, chelating agents, or sequestering agents. They are usually organic compounds, but this is
not a necessity, as in the case of zinc and its use as a maintenance therapy to prevent the absorption of
copper in people with Wilson's disease.
Chelation is useful in applications such as providing nutritional supplements, in chelation therapy to
remove toxic metals from the body, as contrast agents in MRI scanning, in manufacturing using
homogeneous catalysts, in chemical water treatment to assist in the removal of metals, and in
fertilizers.
Numerous biomolecules exhibit the ability to dissolve certain metal cations. Thus, proteins,
polysaccharides, and polynucleic acids are excellent polydentate ligands for many metal ions.
Organic compounds such as the amino acids glutamic acid and histidine, organic diacids such as
malate, and polypeptides such as phytochelatin are also typical chelators. In addition to these
adventitious chelators, several biomolecules are specifically produced to bind certain metals.
In biochemistry and microbiology: Virtually all metalloenzymes feature metals that are chelated,
usually to peptides or cofactors and prosthetic groups. Such chelating agents include the porphyrin
rings in hemoglobin and chlorophyll. Many microbial species produce water-soluble pigments that
serve as chelating agents, termed siderophores. For example, species of Pseudomonas are known to
secrete pyochelin and pyoverdine that bind iron. Enterobactin, produced by E. coli, is the strongest
chelating agent known. The marine mussels use metal chelation esp. Fe3+ chelation with the Dopa
residues in mussel foot protein-1 to improve the strength of the threads that they use to secure
themselves to surfaces.
In geology: In earth science, chemical weathering is attributed to organic chelating agents (e.g.,
peptides and sugars) that extract metal ions from minerals and rocks. Most metal complexes in the
environment and in nature are bound in some form of chelate ring (e.g., with a humic acid or a
protein). Thus, metal chelates are relevant to the mobilization of metals in the soil, the uptake and the
accumulation of metals into plants and microorganisms. Selective chelation of heavy metals is
relevant to bioremediation (e.g., removal of 137Cs from radioactive waste).
Nutritional supplements: In the 1960s, scientists developed the concept of chelating a metal ion prior
to feeding the element to the animal. They believed that this would create a neutral compound,
protecting the mineral from being complexed with insoluble salts within the stomach, which would
render the metal unavailable for absorption. Amino acids, being effective metal binders, were chosen
as the prospective ligands, and research was conducted on the metal–amino acid combinations. The
research supported that the metal–amino acid chelates were able to enhance mineral absorption.
During this period, synthetic chelates such as ethylenediaminetetraacetic acid (EDTA) were being
developed. These applied the same concept of chelation and did create chelated compounds; but
these synthetics were too stable and not nutritionally viable. If the mineral was taken from the EDTA
ligand, the ligand could not be used by the body and would be expelled. During the expulsion
process the EDTA ligand randomly chelated and stripped another mineral from the body.
According to the Association of American Feed Control Officials (AAFCO), a metal–amino acid
chelate is defined as the product resulting from the reaction of metal ions from a soluble metal salt
with amino acids, with a mole ratio in the range of 1–3 (preferably 2) moles of amino acids for one
mole of metal. The average weight of the hydrolyzed amino acids must be
approximately 150 and the resulting molecular weight of the chelate must not exceed 800 Da.
Since the early development of these compounds, much more research has been conducted, and has
been applied to human nutrition products in a similar manner to the animal nutrition experiments
that pioneered the technology. Ferrous bis-glycinate is an example of one of these compounds that
has been developed for human nutrition.
Dental and oral application
First-generation dentin adhesives were first designed and produced in the 1950s. These systems were
based on a co-monomer chelate with calcium on the surface of the tooth and generated very weak
water resistant chemical bonding (2–3 MPa).
Q. Describe the methods of extraction.
Ans. Extraction methods
Maceration: The whole powdered material is allowed to remain in contact with the solvent in a
stoppered container for a particular time period, with frequent agitation. At the
end of the process the solvent is drained off and the remaining miscella is removed from the plant
material by pressing or centrifuging. Maceration is not an advanced technique, since the active
ingredients cannot be totally extracted.
Percolation
A percolator, a narrow cone-shaped vessel open at both ends, is used for this technique. The
plant material is moistened with the solvent and placed in the percolation chamber. The
plant material is then rinsed with the solvent several times until the active ingredient is extracted. The
solvent can be used until its point of saturation.
Soxhlet extraction: This method is widely used when the desired compound has a limited solubility
in the particular solvent and the impurities are less soluble in the solvent.
The finely ground sample is placed in a porous bag or "thimble" made of filter paper or
cellulose. The solvent with which the desired compounds are to be extracted is kept in the round-
bottom flask.
Supercritical fluid extraction: Supercritical fluids such as carbon dioxide, nitrogen, methane, ethane,
ethylene, nitrous oxide, sulfur dioxide, propane, propylene, ammonia and sulfur hexafluoride are
used to extract active ingredients. The plant material is kept in a vessel which is filled with the gas under
controlled conditions of temperature and pressure. The active ingredients dissolved in the
gas separate out when the temperature and pressure are lowered. The important factor in this technique is
the mass transfer of the solute in the supercritical solvent. Generally, temperature and pressure have
the biggest influence; however, the effect of pressure is more direct.
As the pressure increases, higher densities are achieved by the supercritical fluid. Thus the density of
the medium increases and the solubility of the solute is increased. In order to get higher yields
the process has to be optimized; using response surface methodology the optimum parameters can be
found.
Microwave assisted extraction: In this method microwave energy facilitates the separation of
active ingredients from the plant material into the solvent. Microwaves possess electric and magnetic
fields which are perpendicular to each other. The electric field generates heat via dipolar rotation and
ionic conduction; the higher the dielectric constant of the solvent, the faster the resulting heating. Unlike
the classical methods, microwave assisted extraction heats the whole sample simultaneously. During
the extraction, heat disrupts weak hydrogen bonds through the dipole rotation of molecules, and the
migration of dissolved ions increases the penetration of the solvent into the sample or matrix.
Ultrasound assisted extraction: This is an advanced technique with the capability of extracting
large amounts of bioactive compounds within a shorter extraction time. The main advantage of this
technique is the increased penetration of the solvent into the matrix due to the disruption of cell walls
produced by acoustic cavitation. This can also be achieved at low temperatures, and hence the method is
more suitable for the extraction of thermally unstable compounds.
Accelerated solvent extraction
In the accelerated solvent extraction technique, solvents are used at elevated temperatures and pressures
to keep the solvent in liquid form during the extraction process. Due to the elevated temperature the
capacity of the solvent to solubilize the analytes increases and thus the diffusion rate increases.
Further, the higher temperature reduces the viscosity, and the solvent can easily penetrate the pores of the
matrix. The pressurized solvent enables closer contact between the solvent and the analytes. Moreover,
this method uses less time and a smaller amount of solvent for the extraction of active ingredients.
The advantages of this method are extraction of 1–100 g samples in minutes, a dramatic reduction in
solvent use, a wide range of applications, and the ability to handle acidic and alkaline matrices.

Q. What are the applications of chromatography?


Ans. Pharmaceutical sector
● To identify and analyze samples for the presence of trace elements or chemicals.
● Separation of compounds based on their molecular weight and element composition.
● Detects the unknown compounds and purity of mixture.
● In drug development.
Chemical industry
● In testing water samples and also checks air quality.
● HPLC and GC are very much used for detecting various contaminants such as
polychlorinated biphenyl (PCBs) in pesticides and oils.
● In various life sciences applications
Food Industry
● In food spoilage and additive detection
● Determining the nutritional quality of food
Forensic Science
● In forensic pathology and crime scene testing, such as analyzing blood and hair samples from the
crime scene.
Molecular Biology Studies
● Various hyphenated techniques in chromatography such as EC-LC-MS are applied in the
study of metabolomics and proteomics along with nucleic acid research.
● HPLC is used in Protein Separation like Insulin Purification, Plasma Fractionation, and
Enzyme Purification and also in various departments like Fuel Industry, biotechnology, and
biochemical processes.
Chromatography is a laboratory technique for the separation of a mixture. The mixture is dissolved in
a fluid (gas, solvent, water, ...) called the mobile phase, which carries it through a system (a column, a
capillary tube, a plate, or a sheet) on which is fixed a material called the stationary phase. The
different constituents of the mixture have different affinities for the stationary phase.
The different molecules stay longer or shorter on the stationary phase, depending on their
interactions with its surface sites. So, they travel at different apparent velocities in the mobile fluid,
causing them to separate. The separation is based on the differential partitioning between the mobile
and the stationary phases. Subtle differences in a compound's partition coefficient result in
differential retention on the stationary phase and thus affect the separation.
Chromatography may be preparative or analytical. The purpose of preparative chromatography is to
separate the components of a mixture for later use, and is thus a form of purification. Analytical
chromatography is done normally with smaller amounts of material and is for establishing the
presence or measuring the relative proportions of analytes in a mixture. The two are not mutually
exclusive.
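To illustrate how differences in the partition coefficient translate into differential retention, the sketch below is a minimal illustration only: the partition coefficients, the phase-volume ratio and the column dead time are hypothetical, and the standard relations k = K·(Vs/Vm) and tR = tM·(1 + k) are assumed rather than taken from the text:

    # How two different partition coefficients lead to two different retention times.
    beta = 0.1      # assumed phase-volume ratio Vs/Vm (stationary/mobile)
    t_m = 1.5       # assumed dead time of the column, min (unretained solute)

    partition_coefficients = {"compound A": 20.0, "compound B": 35.0}   # hypothetical K values

    for name, K in partition_coefficients.items():
        k = K * beta            # retention factor
        t_r = t_m * (1 + k)     # retention time
        print(f"{name}: k = {k:.1f}, tR = {t_r:.2f} min")

The compound that partitions more strongly into the stationary phase (larger K) is retained longer and so elutes later, which is the basis of the separation described above.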
Q. What is adsorption chromatography? Explain.
Ans. Adsorption chromatography is one of the oldest chromatographic techniques. It makes use of a
mobile phase which is either in liquid or gaseous form. The components carried by the mobile phase are
adsorbed onto the surface of a stationary solid phase.
Adsorption Chromatography Principle: Adsorption Chromatography involves the analytical
separation of a chemical mixture based on the interaction of the adsorbate with the adsorbent. The
mixture of gas or liquid gets separated when it passes over the adsorbent bed that adsorbs different
compounds at different rates.
Adsorbent – A substance which is generally porous in nature with a high surface area to adsorb
substances on its surface by intermolecular forces is called adsorbent. Some commonly used
adsorbents are Silica gel H, silica gel G, silica gel N, silica gel S, hydrated gel silica, cellulose
microcrystalline, alumina, modified silica gel, etc.
Adsorption Chromatography Procedure: Before starting with the adsorption chromatography
Experiment let us understand the two types of phases and the types of forces involved during the
mixture separation process.
Stationary phase – Adsorbent is the stationary phase in adsorption chromatography. The forces
involved help to remove solutes from the adsorbent so that they can move with the mobile phase.
Mobile phase – Either a liquid or a gas is used as a mobile phase in adsorption chromatography.
Forces involved help to remove solutes from the adsorbent so that they can move with the mobile
phase. When a liquid is used as a mobile phase it is called LSC (Liquid-Solid Chromatography). When
a gas is used as a mobile phase it is called GSC (Gas-Solid Chromatography).
Adsorption Chromatography Experiment (TLC)
● Take a clean and dry chromatographic jar.
● To make sure that the environment in the jar is saturated with solvent vapours, a paper
soaked in the mobile phase is applied to the walls.
● Add the mobile phase to the jar and close it.
● Maintain equilibrium
● Mark the baseline on the adsorbent.
● Apply sample to TLC plate with the help of a capillary tube and allow it to dry.
● Put the plates in the jar and close it.
● Wait until the solvent moves from the baseline.
● Take out the TLC plate and dry it; the Rf value of each spot can then be calculated (see the sketch
after this list).
Adsorption Chromatography Applications:
● Adsorption chromatography is used for separation of amino acids.
● It is used in the isolation of antibiotics.
● It is used in the identification of carbohydrates.
● It is used to separate and identify fats and fatty acids.
● It is used to isolate and determine the peptides and proteins.
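The spots obtained in the TLC experiment above are usually characterised by their retardation factor, Rf, the distance moved by the spot divided by the distance moved by the solvent front (both measured from the baseline). The sketch below uses hypothetical distances:

    # Retardation factor (Rf) of each spot on a TLC plate.
    solvent_front = 8.0                              # distance moved by the solvent front, cm
    spots = {"compound A": 2.4, "compound B": 5.6}   # distance moved by each spot, cm (hypothetical)

    for name, distance in spots.items():
        rf = distance / solvent_front                # Rf always lies between 0 and 1
        print(f"{name}: Rf = {rf:.2f}")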

Types of Adsorption Chromatography:


● Thin Layer Chromatography – It is a chromatography technique where the mobile phase
moves over an adsorbent. The adsorbent is a thin layer which is applied to a solid support for
the separation of components. The separation takes place through differential migration
which occurs when the solvent moves along the powder spread on the glass plates.
● Mobile phase – This phase in TLC can be either a single liquid or a mixture of liquids. Some
commonly used liquids are ethanol, acetone, methanol and chloroform.
● Stationary phase – Adsorbents.
● Column chromatography – the technique in which the solutes of a solution are allowed to
travel down a column, where the individual components are adsorbed by the stationary
phase. Based on their affinity for the adsorbent, the components take up positions in the column;
the most strongly adsorbed component is seen at the top of the column.
● Gas-Solid chromatography – The principle of separation in GSC is adsorption. It is used for
solutes which have less solubility in the stationary phase. This type of chromatography
technique has a very limited number of stationary phases available and therefore GSC is not
used widely.


BCHET-141: Sampling and Error in Chemical Analysis


Guess Paper-II

Q. Explain the mechanism of separation.


Ans. Chromatographic Mechanisms: Chromatographic techniques are based on four different
sorption mechanisms: surface adsorption, partition, ion exchange and size exclusion.
Surface Adsorption Chromatography: The separation mechanism depends upon differences in
polarity between the different feed components. The more polar a molecule, the more strongly it will
be adsorbed by a polar stationary phase. Similarly, the more non-polar a molecule, the more strongly
it will be adsorbed by non-polar stationary phase. During a surface adsorption chromatography
process, there is competition for stationary phase adsorption sites, between the materials to be
separated and the mobile phase. Feed molecules of low polarity spend proportionally more time in
the mobile phase than those molecules that are highly polar, which are retained longer. Therefore the
components of a mixture are eluted in order of increasing polarity. Almost any polar solid can be
employed as a polar stationary phase. The choice of stationary phase is governed by the polarity of
the feed components. If the feed components are adsorbed too strongly, they may be difficult to
remove. Weakly polar mixtures should be separated on highly active adsorbents, or little or no
separation will occur. The choice of mobile phase is equally important. The polarity of the mobile
phase should be chosen to complement the choice of stationary phase. In general, good separation is
achieved by using fairly polar stationary phases and low-polarity mobile phases such as hexane.
Water, it should be noted, is a very polar solvent.
The two most common adsorbents used in chromatography are porous alumina and porous silica gel. Of
lesser importance are carbon, magnesium oxide, and various carbonates. Alumina is a polar
adsorbent and is preferred for the separation of components that are weakly or moderately polar,
with the more polar components retained more selectively by the adsorbent, and therefore eluted
from the column last. In addition, alumina is a basic adsorbent, thus preferentially retaining acidic
compounds. Silica gel is less polar than alumina and is an acidic adsorbent, thus preferentially
retaining basic compounds. Carbon is a non-polar (apolar) stationary phase with the highest
attraction for larger non-polar molecules. Adsorbent-type sorbents are better suited for the separation
of a mixture on the basis of chemical type (e.g. olefins, esters, acids, aldehydes, alcohols) than for
separation of individual members of a homologous series. Partition chromatography is often
preferred for the latter, wherein an inert solid (often silica gel) is coated with a liquid phase.
Hydrophobic interaction chromatography (HIC) is a special form of surface adsorption
chromatography. The materials to be separated should be at least partially hydrophobic in nature.
Separation is facilitated by differences in the relative strength of interaction between these materials
and a matrix substituted with suitably hydrophobic groups. This type of process is extensively used
for the preparative-scale separation of proteins.
Partition Chromatography: Unique to chromatography are liquid-supported or liquid-bonded
solid phases, where the mechanism is absorption into the liquid; this is referred to as the partition mode of
separation, or partition chromatography. With mobile liquid phases, there is a tendency for the
stationary liquid phase to be stripped or dissolved. Therefore, the stationary liquid phase has to be
chemically bonded to the solid bonding support. In partition chromatography, the stationary liquid
phase is coated onto a solid support such as silica gel, cellulose powder, or kieselguhr (hydrated
silica). Assuming that there is no adsorption by the solid support, the feed components move through
the system at rates determined by their relative solubilities in the stationary and mobile phases. In
general, it is not necessary for the stationary and mobile phases to be totally immiscible, but a low
degree of mutual solubility is desirable. Hydrophilic stationary phase liquids are generally used in
conjunction with hydrophobic mobile phases (referred to as "normal-phase chromatography"), or vice
versa (referred to as "reverse-phase chromatography").
Suitable hydrophilic mobile phases include water, aqueous buffers and alcohols. Hydrophobic mobile
phases include hydrocarbons in combination with ethers, esters and chlorinated solvents.
Ion Exchange Chromatography (IEC): In this process, the stationary phase consists of an insoluble
porous resinous material containing fixed charge-carrying groups. Counter-ions of opposite charge
are loosely complexed with these groups. Passage of a liquid mobile phase, containing ionised or
partially ionised molecules of the same charge as the counter-ions through the system, results in the
reversible exchange of these ions.
The degree of affinity between the stationary phase and feed ions dictates the rate of migration and
hence degree of separation between the different solute species. The most widely used type of
stationary phase is a synthetic copolymer of styrene and divinyl benzene (DVB), produced as very
small beads in the micrometer range. Careful control over the amount of DVB added dictates the
degree of cross-linking and hence the porosity of the resinous structure. Resins with a low degree of
cross-linking have large pores that allow the diffusion of large ions into the resin beads and facilitate
rapid ion exchange. Highly cross-linked resins have pores of sizes similar to those of small ions. The
choice of a particular resin will very much be dependent upon a given application. Cation (+) or anion
(-) exchange properties can be introduced by chemical modification of the resin. Ion exchange
chromatography has found widespread uses in industrial processes. This technique is used in the
separation of transition metals, the removal of trace metals from industrial effluents and in the
purification of a wide range of organic compounds and pharmaceuticals. The resin matrix is usually
relatively inexpensive when compared with other types of stationary phase. Ion exchange
chromatography is probably the most widely used large-scale chromatographic process, but is limited
to ionisable, water soluble molecules.
Size Exclusion Chromatography (SEC): In this process, also known as gel permeation
chromatography, molecules of a feed material are separated according to their size or molecular
weight. The stationary phase consists of a porous cross-linked polymeric gel. The pores of the gel
vary in size and shape such that large molecules tend to be excluded by the smaller pores and move
preferentially with the mobile phase. The smaller molecules are able to diffuse into and out of the
smaller pores and will thus be retarded in the system.
The very smallest molecules will permeate the gel pores to the greatest extent and will thus be most
retarded by the system. The components of a mixture therefore elute in order of decreasing size or
molecular weight. The stationary phase gels can either be hydrophilic for separations in aqueous or
polar solvents, or hydrophobic for use with non-polar or weakly-polar solvents. Sephadex, a cross-
linked polysaccharide material available in bead form, is widely used with polar/hydrophilic mobile
phases. The degree of cross-linking can be varied to produce beads with a range of pore sizes to
fractionate samples over different molecular weight ranges. Hydrophobic gels are made by cross-
linking polystyrene with DVB and are therefore similar to ion exchange resins but without the ionic
groups. SEC is used extensively in the biochemical industry to remove small molecules and inorganic
salts from valuable higher molecular weight products such as peptides, proteins and enzymes.
Q. How Does Ion Exchange Chromatography Work?
Ans. Ion exchange (IEX) chromatography is a technique that is commonly used in biomolecule
purification. It involves the separation of molecules on the basis of their charge. This technique
exploits the interaction between charged molecules in a sample and oppositely charged moieties in
the stationary phase of the chromatography matrix. This type of separation is difficult using other
techniques, as charge is easily manipulated through the pH of the buffer used. Two types of ion exchange
separation are possible – cation exchange and anion exchange. In anion exchange the stationary phase
is positively charged, whilst in cation exchange it is negatively charged.
Principle of Ion Exchange Chromatography: IEX chromatography is used in the separation of
charged biomolecules. The crude sample containing charged molecules is used as the liquid phase.
When it passes through the chromatographic column, molecules bind to oppositely charged sites in
the stationary phase. The molecules separated on the basis of their charge are eluted using a solution
of varying ionic strength. By passing such a solution through the column, highly selective separation
of molecules according to their different charges takes place.
The Technique: Key steps in the ion exchange chromatography procedure are listed below:
● An impure protein sample is loaded into the ion exchange chromatography column at a
particular pH.
● Charged proteins will bind to the oppositely charged functional groups in the resin
● A salt gradient is used to elute separated proteins. At low salt concentrations, proteins having
few charged groups are eluted and at higher salt concentrations, proteins with several
charged groups are eluted.
● Unwanted proteins and impurities are removed by washing the column.
A pH gradient can also be applied to elute individual proteins on the basis of their isoelectric point
(pI), i.e. the pH at which the protein carries no net charge and hence does not migrate
in an electric field. As amino acids are zwitterionic compounds, they contain groups having both
positive and negative charges. Based on the pH of the environment, proteins carry a positive,
negative, or nil charge. At their isoelectric point, they will not interact with the charged moieties in
the column resin and hence are eluted. A decreasing pH gradient can be used to elute proteins from
an anion exchange resin, and an increasing pH gradient can be used to elute proteins from cation
exchange resins. This is because increasing the buffer pH of the mobile phase causes the protein to
become less protonated (less positively charged) so it cannot form an ionic interaction with the
negatively charged resin, allowing its elution. Conversely, lowering the pH of the mobile phase will
cause the molecule to become more protonated (less negatively charged), allowing its elution.
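As a simple illustration of this pI argument (a minimal sketch; the isoelectric points and the working pH are hypothetical), the sign of a protein's net charge relative to its pI suggests which exchanger it will bind to:

    # Choosing an ion exchanger from a protein's isoelectric point (pI).
    # Below its pI a protein carries a net positive charge (binds a cation exchanger);
    # above its pI it carries a net negative charge (binds an anion exchanger).
    def suggested_exchanger(pI, buffer_pH):
        if buffer_pH < pI:
            return "cation exchanger (protein is net positive)"
        if buffer_pH > pI:
            return "anion exchanger (protein is net negative)"
        return "no net charge: the protein does not bind and elutes"

    working_pH = 7.4                       # hypothetical buffer pH
    for pI in (4.8, 7.0, 9.2):             # hypothetical isoelectric points
        print(f"pI {pI} at pH {working_pH}:", suggested_exchanger(pI, working_pH))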
Resin Selection in Ion Exchange Chromatography: Ion exchange resins have positively or negatively
charged functional groups covalently linked to a solid matrix. Matrices are usually made of cellulose,
polystyrene, agarose, and polyacrylamide. Some of the factors affecting resin choice are anion or
cation exchanger, flow rate, weak or strong ion exchanger, particle size of the resin, and binding
capacity. The stability of the protein of interest dictates the selection of an anion or a cation exchanger
– either exchanger may be used if the stability is of no concern.
The Applications of Ion Exchange Chromatography: Ion exchange is the most widely used
chromatographic method for the separation and purification of charged biomolecules such as
polypeptides, proteins, polynucleotides, and nucleic acids. Its widespread applicability, high capacity
and simplicity, and its high resolution are the key reasons for its success as a separation method. Ion
exchange chromatography is widely used in several industrial applications some of which are as
follows:
● Separation and Purification of blood components such as albumin, recombinant growth
factors and enzymes.
● Biotechnology - Analytical applications such as quality control and process monitoring
● Food and clinical research - to study wheat varieties and the correlation of proteinuria with
different renal diseases.
● Fermentation - Cation exchange resins are used to monitor the fermentation process during ß-
galactosidase production.
Q. Explain the methods of thermal analysis.
Ans. Methods of thermal analysis: Three groups of physical parameters to be measured (mass,
temperature or heat flux, and mechanical and other parameters) allow for a classification of the TA
methods. Such a classification scheme does not usually mention reaction calorimetry, for which, on the
other hand, a large variety of apparative solutions has been developed. Undoubtedly, it belongs to the
thermal methods of analysis. Reaction calorimetry is performed in much larger reaction vessels (e.g. glass
reactors establishing adiabatic conditions) than usual TA or calorimetry. It contributes primarily to the
upscaling of chemical processes to be practised in industry.
Finally, the thermometric titration and its potential for the study of biochemical systems have to be
mentioned here.
Conventional TA and simultaneous thermal analysis (STA)
Previously, doing classical or conventional TA meant following the temperature of a sample upon
heating and/or cooling, i.e. recording the dependence of the sample temperature on time, T = f(t). In the
field of heterogeneous equilibria, mostly of inorganic systems, this was rather easy to practise in student
lab classes, first for single-phase, then for binary systems. As an example, the system Sn–Pb was
comparably easy to record: mixtures of various compositions are prepared in porcelain crucibles, a
thermometer or a thermocouple is positioned in the mixture, which is then heated up to complete
melting, carefully homogenized and then left to cool freely. Measuring the temperature decrease allows
one to detect transition or arrest points which reflect the solidification behaviour of the given mixture.
This simple setup allows one to establish not-too-complicated phase diagrams containing eutectic and/or
dystectic points. Similar experimental setups were utilized by the mineralogists working in the nineteenth
century when studying the phase behaviour of minerals. Later on, TA transformed into differential
thermal analysis (DTA), primarily after the introduction of a thermally inert reference substance
subjected to the same heating or cooling program as the sample and, secondarily, by the development
of the difference measuring setup (W. C. Roberts-Austen 1899 [13]). Instead of T = f(t), one
now followed ΔT = f(T).
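To illustrate how such a cooling curve is read (a minimal sketch with synthetic data, not measurements from the text), the solidification arrest shows up as a plateau where the cooling rate dT/dt falls to zero:

    # Locate the thermal arrest (plateau) in a synthetic cooling curve.
    import numpy as np

    t = np.linspace(0, 600, 601)            # time, s
    T = 300 * np.exp(-t / 200) + 60         # smooth synthetic cooling curve, °C
    T[(T > 181) & (T < 185)] = 183          # crude plateau imitating solidification (~Sn–Pb eutectic)

    dTdt = np.gradient(T, t)                # cooling rate
    flat = np.abs(dTdt) < 1e-3              # points where the curve is (nearly) horizontal

    if flat.any():
        print(f"arrest at about {T[flat].mean():.0f} °C, "
              f"from t = {t[flat].min():.0f} s to t = {t[flat].max():.0f} s")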
Unlike the schematic representation, experimental DTA curves exhibit gradual changes
which strongly depend on the outer experimental conditions; this limits the recording of reproducible
data. Therefore, extrapolation procedures are recommended, which are especially important for the
determination of the beginning of thermal effects. The labelling of the characteristic temperatures is
laid down in international conventions.
An interpretation of DTA effects has to acknowledge that for first-order phase transitions, e.g. a
melting process, only the extrapolated onset temperature Ton,ex is relevant, not the peak temperature TP,
which is, from a physical point of view, only the point where heat supply and heat consumption are equal.
In the case of "real" chemical reactions, the peak maximum is also significant, as it represents the maximal
heat exchange of the reaction.

If more than one parameter is followed in one measuring device, we are dealing with simultaneous
thermal analysis, which is mostly realized in the combination DTA-TG. Simply put, a DTA
measuring cell is assembled onto a sensitive balance; this was realized at first with the
Derivatograph by the Hungarian Optical Works MOM. In the meantime, a large variety of
differently conceived devices for the combination DTA-TG is commercially available. The
differences in the apparatus design can have both advantages and disadvantages; it is the operator's
task to evaluate them and to make an appropriate choice (e.g. upward or downward flow of the
carrier gas, horizontal or vertical position of the sample holder, suspension or top-loading sample
crucibles).
Presumably each introductory text dealing with the practice of TA presents at least one of these two
substances with a measuring curve’s plot: calcium oxalate monohydrate, CaC2O4·H2O, or copper(II)-
sulphate pentahydrate, CuSO4·5 H2O. Indeed, these substances are very appropriate examples and,
moreover, are the most-recommended testing substances for the daily laboratory practice: their
thermal behaviour is well known, they practically do not change upon storage, and the TA curves
exhibit thermal effects with unequivocal shape and position. Therefore, further information about
these substances should be taken from well-introduced textbooks whereas two less-known
compounds, being, on the other hand, not less-informative, will be elucidated here. Additionally,
these two examples illustrate well the benefit of case-dependently designed measuring programs
[cyclic heating and cooling or gas changes and others] which allow for the determination of
important sample properties by measuring a single sample only.
Q. Explain the classification of electroanalytical methods.
Ans. Electroanalytical methods are a class of techniques in analytical chemistry which study an
analyte by measuring the potential (volts) and/or current (amperes) in an electrochemical cell
containing the analyte. These methods can be broken down into several categories depending on
which aspects of the cell are controlled and which are measured. The three main categories are
potentiometry (the difference in electrode potentials is measured), coulometry (the cell's current is
measured over time), and voltammetry (the cell's current is measured while actively altering the cell's
potential).
Potentiometry: Potentiometry passively measures the potential of a solution between two electrodes,
affecting the solution very little in the process. One electrode is called the reference electrode and has
a constant potential, while the other one is an indicator electrode whose potential changes with the
composition of the sample. Therefore, the difference of potential between the two electrodes gives an
assessment of the composition of the sample. In fact, since potentiometric measurement is a non-
destructive measurement, assuming that the electrode is in equilibrium with the solution, we are
measuring the potential of the solution. Potentiometry usually uses indicator electrodes made
selectively sensitive to the ion of interest, such as fluoride in fluoride-selective electrodes, so that the
potential depends solely on the activity of this ion of interest. The time that it takes the electrode to
establish equilibrium with the solution will affect the sensitivity or accuracy of the measurement. In
aquatic environments, platinum is often used due to its high electron-transfer kinetics, although an
electrode made from several metals can be used in order to enhance the electron-transfer kinetics. The
most common potentiometric electrode is by far the glass-membrane electrode used in a pH meter.
A variant of potentiometry is chronopotentiometry, which consists of applying a constant current and
measuring the potential as a function of time. It was initiated by Weber.
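To illustrate how the measured potential tracks the activity of the ion of interest (a minimal sketch; the standard Nernst equation is assumed rather than taken from the text, and the standard potential and activities are hypothetical), an ideal ion-selective electrode responds by about 59 mV per decade of activity at 25 °C for a singly charged ion:

    # Ideal Nernstian response of an ion-selective electrode at 25 °C.
    import math

    R, F, T = 8.314, 96485.0, 298.15        # gas constant, Faraday constant, temperature (K)

    def electrode_potential(E0, z, activity):
        # Potential (V) versus the reference electrode for an ion of charge z.
        return E0 + (R * T / (z * F)) * math.log(activity)

    E0 = 0.200                              # hypothetical standard potential, V
    for a in (1e-1, 1e-3, 1e-5):            # hypothetical fluoride activities (z = -1)
        print(f"a = {a:.0e}:  E = {electrode_potential(E0, -1, a) * 1000:.1f} mV")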
Coulometry: Coulometry uses applied current or potential to completely convert an analyte from one
oxidation state to another. In these experiments, the total current passed is measured directly or
indirectly to determine the number of electrons passed. Knowing the number of electrons passed can
indicate the concentration of the analyte or, when the concentration is known, the number of electrons
transferred in the redox reaction. Common forms of coulometry include bulk electrolysis, also known
as Potentiostatic coulometry or controlled potential coulometry, as well as a variety of coulometric
titrations.
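As a numerical illustration of this idea (a minimal sketch; the charge, electron number and solution volume are hypothetical), Faraday's law gives the amount of analyte converted as n = Q/(zF):

    # Amount and concentration of analyte from the total charge passed (Faraday's law).
    F = 96485.0        # charge per mole of electrons, C/mol

    Q = 12.5           # hypothetical total charge passed, C
    z = 2              # hypothetical number of electrons transferred per analyte species
    V = 0.050          # hypothetical solution volume, L

    moles = Q / (z * F)
    print(f"analyte converted: {moles * 1e6:.1f} micromol")
    print(f"corresponding concentration: {moles / V * 1e3:.3f} mmol/L")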
Voltammetry: Voltammetry applies a constant and/or varying potential at an electrode's surface and
measures the resulting current with a three electrode system. This method can reveal the reduction
potential of an analyte and its electrochemical reactivity. This method in practical terms is
nondestructive since only a very small amount of the analyte is consumed at the two-dimensional
surface of the working and auxiliary electrodes. In practice the analyte solution is usually disposed
of, since it is difficult to separate the analyte from the bulk electrolyte and the experiment requires only a
small amount of analyte. A normal experiment may involve 1–10 mL of solution with an analyte
concentration between 1 and 10 mmol/L. Chemically modified electrodes are employed for analysis of
organic and inorganic samples.
Polarography: Polarography is a subclass of voltammetry that uses a dropping mercury electrode as
the working electrode.
Amperometry: Amperometry is the term indicating the whole of electrochemical techniques in which
a current is measured as a function of an independent variable that is, typically, time or electrode
potential. Chronoamperometry is the technique in which the current is measured, at a fixed potential,
at different times since the start of polarisation. Chronoamperometry is typically carried out in
unstirred solution and at a fixed electrode, i.e., under experimental conditions that avoid convection as
a mode of mass transfer to the electrode. On the other hand, voltammetry is a subclass of amperometry,
in which the current is measured while varying the potential applied to the electrode. The different
voltammetric techniques are defined according to the waveform that describes how the potential is
varied as a function of time. Confusion has recently arisen about the correct use of many terms
specific to electrochemistry and electroanalysis, often owing to the spread of electroanalytical techniques
into fields where they are merely a tool rather than the core subject of the study. Though
electrochemists are pleased about this, they urge that the terms be used properly, in order to avoid
serious misunderstandings.
Q. What is Conductometric Titration? Explain.
Ans. Conductometric titration is a laboratory method of quantitative analysis used to identify the
concentration of a given analyte in a mixture. Conductometric titration involves the continuous
addition of a reactant to a reaction mixture and the documentation of the corresponding change in the
electrolytic conductivity of the reaction mixture. It can be noted that the electrical conductivity of an
electrolytic solution is dependent on the number of free ions in the solution and the charge
corresponding to each of these ions.
In this type of titration, upon the continuous addition of the titrant (and the continuous recording of
the corresponding change in electrolytic conductivity), a sudden change in the conductivity implies
that the stoichiometric point has been reached. The increase or decrease in the electrolytic
conductivity in the conductometric titration process is linked to the change in the concentration of the
hydroxyl and hydrogen ions (which are the two most conducting ions). The strength of an acid can be
determined via conductometric titration with a standard solution of a base. A curve plotted for such a titration shows two straight-line branches that intersect at the equivalence point. The method of conductometric titration is very useful for the titration of turbid suspensions or coloured solutions, since such titrations cannot be carried out with ordinary chemical indicators.
Principle: The principle of the conductometric titration process can be stated as follows – During a
titration process, one ion is replaced with another and the difference in the ionic conductivities of
these ions directly impacts the overall electrolytic conductivity of the solution.
It can also be observed that the ionic conductance values vary between cations and anions. Finally, the conductivity is also dependent upon the occurrence of a chemical reaction in the electrolytic solution.
Theory: The theory behind this type of titration states that the end-point corresponding to the
titration process can be determined by means of conductivity measurement. For a neutralization
reaction between an acid and a base, the addition of the base initially lowers the conductivity of the solution. This is because the highly mobile H+ ions are replaced by the less mobile cationic part of the base.
After the equivalence point is reached, the concentration of the ionic entities will increase. This, in
turn, increases the conductance of the solution. Therefore, two straight lines with opposite slopes will
be obtained when the conductance values are plotted graphically. The point where these two lines
intersect is the equivalence point.
Process
For the conductometric titration of an acid with a base, the general process is as follows:

● 10 ml of the acid must be diluted with approximately 100 ml of distilled water (so that the changes in the conductance brought on by the addition of the base remain small).
● A burette must now be filled with the base and the initial volume must be noted.
● In this step, a conductivity cell must be inserted into the diluted acid solution in a way that
both the electrodes are completely immersed.
● Now, the conductivity cell can be connected to a digital conductometer in order to obtain an
initial reading.
● The base must now be added drop wise into the acid solution. The volume of base added
must be noted along with the corresponding change in the conductance.
● A sharp increase in the conductance of the solution implies that the endpoint has been
reached. However, a few more readings must be taken after the endpoint of the titration.
● These observed values must now be plotted graphically. The equivalence point can be
obtained from the point of intersection between the two lines.
● The strength of the acid can now be calculated via the formula S2 = (V1S1)/10; where S2 is the
strength of the acid, V1 is the volume of base added (as per the equivalence point on the
conductometric titration graph), and S1 is the strength of the base (already known). Here, the
volume of the acid (V2) is equal to 10 ml.
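A minimal sketch of the data treatment implied by the steps above, assuming invented conductance readings: two straight lines are fitted to the branches of the titration curve, their intersection gives the equivalence volume V1, and the formula S2 = (V1S1)/10 then gives the acid strength.

```python
import numpy as np

# Hypothetical conductometric titration data: volume of base added (mL) vs. conductance (mS)
vol  = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
cond = np.array([8.0, 7.0, 6.1, 5.0, 4.1, 3.2, 4.4, 5.6, 6.8])

m1, c1 = np.polyfit(vol[vol <= 5.0], cond[vol <= 5.0], 1)   # falling branch (before endpoint)
m2, c2 = np.polyfit(vol[vol >= 5.0], cond[vol >= 5.0], 1)   # rising branch (after endpoint)

v1 = (c2 - c1) / (m1 - m2)        # intersection of the two lines = equivalence volume V1
s1 = 0.1                          # known strength of the base, S1 (in N)
s2 = v1 * s1 / 10.0               # S2 = (V1 * S1) / 10, with V2 = 10 mL of acid
print(f"V1 ≈ {v1:.2f} mL, acid strength S2 ≈ {s2:.3f} N")
```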
Q. Explain UV Visible Spectrometry.
Ans. Principle of UV Spectroscopy
● Basically, spectroscopy is related to the interaction of light with matter.
● As light is absorbed by matter, the result is an increase in the energy content of the atoms or
molecules.
● When ultraviolet radiations are absorbed, this results in the excitation of the electrons from
the ground state towards a higher energy state.
● Molecules containing π-electrons or non-bonding electrons (n-electrons) can absorb energy in
the form of ultraviolet light to excite these electrons to higher anti-bonding molecular orbitals.
● The more easily the electrons are excited, the longer the wavelength of light the molecule can absorb. There are four possible types of transitions (π–π*, n–π*, σ–σ*, and n–σ*), which can be ranked by the energy required for excitation as follows: σ–σ* > n–σ* > π–π* > n–π*.
● The absorption of ultraviolet light by a chemical compound will produce a distinct spectrum
which aids in the identification of the compound.
Instrumentation of UV Spectroscopy
Light Source
● Tungsten filament lamps and hydrogen-deuterium lamps are the most widely used and suitable light sources, as together they cover the whole UV and visible region.
● Tungsten filament lamps are rich in red radiation and emit mainly at wavelengths above 375 nm, while hydrogen-deuterium lamps supply the intensity below 375 nm.
Monochromator
● A monochromator is generally composed of prisms and slits.

● Most of the spectrophotometers are double beam spectrophotometers.


● The radiation emitted from the primary source is dispersed with the help of rotating prisms.
● The various wavelengths separated by the prism are then selected by the slit, such that rotation of the prism allows a series of continuously increasing wavelengths to pass through the slit for recording.
● The beam selected by the slit is monochromatic and further divided into two beams with the
help of another prism.
Sample and reference cells
● One of the two divided beams is passed through the sample solution and the second beam is passed through the reference solution.
● Both sample and reference solution are contained in the cells.
● These cells are made of either silica or quartz. Glass can’t be used for the cells as it also
absorbs light in the UV region.
Detector
● Generally, two photocells serve as the detector in UV spectroscopy.
● One photocell receives the beam from the sample cell and the second receives the beam from the reference cell.
● The intensity of the radiation from the reference cell is stronger than that of the sample beam. This results in the generation of pulsating or alternating currents in the photocells.
Amplifier
● The alternating current generated in the photocells is transferred to the amplifier.
● The amplifier is coupled to a small servomotor.
● The current generated in the photocells is generally of very low intensity; the main purpose of the amplifier is to amplify the signal many times so that a clear, recordable signal is obtained.
Recording devices
● Most of the time, the amplifier is coupled to a pen recorder which is connected to a computer.
● The computer stores all the data generated and produces the spectrum of the desired compound.
Applications of UV Spectroscopy
Detection of Impurities
● It is one of the best methods for the determination of impurities in organic molecules.
● Additional peaks observed due to impurities in the sample can be compared with the spectrum of the standard raw material.
● Impurities can also be detected by measuring the absorbance at a specific wavelength.
Structure elucidation of organic compounds
● It is useful in the structure elucidation of organic molecules, for example in detecting the presence or absence of unsaturation or of heteroatoms.
● UV absorption spectroscopy can be used for the quantitative determination of compounds
that absorb UV radiation.
● UV absorption spectroscopy can characterize compounds that absorb UV radiation and is therefore used in the qualitative identification of compounds. Identification is done by comparing the absorption spectrum with the spectra of known compounds.
● The technique is used to detect the presence or absence of functional groups in a compound; the absence of a band at a particular wavelength is regarded as evidence for the absence of a particular group.
● Kinetics of reaction can also be studied using UV spectroscopy. The UV radiation is passed
through the reaction cell and the absorbance changes can be observed.
● Many drugs, whether as raw material or in a formulation, can be assayed by making a suitable solution of the drug in a solvent and measuring the absorbance at a specific wavelength (a Beer-Lambert calibration sketch follows this list).
● Molecular weights of compounds can be measured spectrophotometrically by preparing suitable derivatives of these compounds.

● UV spectrophotometer may be used as a detector for HPLC.
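The quantitative and assay applications listed above rest on the Beer-Lambert law, A = εbc. Below is a minimal sketch, assuming hypothetical calibration standards and absorbance values, of how a calibration line is built and an unknown concentration is read back from it.

```python
import numpy as np

# Hypothetical calibration standards (mol/L) and their absorbances at the analytical wavelength
conc = np.array([0.0, 2e-5, 4e-5, 6e-5, 8e-5])
absb = np.array([0.002, 0.101, 0.198, 0.304, 0.399])

slope, intercept = np.polyfit(conc, absb, 1)    # slope ≈ epsilon * b from A = epsilon * b * c

a_sample = 0.250                                # measured absorbance of the unknown
c_sample = (a_sample - intercept) / slope       # concentration read back from the line
print(f"epsilon*b ≈ {slope:.3e} L/mol, c(sample) ≈ {c_sample:.2e} mol/L")
```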


Q. Explain the basic principles of instrumentation and its applications.
Ans. Instrumentation is a collective term for measuring instruments that are used for indicating,
measuring and recording physical quantities. The term has its origins in the art and science of
scientific instrument-making. Instrumentation can refer to devices as simple as direct-reading
thermometers, or as complex as multi-sensor components of industrial control systems. Today,
instruments can be found in laboratories, refineries, factories and vehicles, as well as in everyday
household use (e.g., smoke detectors and thermostats).
In some cases the sensor is a very minor element of the mechanism. Digital cameras and wristwatches
might technically meet the loose definition of instrumentation because they record and/or display
sensed information. Under most circumstances neither would be called instrumentation, but when
used to measure the elapsed time of a race and to document the winner at the finish line, both would
be called instrumentation.
Household
A very simple example of an instrumentation system is a mechanical thermostat, used to control a
household furnace and thus to control room temperature. A typical unit senses temperature with a bi-
metallic strip. It displays temperature by a needle on the free end of the strip. It activates the furnace
by a mercury switch. As the switch is rotated by the strip, the mercury makes physical (and thus
electrical) contact between electrodes.
Another example of an instrumentation system is a home security system. Such a system consists of
sensors (motion detection, switches to detect door openings), simple algorithms to detect intrusion,
local control (arm/disarm) and remote monitoring of the system so that the police can be summoned.
Communication is an inherent part of the design.
Kitchen appliances use sensors for control.
● A refrigerator maintains a constant temperature by measuring the internal temperature.
● A microwave oven sometimes cooks via a heat-sense-heat-sense cycle until the sensor indicates that cooking is done.
● An automatic ice machine makes ice until a limit switch is thrown.
● Pop-up bread toasters can operate by time or by heat measurements.
● Some ovens use a temperature probe to cook until a target internal food temperature is
reached.
● A common toilet refills the water tank until a float closes the valve. The float is acting as a
water level sensor.
Automotive
Modern automobiles have complex instrumentation. In addition to displays of engine rotational
speed and vehicle linear speed, there are also displays of battery voltage and current, fluid levels,
fluid temperatures, distance traveled and feedbacks of various controls (turn signals, parking brake,
headlights, transmission position). Cautions may be displayed for special problems (fuel low, check
engine, tire pressure low, door ajar, seat belt unfastened). Problems are recorded so they can be
reported to diagnostic equipment. Navigation systems can provide voice commands to reach a
destination. Automotive instrumentation must be cheap and reliable over long periods in harsh
environments. There may be independent airbag systems which contain sensors, logic and actuators.
Anti-skid braking systems use sensors to control the brakes, while cruise control affects throttle
position. A wide variety of services can be provided via communication links such as the OnStar system.
Autonomous cars (with exotic instrumentation) have been demonstrated.
Aircraft
Early aircraft had a few sensors. "Steam gauges" converted air pressures into needle deflections that
could be interpreted as altitude and airspeed. A magnetic compass provided a sense of direction. The
displays to the pilot were as critical as the measurements.
A modern aircraft has a far more sophisticated suite of sensors and displays, which are embedded
into avionics systems. The aircraft may contain inertial navigation systems, global positioning
systems, weather radar, autopilots, and aircraft stabilization systems. Redundant sensors are used for reliability. A subset of the information may be transferred to a crash recorder to aid mishap
investigations. Modern pilot displays now include computer displays including head-up displays.
Air traffic control radar is a distributed instrumentation system. The ground portion transmits an
electromagnetic pulse and receives an echo (at least). Aircraft carry transponders that transmit codes
on reception of the pulse. The system displays aircraft map location, an identifier and optionally
altitude. The map location is based on sensed antenna direction and sensed time delay. The other
information is embedded in the transponder transmission.
Q. Describe experimental error and data analysis.
Ans. Theory: Any measurement of a physical quantity always involves some uncertainty or
experimental error. This means that if we measure some quantity and then repeat the measurement,
we will most certainly obtain a different value the second time around. The question then is: Is it
possible to know the true value of a physical quantity? The answer to this question is that we cannot.
However, with greater care during measurements and with the application of more refined experimental methods, we can reduce the errors and thereby gain better confidence that the measurements are closer to the true value. Thus, one should not only report the result of a measurement but also give some
indication of the uncertainty of the experimental data.
Experimental error, assessed in terms of accuracy and precision, is defined as the difference between a measurement and the true value, or the difference between two measured values. These two terms
have often been used synonymously, but in experimental measurements there is an important
distinction between them.
Accuracy measures how close the measured value is to the true value or accepted value. In other
words, how correct the measurement is. Quite often however, the true or accepted value of a physical
quantity may not be known, in which case it is sometimes impossible to determine the accuracy of a
measurement.
Precision refers to the degree of agreement among repeated measurements or how closely two or
more measurements agree with each other. The term is sometimes referred to as repeatability or
reproducibility. In fact, a measurement that is highly reproducible tends to give values which are very
close to each other. The concepts of precision and accuracy are demonstrated by the series of targets
shown in the figure below. If the centre of the target is the “true value”, then A is very precise
(reproducible) but not accurate; target B demonstrates both precision and accuracy (and this is the
goal in a laboratory); the average of target C’s scores gives an accurate result but the precision is poor; and
target D is neither precise nor accurate. It is important to note that no matter how keenly planned and
executed, all experiments have some degree of error or uncertainty. Thus, one should learn how to
identify, correct, or evaluate sources of error in an experiment and how to express the accuracy and
precision of measurements when collecting data or reporting results.
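A minimal sketch of how precision and accuracy can be expressed numerically for a set of replicate measurements, assuming a known true value; the replicate data below are invented for illustration only.

```python
import statistics

true_value = 10.00                       # accepted value, assumed known for this illustration
replicates = [9.88, 9.92, 9.95, 9.90]    # hypothetical replicate measurements

mean = statistics.mean(replicates)
spread = statistics.stdev(replicates)                      # precision: agreement among replicates
rel_error = (mean - true_value) / true_value * 100         # accuracy: deviation from the true value

print(f"mean = {mean:.3f}, s = {spread:.3f}, relative error = {rel_error:+.2f} %")
```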
Types of experimental errors
Three general types of errors are encountered in typical laboratory measurements:
random or statistical errors, systematic errors, and gross errors.
Random (or indeterminate) errors arise from uncontrollable fluctuations in variables that affect experimental measurements and therefore have no specific cause. These errors cannot be
positively identified and do not have a definite measurable value; instead, they fluctuate in a random
manner. These errors affect the precision of a measurement and are sometimes referred to as two-
sided errors because in the absence of other types of errors, repeated measurements yield results that
fluctuate above and below the true value. With a sufficiently large number of experimental measurements, the data become evenly scattered around an average value, or mean.
Thus, precision of measurements subject to random errors can be improved by repeated
measurements. Random errors can be easily detected and can be reduced by repeating the
measurement or by refining the measurement method or technique. Systematic (or determinate)
errors are instrumental, methodology-based, or personal mistakes that lead to “skewed” data, that is, data consistently deviated in one direction from the true value. These types of errors arise from a specific cause and do not lead to scattering of results around the actual value. Systematic errors can be identified and eliminated with careful inspection of the experimental methods, or by cross-calibration of instruments.

A determinate error can be further categorized into two: constant determinate error and proportional
determinate error.
Constant determinate error (ecd) gives the same amount of error independent of the concentration of
the substance being analyzed, whereas proportional determinate error (epd) depends directly on the
concentration of the substance being analyzed (i.e., epd = K C), where K is a constant and C is the
concentration of the analyte. Therefore, the total determinate error (Etd) will be the sum of the
proportional and constant determinate errors, i.e., Etd = ecd + epd.
Gross errors are caused by an experimenter’s carelessness or equipment failure. As a result, one gets measurements, called outliers, that are quite different from the other sets of similar measurements (i.e., the outliers are so far above or below the true value that they are usually discarded when assessing data). The “Q-test” (discussed later) is a systematic way to determine whether a data point should be discarded or not.
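A small sketch of the relation Etd = ecd + epd with epd = K·C, using purely illustrative values for ecd and K, to show how the constant term dominates the relative error at low analyte concentration.

```python
def total_determinate_error(concentration: float, e_cd: float = 0.010, k: float = 0.02) -> float:
    """Etd = ecd + epd, where the proportional part is epd = K * C (illustrative values)."""
    return e_cd + k * concentration

for c in (0.1, 1.0, 10.0):                      # hypothetical analyte concentrations
    e_td = total_determinate_error(c)
    print(f"C = {c:5.1f}: Etd = {e_td:.3f}  (relative error = {e_td / c:.1%})")
```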
Studying a problem through the use of statistical data analysis often involves four basic steps, namely: (a) defining the problem, (b) collecting the data, (c) analyzing the data, and (d) reporting the
results. In order to obtain accurate data about a problem, an exact definition of the problem must be
made. Otherwise, it would be extremely difficult to gather data without a clear definition of the
problem. On collection of data, one must start with an emphasis on the importance of defining the
population, the set of all elements of interest in a study, about which we seek to make inferences.
Here, all the requirements of sampling, the operations involved in getting a reasonable amount of
material that is representative of the whole population, and experimental design must be met.
Sampling is usually the most difficult step in the entire analytical process of chemical analysis,
particularly where large quantities of samples (a sample is the subset of the population) to be
analysed are concerned. Proper sampling methods should ensure that the sample obtained for
analysis is representative of the material to be analyzed and that the sample that is analyzed in the
laboratory is homogeneous. The more representative and homogeneous the samples are, the smaller
will be the part of the analysis error that is due to the sampling step. Note that, an analysis cannot be
more precise than the least precise operation.
The main idea of statistical inference is to take a random finite sample from a population (since it is
not practically feasible to test the entire population) and then use the information from the sample to
make inferences about particular population characteristics or attributes such as mean (measure of
central tendency), the standard deviation (measure of spread), or the proportion of items in the
population that have a certain characteristic. A sample is therefore the only realistic way to obtain
data due to the time and cost constraints. It also saves effort. Furthermore, a sample can, in some cases, provide as much or more accuracy than a corresponding study that would otherwise attempt to
investigate the entire population (careful collection of data from a sample will often provide better
information than a less careful study that attempts to look at everything). Note that data can be either
qualitative, labels or names used to identify an attribute of each element of the population, or
quantitative, numeric values that indicate how much or how many of a particular element exists in the entire population.
Q. Explain the Errors and treatment of Analytical data.
Ans. Errors and treatment of Analytical data
Errors and Sources of Errors:
1- Determinate (Systematic) Errors: these have a definite value, an assignable cause, and are of the same magnitude for replicate measurements made in the same way.
2- Indeterminate (Random) Errors.
1- Systematic Errors: there are three types of systematic errors:
A- Instrumental Errors: - all measuring devices, such as pipets and burets, are sources of systematic errors.
B- Method Errors: - the non-ideal chemical or physical behavior of the reagents and reactions upon which an analysis is based often introduces systematic method errors.
Analysis of Standard Samples:
The best way to estimate the bias of an analytical method is by analyzing standard reference
materials (SRMs), materials that contain one or more analytes at known concentration levels.
Standard reference materials are obtained in several ways.
C- Personal Errors: - many measurements require personal judgment. Examples include estimating the position of a pointer between two scale divisions, judging the color of a solution at the end point of a titration, or reading the level of a liquid with respect to a graduation in a pipet or buret.
The Effect of Systematic Errors upon Analytical Results: systematic errors may be either constant or proportional.
A- Constant Errors: - a constant error contributes the same absolute error regardless of the amount of analyte, so its relative effect becomes more serious as the size of the measured quantity decreases.
B- Proportional Errors:- A common cause of proportional errors is the presence of interfering
contaminants in the sample.
Blank Determinations:
A blank contains the reagents and solvents used in a determination, but no analyte. Often, many of
the sample constituents are added to simulate the analyte environment, which is called the sample
matrix. In a blank determination, all steps of the analysis are performed on the blank material.
The results are then applied as a correction to the sample measurements. Blank determinations reveal
errors due to interfering contaminants from the reagents and vessels employed in the analysis.
Blanks are also used to correct titration data for the volume of reagent needed to cause an indicator to
change color.
When we use an analytical method we make three separate evaluations of experimental error. First,
before we begin the analysis we evaluate potential sources of error to ensure they will not adversely affect our results. Second, during the analysis we monitor our measurements to ensure that errors
remain acceptable. Finally, at the end of the analysis we evaluate the quality of the measurements and
results, and compare them to our original design criteria. This chapter provides an introduction to
sources of error, to evaluating errors in analytical measurements, and to the statistical analysis of
data.

BCHET-141: Sampling and Error in Chemical Analysis


Guess Paper-III

Q. Write about the techniques of atomisation and sample introduction.


Ans. Aqueous Sample Introduction: Most of the FAAS and FAES systems use aqueous sample
introduction through the nebulizer and into the burner head. This works well for samples that are
already in the aqueous phase or can be digested in acid such as soil, atmospheric particles, and tissue
samples. All aqueous samples for FAAS and FAES contain some amount of strong acid, usually in the range of 1 to 5 percent. This acid acts to keep the metal analytes in the dissolved phase and to avoid adsorption
of metal ions to sample container and instrument surfaces. FAAS and FAES operated under these
conditions suffer from relatively poor detection limits that in some cases can be improved upon with
specialized sample introduction techniques. These techniques, described below, are physical
attachments to the basic AAS unit that may or may not replace the burner head and in some cases
allow solid samples and high particulate containing samples to be analyzed. A few of the techniques
allow the analysis of elements not commonly measured by FAAS or FAES.
Mercury Cold Vapor
Inorganic and organic forms of mercury are ubiquitous in the environment, including in water and food, and come primarily from the burning of coal. As a result, it is necessary to measure the concentration of mercury to assess the danger posed by this toxin. Mercury is a neurotoxin, and extremely small concentrations (parts per billion or parts per trillion) can have detrimental effects due to bioaccumulation in the food chain (concentrations increase as one goes from one trophic level to the next). Several fish species located in streams downwind from coal-burning regions contain significant, and in some cases dangerous, concentrations of Hg. Flame AAS techniques only yield
detection limits of approximately one part per million which is inadequate for environmental and
food monitoring. The cold vapor technique described below yields detection limits in the parts per
trillion range. Equal or even lower detection limits can be obtained by ICP-MS and atomic
fluorescence techniques (a more advanced technique).
Hydride Generation
Another external attachment to a FAAS instrument is the hydride generation system that is used to
analyze for arsenic (As), bismuth (Bi), mercury(Hg), antimony (Sb), selenium (Se), and tellurium (Te);
some of these are notable toxins. This technique works in a similar manner to the cold vapor
technique but sodium borohydride is used as the reducing agent to generate a volatile metal hydride
complex. In addition, a flame is used to decompose the metal hydride. Again water, soil, and tissue
samples must be digested to free the metal from organic and inorganic complexes and place it in its
cationic state. The generated metal hydride is passed through the flame as a pulse input where it is
degraded by heat to its gaseous elemental state. In this state the metal will absorb the source radiation
(again from a HCL) and the absorbance reading is directly proportional to the concentration of metal
in the sample. Instrumental calibration and data processing is identical to the cold vapor mercury
technique.
Electrothermal Vaporization (Graphite Furnace Atomic Absorption, GFAA)
The graphite furnace, formally known as an electrothermal vaporization unit, uses a typical FAAS
unit but replaces the burner head with a furnace system. No flame is used in the operation of this
system; instead the metal in the sample is atomized by heating the cell with electrical resistance to
temperatures not obtainable in flame systems. The heart of the system is the graphite furnace cell itself. All systems use
an automatic sampler to ensure reproducible results between replicate analyses. In GFAA, samples
are usually digested (in acid) to assure homogeneity of the injected solution. A sample is placed into
the graphite furnace cell through a small hole in the side. Cells range in size depending on the brand
of the instrument but are usually about the diameter of a standard writing pencil (~ 0.5 cm) and 2 to 3
cm in length. Argon gas is passed through the cell to pass vapor and analytes into the radiant beam
and the furnace and sample are then cycled through a three-step heating process. First, the water is driven off by resistance heating at 107 °C (the resulting vapor partially blocks the beam of the HCL source, but this is not recorded). Next, the sample is “ashed” at 480 °C to degrade any organic material in the sample
(again this absorbance signal reduction is not recorded). Finally, the cell is rapidly heated to 2000 °C
where the analytes are placed in their volatile elemental state where they absorb radiant light from
the Hollow Cathode Lamp; this signal is recorded. The system is then prepared for another run by
heating the cell to 2500 °C to remove excess material before cooling the chamber with tap water back
to room temperature. Then another standard or sample can be added and the process is repeated.
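The three-step heating cycle described above (plus the clean-out step) can be summarized as a simple data structure. The temperatures are the ones quoted in the text; the step labels and the loop are only a schematic sketch, since real methods also specify ramp and hold times, which are not given here.

```python
# Temperatures are those quoted above; hold and ramp times are instrument-specific and
# deliberately omitted, so this is only a schematic of the heating cycle.
furnace_program = [
    ("dry",       107),   # drive off water (signal not recorded)
    ("ash",       480),   # destroy the organic matrix (signal not recorded)
    ("atomize",  2000),   # free atoms absorb the HCL radiation (signal recorded)
    ("clean-out", 2500),  # burn off residue before the next sample
]

for step, temp_c in furnace_program:
    recorded = (step == "atomize")
    print(f"{step:>9}: {temp_c:4d} °C   record absorbance: {recorded}")
```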
Glow-Discharge Atomization for Solid Samples
The glow-discharge device is a highly specialized sample introduction system since it is mostly used
for electrically conductive samples, such as a piece of metal or semiconductors. Like the GFAA unit it
is an attachment that replaces the burner head. A sample is placed in a low-pressure argon chamber
where a potential is placed between the container and the sample. Excited argon atoms sputter atoms
from the surface of the sample, similar to the operation of the hollow cathode lamps. Gaseous phase
metal atoms rise into the path of the source radiation and absorb their characteristic wavelength. The
largest advantage of the glow discharge technique is the direct analysis of solid samples.
Fluorescence: The above discussions have mostly focused on absorption and emission processes and
instruments. Recent advances in atomic fluorescence spectrometry make this technique possible for a
few elements (mercury, arsenic, selenium, tellurium, antimony and bismuth). Again, fluorescence
occurs when an electron is excited to a higher electronic state and decays by resonance, direct line, or
stepwise fluorescence. Instrument components are similar to those discussed above, but the key difference is that the source lamp is located at a 90° angle with respect to the detector in order
to prevent the source radiation from being measured by the detector. Lamps used to excite electrons
include hollow cathode lamps, electrode-less discharge lamps, lasers, plasmas, and xenon arc lamps.
Atomizers include flames, plasmas, electrothermal atomizers, and glow discharge chambers; thus
samples can be introduced as cold vapors, liquids, hydrides, and solids. Sub-parts per billion
detection limits are obtainable from these instruments.
Q. What do you mean by the term Keto-Enol Tautomers?
Ans. Because of the acidity of α hydrogens, carbonyls undergo keto-enol tautomerism. Tautomers are
rapidly interconverted constitutional isomers, usually distinguished by a different bonding location
for a labile hydrogen atom and a differently located double bond. The equilibrium between tautomers
is not only rapid under normal conditions, but it often strongly favors one of the isomers (acetone, for
example, is 99.999% keto tautomer). Even in such one-sided equilibria, evidence for the presence of
the minor tautomer comes from the chemical behavior of the compound. Tautomeric equilibria are
catalyzed by traces of acids or bases that are generally present in most chemical samples.
Aldehydes and ketones are somewhat lycanthropic chemical species. Take acetone. It behaves as a
garden-variety polar aprotic solvent, which makes it a useful medium for SN2 reactions; it reacts
readily with nucleophiles like enolates, Grignards, and amines; and is several pKa units less acidic
than alcohols (~20 vs. 16). This chemical behavior reflects the fact that it spends the vast majority of its
time as a ketone, with an electrophilic carbonyl carbon. It’s nice and stable. You use it to wash
glassware with, or as paint thinner.

But every couple of blue moons (for acetone in water, about 1/6600th of the time at 23 °C) acetone undergoes a transformation to its alter ego, the enol form. [EDIT: the equilibrium constant is 1 × 10^-8 in water.] And as its name suggests, the enol form – which is an isomer, not a resonance form – has the
characteristics of both alkenes and alcohols: it can involve itself in hydrogen bonding via the OH
group, it is acidic at oxygen, and it reacts with electrophiles (like aldehydes, for example, in the Aldol
reaction). In short, the enol form differs from the keto form in its polarity, acidity, and nucleophilicity
just like werewolves differ from ordinary folks in their copious body hair, nocturnal
rambunctiousness, and peculiar dietary habits.
The equilibrium lies toward the keto form because of bond energies. The keto form has a C–H, C–
C, and C=O bond whereas the enol has a C=C, C–O an O–H bond. The sum of the first three is about
359 kcal/mol (1500 kJ/mol) and the second three is 347 kcal/mol (1452 kJ/mol). The keto form is
therefore more thermodynamically stable by 12 kcal/mol (48 kJ/mol).
Although the keto form is most stable for aldehydes and ketones in most situations, there are several
factors that will shift the equilibrium toward the enol form. The same factors that stabilize alkenes or
alcohols will also stabilize the enol form. There are two strong factors and three subtle factors.
Biggies (2):
1. Aromaticity. Phenols can theoretically exist in their keto forms, but the enol form is greatly
favored due to aromatic stabilization.
2. Hydrogen Bonding. Nearby hydrogen bond acceptors stabilize the enol form. When a Lewis
basic group is nearby, the enol form is stabilized by internal hydrogen bonding.
Subtle factors (3):
3. Solvent. Solvent can also play an important role in the relative stability of the enol form. For
example, in benzene, the enol form of 2,4-pentanedione predominates in a 94:6 ratio over the
keto form, whereas the numbers almost reverse completely in water. What’s going on? In a
polar protic solvent like water, the lone pairs will be involved in hydrogen bonding with the
solvent, making them less available to hydrogen bond with the enol form.
4. Conjugation. π systems are a little like Cheerios in milk: given the choice, they would rather connect together than hang out in isolation. So in the molecule depicted, the more favorable tautomer will be the one in which the double bond is connected by conjugation to the phenyl ring.
5. Substitution. In the absence of steric factors, increasing substitution at carbon will stabilize
the enol form. Enols are alkenes too – so any factors that stabilize alkenes, will stabilize enols
as well. All else being equal, double bonds increase in thermodynamic stability as
substitution is increased. So, all else being equal, the more highly substituted enol should be the more stable one. As you might suspect, “all things being equal” sounds like a big caveat. It is – all
else is rarely equal. But that’s a topic for another day – or, more likely, another course.
Q. Explain the classification of electromagnetic radiation.
Ans. Electromagnetic (EM) radiation is a form of energy that is all around us and takes many forms,
such as radio waves, microwaves, X-rays and gamma rays. Sunlight is also a form of EM energy, but
visible light is only a small portion of the EM spectrum, which contains a broad range of
electromagnetic wavelengths.
Electromagnetic theory: Electricity and magnetism were once thought to be separate forces.
However, in 1873, Scottish physicist James Clerk Maxwell developed a unified theory of
electromagnetism. The study of electromagnetism deals with how electrically charged particles
interact with each other and with magnetic fields.
There are four main electromagnetic interactions:
● The force of attraction or repulsion between electric charges is inversely proportional to the
square of the distance between them.
● Magnetic poles come in pairs that attract and repel each other, much as electric charges do.
● An electric current in a wire produces a magnetic field whose direction depends on the
direction of the current.
● A moving electric field produces a magnetic field, and vice versa.

Maxwell also developed a set of formulas, called Maxwell's equations, to describe these phenomena.
Waves and fields: EM radiation is created when an atomic particle, such as an electron, is accelerated
by an electric field, causing it to move. The movement produces oscillating electric and magnetic
fields, which travel at right angles to each other in a bundle of light energy called a photon. Photons
travel in harmonic waves at the fastest speed possible in the universe: 186,282 miles per second
(299,792,458 meters per second) in a vacuum, also known as the speed of light. The waves have
certain characteristics, given as frequency, wavelength or energy.
Electromagnetic waves are formed when an electric field (shown in red arrows) couples with a
magnetic field (shown in blue arrows). Magnetic and electric fields of an electromagnetic wave are
perpendicular to each other and to the direction of the wave.
A wavelength is the distance between two consecutive peaks of a wave. This distance is given in
meters (m) or fractions thereof. Frequency is the number of waves that form in a given length of time.
It is usually measured as the number of wave cycles per second, or hertz (Hz). A short wavelength
means that the frequency will be higher because one cycle can pass in a shorter amount of time,
according to the University of Wisconsin. Similarly, a longer wavelength has a lower frequency
because each cycle takes longer to complete.
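The inverse relation between wavelength and frequency follows from c = λν, and the corresponding photon energy from E = hν. A small sketch using the standard physical constants:

```python
C = 299_792_458.0        # speed of light in vacuum, m/s
H = 6.626_070_15e-34     # Planck constant, J*s

def frequency_hz(wavelength_m: float) -> float:
    """nu = c / lambda"""
    return C / wavelength_m

def photon_energy_j(wavelength_m: float) -> float:
    """E = h * nu = h * c / lambda"""
    return H * C / wavelength_m

for label, lam in [("radio, 10 mm", 1e-2), ("visible, 550 nm", 550e-9), ("X-ray, 1 nm", 1e-9)]:
    print(f"{label:16s} nu = {frequency_hz(lam):.3e} Hz   E = {photon_energy_j(lam):.3e} J")
```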
The EM spectrum: EM radiation spans an enormous range of wavelengths and frequencies. This
range is known as the electromagnetic spectrum. The EM spectrum is generally divided into seven
regions, in order of decreasing wavelength and increasing energy and frequency. The common
designations are: radio waves, microwaves, infrared (IR), visible light, ultraviolet (UV), X-rays and
gamma rays. Typically, lower-energy radiation, such as radio waves, is expressed as frequency;
microwaves, infrared, visible and UV light are usually expressed as wavelength; and higher-energy
radiation, such as X-rays and gamma rays, is expressed in terms of energy per photon.
Radio waves: Radio waves are at the lowest range of the EM spectrum, with frequencies of up to
about 30 billion hertz, or 30 gigahertz (GHz), and wavelengths greater than about 10 millimeters (0.4
inches). Radio is used primarily for communications including voice, data and entertainment media.
Microwaves: Microwaves fall in the range of the EM spectrum between radio and IR. They have
frequencies from about 3 GHz up to about 30 trillion hertz, or 30 terahertz (THz), and wavelengths of
about 10 mm (0.4 inches) to 100 micrometers (μm), or 0.004 inches. Microwaves are used for high-
bandwidth communications, radar and as a heat source for microwave ovens and industrial
applications.
Infrared: Infrared is in the range of the EM spectrum between microwaves and visible light. IR has
frequencies from about 30 THz up to about 400 THz and wavelengths of about 100 μm (0.004 inches)
to 740 nanometers (nm), or 0.00003 inches. IR light is invisible to human eyes, but we can feel it as
heat if the intensity is sufficient.
Visible light: Visible light is found in the middle of the EM spectrum, between IR and UV. It has
frequencies of about 400 THz to 800 THz and wavelengths of about 740 nm (0.00003 inches) to 380 nm (0.000015 inches). More generally, visible light is defined as the wavelengths that are visible to most
human eyes.
Ultraviolet: Ultraviolet light is in the range of the EM spectrum between visible light and X-rays. It
has frequencies of about 8 × 10^14 to 3 × 10^16 Hz and wavelengths of about 380 nm (0.000015 inches) to about 10 nm (0.0000004 inches). UV light is a component of sunlight; however, it is invisible to the human eye. It has numerous medical and industrial applications, but it can damage living tissue.
X-rays: X-rays are roughly classified into two types: soft X-rays and hard X-rays. Soft X-rays comprise
the range of the EM spectrum between UV and gamma rays. Soft X-rays have frequencies of about 3 × 10^16 to about 10^18 Hz and wavelengths of about 10 nm (4 × 10^-7 inches) to about 100 picometers (pm), or 4 × 10^-8 inches. Hard X-rays occupy the same region of the EM spectrum as gamma rays. The
only difference between them is their source: X-rays are produced by accelerating electrons, while
gamma rays are produced by atomic nuclei.
Gamma-rays: Gamma-rays are in the range of the spectrum above soft X-rays. Gamma-rays have
frequencies greater than about 10^18 Hz and wavelengths of less than 100 pm (4 × 10^-9 inches).
Gamma radiation causes damage to living tissue, which makes it useful for killing cancer cells when
applied in carefully measured doses to small regions. Uncontrolled exposure, though, is extremely
dangerous to humans.
Q. What are the factors affecting conductometry?
Ans. Conductometry: Conductometry is defined as the measurement of the electrical conductivity of a solution.
Conductance: The ease with which current flows through a conductor is called conductance. In other words, it is defined as the reciprocal of the resistance. The unit of conductance is the siemens (S).
Principle of conductometry: The main principle of this method is that the movement of ions creates the electrical conductivity, and the movement of the ions depends on their concentration.
Theory of conductometry: The theory is mainly based on Ohm’s law, which states that the current (I) is directly proportional to the electromotive force (E) and inversely proportional to the resistance (R) of the conductor:
I = E/R
and the conductance, being the reciprocal of the resistance, is
C = 1/R = I/E
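A small numerical sketch of these relations: conductance computed as current divided by applied EMF (equivalently 1/R), followed by a conversion to specific conductance using an assumed cell constant (an illustrative value, not one from the text).

```python
def conductance_siemens(current_a: float, emf_v: float) -> float:
    """C = I / E, which is the same as 1/R (units: siemens)."""
    return current_a / emf_v

# Hypothetical measurement: 2.0 mA of current at 1.0 V applied EMF
g = conductance_siemens(2.0e-3, 1.0)     # conductance of the cell, S
cell_constant = 1.0                      # assumed cell constant, cm^-1
kappa = g * cell_constant                # specific conductance, S/cm
print(f"C = {g:.2e} S, specific conductance = {kappa:.2e} S/cm")
```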
Factors affecting the conductance measurement:
● Temperature: The conductivity of electrolytes increases with increasing temperature because the mobility of the ions increases with temperature.
● Concentration of the sample solution: The molar conductivity of the solution is inversely related to its concentration; it decreases as the concentration increases. A dilute solution is therefore used for the measurement of conductivity.
● Number of ions present in the sample solution: This depends on the dissociation of the compound into ions. The number of ions present in the solution is directly proportional to the conductance; strong electrolytes dissociate completely into ions and therefore have high conductance.
● Charge of the ions: The higher the charge on the ions, the greater their contribution to the conductivity.
● Size of the ions: The conductivity is inversely related to the size of the ions; an increase in the (hydrated) size of the ions decreases the conductivity.
Conductometry has notable application in analytical chemistry, where conductometric titration is a
standard technique. Conductometry is often applied to determine the total conductance of a solution
or to analyze the end point of titrations that include ions.
The principle of conductometric titration is based on the fact that during the titration, one of the ions
is replaced by the other and invariably these two ions differ in the ionic conductivity with the result
that conductivity of the solution varies during the course of titration.
Q. Explain pH titration.
Ans. Frequently an acid or a base is quantitatively determined by titration using a pH meter to detect the equivalence point rather than a visual indicator. This has the advantage that one actually monitors the change in pH at the equivalence point rather than just observing the change in color of a
visual indicator. This eliminates any indicator blank error. Some laboratory workers complain that
this method is more tedious than methods using visual indicators; they soon find, however, that after
running one titration to find out the approximate location of the equivalence point, they only need to
concern themselves with the dropwise addition of titrant close to the equivalence point on subsequent
titrations.
The titration of a mixture of phosphoric acid and hydrochloric acid is complicated by the fact that
phosphoric acid is a triprotic acid with Ka1 = 7.5 × 10^-3, Ka2 = 6.2 × 10^-8, and Ka3 = 4.8 × 10^-13. Ka1 is sufficiently large that the first proton from phosphoric acid cannot be differentiated from strong acids like hydrochloric acid. The second dissociation constant of phosphoric acid, however, differs greatly from the first, so the second proton can be neutralized separately and differentiated from the first phosphoric acid proton and the strong acid proton. The titration curve for such a mixture of phosphoric and hydrochloric acids therefore shows two distinct breaks.
The first break in the mixed acid curve indicates the amount of hydrochloric acid plus the amount of
the phosphoric acid. The amount of phosphoric acid in the sample is indicated by the difference
between first and second breaks in the titration curve. The first equivalence point volume (25.0 mL)
permits calculation of the total meq of HCl + (meq H3PO4)/3 since the first proton of H3PO4 is
neutralized. In this example
25.0 mL x 0.100 N = 2.50 meq
Taking the difference between the first and second equivalence point volumes (10.0 mL) one obtains:
10.0 mL x 0.100 N = 1.00 meq NaOH, which corresponds to 1.00 mmol H3PO4, i.e., 1.00 mmol x 3 eq/mol
= 3.00 meq H3PO4
From these two equations one can calculate that the sample contains 1.50 meq HCl and 1.00 mmol or
3.00 meq H3PO4. (Note: the normal concentration, N (eq/L), of phosphoric acid is 3-times the formal
concentration, F (f.w./L), since it has 3 protons per mole)
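The arithmetic above can be written out directly. The volumes and normality below are those of the worked example; the second equivalence volume (35.0 mL) is inferred from the stated 10.0 mL difference.

```python
n_naoh = 0.100          # normality of the NaOH titrant, eq/L
v_first = 25.0          # mL NaOH at the first equivalence point (from the curve)
v_second = 35.0         # mL NaOH at the second equivalence point (first + 10.0 mL difference)

meq_first = v_first * n_naoh                    # = meq HCl + (meq H3PO4)/3 = 2.50 meq
mmol_h3po4 = (v_second - v_first) * n_naoh      # second proton only -> 1.00 mmol H3PO4
meq_h3po4 = 3 * mmol_h3po4                      # 3 eq per mole -> 3.00 meq
meq_hcl = meq_first - mmol_h3po4                # 2.50 - 1.00 = 1.50 meq HCl

print(f"HCl = {meq_hcl:.2f} meq; H3PO4 = {mmol_h3po4:.2f} mmol = {meq_h3po4:.2f} meq")
```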
This type of analysis is ideally suited for the determination of strong acid impurities in a weak acid
and is unaffected by colored or suspended materials in the solution provided that these materials are
not acids or bases. Interference in the analysis would come from other weak or strong acids mixed into the sample. High concentrations of sodium ion or potassium ion in the sample can cause an error in the
reading of the glass electrode, (i.e., the absolute pH values may be in error) but generally will not
affect locating the equivalence points.
Titrations with the pH Meter
● Check out a pH meter, combination electrode, magnetic stirrer and stirring bar from the
instructor.
● Read the instructions on the use of the pH meter.
● Standardize the pH meter using the buffer supplied.
● Clean the electrode thoroughly with distilled water; drying is not necessary.
● Immerse the electrode in the solution to be titrated; it should not go to the bottom of the
titration vessel.
● Start the stirring motor, being careful that the stirring bar does not break the glass electrode.
You should allow room for the stirrer to rotate below the tip of the electrode.
● Set the mode to pH and begin the titration.
● Record pH and mL of titrant added.
● Watch for the region where the pH begins to change rapidly with each added portion of titrant. As the pH begins to change more rapidly, add the titrant in smaller portions (a sketch for locating the equivalence point from the recorded data follows this list).
● When you have passed the equivalence point by several mL, there is no reason to continue
any further in the titration.
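A minimal sketch for locating the equivalence point from the recorded (volume, pH) data, taking the maximum of the first derivative ΔpH/ΔV as the endpoint; the readings shown are invented for illustration.

```python
import numpy as np

# Hypothetical readings near the endpoint: volume of NaOH added (mL) and measured pH
volume = np.array([23.0, 24.0, 24.5, 24.8, 24.9, 25.0, 25.1, 25.2, 25.5, 26.0])
ph     = np.array([3.1,  3.4,  3.7,  4.0,  4.3,  5.9,  7.6,  8.0,  8.4,  8.7])

dph_dv = np.diff(ph) / np.diff(volume)          # first derivative, delta-pH / delta-V
midpts = (volume[:-1] + volume[1:]) / 2.0       # volume at the middle of each interval

v_eq = midpts[np.argmax(dph_dv)]                # steepest rise approximates the equivalence point
print(f"Equivalence point ≈ {v_eq:.2f} mL of titrant")
```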
Titration of the H3PO4-HCl mixtures
● Turn in a clean 250.0 mL volumetric flask to the laboratory instructor. Be sure that the flask is
clearly marked with your name and section number.
● Dilute the sample in the flask to the mark with boiled, distilled water.

● Into 250 mL beakers pipet 50.00 mL portions of the acid sample. Dilute the samples with 100
mL of distilled water and stir with a magnetic stirrer. Titrate with the standard NaOH, using the pH meter to detect the equivalence point (see the pH-meter procedure above). Perform an approximate titration
first, adding the titrant in 0.5 mL portions. Then on subsequent titrations add the NaOH in
one portion up to within 2 mL of the equivalence point. Then add the titrant in 0.10 mL, 0.05
mL, or 1 drop portions.
● Perform the titration accurately on three portions of the acid mixture.

Q. Explain ion exchange materials.


Ans. Ion exchange is a chemical reaction in which free mobile ions of a solid, the ion exchanger, are
exchanged for different ions of similar charge in solution. The exchanger must have an open network
structure, either organic or inorganic, which carries the ions and which allows ions to pass through it.
An ion exchanger is a water-insoluble substance which can exchange some of its ions for similarly
charged ions contained in a medium with which it is in contact; this definition is all-embracing. Referring to a “substance” rather than a compound includes many exchangers—some of them are natural products which do not have a well-defined composition. The term “medium” acknowledges that ion exchange can take place in both aqueous and nonaqueous solution, in molten salts, or even in contact with vapours. The definition is not limited to solids, since some organic solvents which are
immiscible with water can extract ions from aqueous solution by an ion exchange mechanism.
The definition also indicates something about the process of ion exchange. Basically it consists of
contact between the exchanger and the medium in which the exchange takes place. These are usually
a solid ion exchanger and an aqueous solution. The fact that ions are exchanged implies that the
exchanger must be ionized, but only one of the ions in the exchanger is soluble. That ion can
exchange, while the other, being insoluble, cannot do so.
There is no strict difference between ion exchange resins and chelating resins because some polymers can act as chelating or non-chelating substances depending on the chemical environment. Ion exchange resembles sorption because both are surface phenomena, and in both cases a solid takes up a dissolved species. The characteristic difference between the two phenomena is the stoichiometric nature of ion exchange: every ion removed from the solution is replaced by an equivalent amount of another ion of the same charge. In sorption, on the other hand, a solute is usually taken up nonstoichiometrically without being replaced.
Conventionally, ion exchange materials are classified into two categories, cation exchangers and
anion exchangers, based on the kind of ionic groups attached to the material.
Cation exchange materials contain negatively charged groups like sulphate, carboxylate, phosphate, benzoate, and so forth fixed to the backbone material; they allow the passage of cations but reject anions. Anion exchange materials contain positively charged groups like amino groups, alkyl-substituted phosphines, alkyl-substituted sulphides, and so forth fixed to the backbone material; they allow the passage of anions but reject cations. There are also amphoteric
exchangers that are able to exchange both cations and anions simultaneously. However, the
simultaneous exchange of cations and anions can be more efficiently performed in mixed beds that
contain a mixture of anion and cation exchange resins or passing the treated solution through several
different ion exchange materials. There is one other class of ion exchanger, known as the chelating ion exchanger. Many metal ions accept lone pairs of electrons from ligands, establishing covalent-like bonds called coordination bonds. Depending on the number of coordination bonds, the ligand is called
monodentate, bidentate, or polydentate. Coordination interactions are highly specific. An example of
a coordination compound is the coordination of a metal ion with ethylene diamine tetra acetic acid
(EDTA).

Natural Organic Products


Several natural organic materials possess ion exchange properties, or can have them conferred by simple chemical treatment. Plant and animal cells act as ion exchangers by virtue of the presence of carboxyl
groups of amphoteric proteins. These carboxyl groups, (–CO2H), and phenolic groups, (–OH), are
weakly acidic and will exchange their hydrogen ions for other cations under neutral or alkaline
conditions. The humins and humic acids found in natural soil “humus” are examples of this class of
exchanger; the partially decayed and oxidized plant products contain acid groupings. Several organic
products are marketed which are based on treated cellulose, either in fibre form for use in ion
exchange columns or as filter papers for ion exchange separations on paper. Many ion exchangers
have been prepared from other natural materials such as wood, fibres, peat, and coal by oxidation
with nitric acid or, better still, with concentrated sulphuric acid when the strongly acid sulphonic acid
group, (–SO3H) is introduced into the material. The latter process was particularly successful with
sulphonated coals. These can exchange in acid solution, because the exchanging group itself is
ionized under these conditions, whereas the weaker carboxylic and phenolic groups are not. All these
materials have certain disadvantages, however: they tend to colour the solutions being treated, and their properties are difficult to reproduce because of the difficulty of controlling the treatment they are given.

Q. Explain the term Development of Chromatograms.


Ans. The first paper on the subject appeared in 1903, written by Mikhail Semyonovich Tsvet (1872-
1919), a Russian-Italian biochemist, who also coined the word chromatography. Tsvet had managed
to separate a mixture of plant pigments, including chlorophyll, on a column packed with finely
ground calcium carbonate, using petroleum ether as the mobile phase. As the colored mixture passed
down the column, it separated into individual colored bands (the term chromatography comes from
the Greek words chroma, meaning color, and graphein, meaning writing, or drawing). Although
occasionally used by biochemists, chromatography as a science lagged until 1942, when A. J. P.
Martin (1910-2002) and R. L. M. Synge (1914-1994) developed the first theoretical explanations for the
chromatographic separation process. Although they eventually received the Nobel Prize in chemistry
for this work, chromatography did not come into wide use until 1952, when Martin, this time
working with A. T. James, described a way of using a gas instead of a liquid as the mobile phase, and
a highly viscous liquid coated on solid particles as the stationary phase.
Gas-liquid chromatography (now called gas chromatography) was an enormous advance. Eventually,
the stationary phase could be chemically bonded to the solid support, which improved the temperature stability of the column's packing. Gas chromatographs could then be operated at high
temperatures, so even large molecules could be vaporized and would progress through the column
without the stationary phase vaporizing and bleeding off. Additionally, since the mobile phase was a
gas, the separated compounds were very pure; there was no liquid solvent to remove. Subsequent
research on the technique produced many new applications.
The shapes of the columns themselves began to change, too. Originally vertical tubes an inch or so in
diameter, columns began to get longer and thinner when it was found that this increased the
efficiency of separation. Eventually, chemists were using coiled glass or fused silica capillary tubes
less than a millimeter in diameter and many yards long. Capillaries cannot be packed, but they are so
narrow that the stationary phase can simply be a thin coat on the inside of the column.
A somewhat different approach is the set of techniques known as "planar" or "thin layer"
chromatography (TLC), in which no column is used at all. The stationary phase is thinly coated on a
glass or plastic plate. A spot of sample is placed on the plate, and the mobile phase migrates through
the stationary phase by capillary action.
In the mid-1970s, interest in liquid mobile phases for column chromatography resurfaced when it was
discovered that the efficiency of separation could be vastly improved by pumping the liquid through
a short packed column under pressure, rather than allowing it to flow slowly down a vertical column
by gravity alone. High-pressure liquid chromatography, also called high performance liquid
chromatography (HPLC), is now widely used in industry. A variation on HPLC is Supercritical Fluid
Chromatography (SFC). Certain gases (carbon dioxide, for example), when highly pressurized above a certain temperature, enter a state of matter intermediate between gas and liquid. These
"supercritical fluids" have unusual solubility properties, some of the advantages of both gases and
liquids, and appear very promising for chromatographic use.
Most chemical compounds are not highly colored, as were the ones Tsvet used. A chromatographic
separation of a colorless mixture would be fruitless if there were no way to tell exactly when each
pure compound eluted from the column. All chromatographs must therefore have a detector attached, together with some kind of recorder to capture the detector's output, usually a chart recorder or its computerized equivalent. In gas chromatography, several kinds of detectors have been developed; the most common are the thermal conductivity detector, the flame ionization detector, and the electron capture detector. For HPLC, the most widely used is the ultraviolet (UV) absorbance detector, whose response is related to the concentration of the separated compound. The sensitivity of the detector is of special importance, and research has
continually concentrated on increasing this sensitivity, because chemists often need to detect and
quantify exceedingly small amounts of a material.
Within the last few decades, chromatographic instruments have been attached to other types of
analytical instrumentation so that the mixture's components can be identified as well as separated
(this takes the concept of the "detector" to its logical extreme). Most commonly, this second
instrument has been a mass spectrometer, which allows identification of compounds based on the
masses of molecular fragments that appear when the molecules of a compound are broken up.
Currently, chromatography as both science and practical tool is intensively studied, and several
scientific journals are devoted exclusively to chromatographic research.
Q. Describe the extraction of organic compounds.
Ans. The invention provides a method to extract organic compounds from aqueous mixtures, using a
specially selected organic compound as an extraction liquid. The method can be applied to remove
compounds such as acetic acid or ethanol from complex aqueous mixtures, including fermentation
reactions or broths, and can be used for in situ extraction of products or by-products from a
fermentation reaction. Some suitable extraction liquids for use in these methods include diethylene
glycol dibutyl ether, tripropionin, and di(ethylene glycol) diisobutyl ether.
1. A process for the extraction of organic compounds comprising an amino acid, a lactam, a phenol
compound or mixtures thereof from aqueous solutions and/or suspensions of same, which consists
essentially of:
treating said solutions and/or suspensions with a liquid organic carboxylic acid that is immiscible
with water at 20° C. in an amount sufficient to form an organic phase, containing the organic
compounds, and
isolating the organic phase from the aqueous phase to extract said organic compounds from said
solutions or suspensions.
2. The process according to claim 1, wherein the organic acid has a solubility equal to or less than 5
grams per liter in water at 20° C.
3. The process according to claim 2, wherein the acid comprises n-heptanoic, n-octanoic, 2-
ethylhexanoic, isopropyl acetic, n-nonanoic acid or mixtures thereof.
4. The process of claim 1 wherein the solution is agitated to facilitate contact of the acid with the organic compound.
5. The process of claim 1 wherein the treating step comprises a countercurrent extraction in a vertical
column.
6. The process according to claim 1 wherein the extraction is carried out at a temperature of between
25° and 60° C.
7. The method of claim 1 wherein the amino acid is 11-aminoundecanoic acid or 12-aminododecanoic acid, the lactam is dodecanolactam, and the phenol compound is 2,3-dichlorophenol or ortho-chlorophenol.
8. A process for the extraction of organic compounds comprising, amino acids, lactams, or phenols
from an aqueous solution containing same which consists essentially of
treating said solution with a sufficient amount of n-heptanoic, n-octanoic, 2-ethylhexanoic, isopropyl acetic or n-nonanoic acid, or mixtures thereof, to form an organic phase containing the organic
compound and the acid, and separating the organic phase from the aqueous solution to extract said
organic compounds from said solution.
9. The process according to claim 8 which further comprises recovering the organic compounds from
the organic phase.
10. The process according to claim 8 wherein the extraction is carried out at a temperature of between
25° and 60° C.
11. The process according to claim 8 wherein the treating step is repeated at least once to increase the
amount of organic compound to be extracted.
12. The method of claim 8 wherein the amino acid is 11-aminoundecanoic acid or 12-
aminododecanoic acid, the lactam is dodecanolactam, and the phenol compound is 2,3-
dichlorophenol or ortho-chlorophenol.
13. A process for reducing the amount of organic compounds comprising heptylamine, N,N-dimethyl
benzylamine, hexylamine, di-N-hexylamine, an amino acid, a lactam, a phenol compound, or
mixtures thereof from wastewater containing same, which consists essentially of treating the
wastewater with a sufficient amount of a carboxylic acid which is immiscible with water and which
is a liquid at the treatment temperature to form an organic phase containing the acid and a portion of
the organic compounds, and separating the organic phase from the wastewater to reduce the content
of said organic compounds by removal therefrom.
14. The process of claim 13 wherein the solution is agitated to facilitate contact of the acid with the
organic compound.
15. The process of claim 13 wherein the acid is biodegradable so that any residual acid remaining in
the wastewater can be discharged into the environment.
16. The process of claim 13 wherein the treating step comprises a countercurrent extraction in a
vertical column.
17. The process according to claim 13 wherein the extraction is carried out at a temperature of
between 25° and 60° C.
18. The process according to claim 13 wherein the treating step is repeated at least once to increase the
amount of organic compound to be extracted.
19. The process according to claim 13 wherein the acid comprises n-heptanoic, n-octanoic, 2-
ethylhexanoic, isopropyl acetic, n-nonanoic acid or mixtures thereof.
20. The method of claim 13 wherein the amino acid is 11-aminoundecanoic acid or 12-
aminododecanoic acid, the lactam is dodecanolactam, and the phenol compound is 2,3-
dichlorophenol or ortho-chlorophenol.
21. A process for the extraction of organic compounds of heptylamine, N,N-dimethylbenzylamine,
hexylamine, di-n-hexylamine, 11-aminoundecanoic acid, 12-aminododecanoic acid, dodecanolactam,
2,3-dichlorophenol, ortho-chlorophenol or mixtures thereof from aqueous solutions and/or
suspensions of same, which consists essentially of:
treating said solutions and/or suspensions with a liquid organic carboxylic acid that is immiscible with water at 20° C. in an amount sufficient to form an organic phase containing the organic compounds; and isolating the organic phase from the aqueous phase to extract said organic compounds from said solutions or suspensions.
22. The process of claim 21, wherein the organic acid has a solubility equal to or less than 5 grams per
liter in water at 20° C.
23. The process of claim 21, wherein the acid is n-heptanoic, n-octanoic, 2-ethylhexanoic, isopropyl acetic, n-nonanoic acid or mixtures thereof.
24. The process of claim 21 wherein the solution is agitated to facilitate contact of the acid with the
organic compound.
25. The process of claim 21 wherein the treating step comprises a countercurrent extraction in a
vertical column, wherein the extraction is carried out at a temperature of between 25° and 60° C.
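As a rough illustration of why claims 11 and 18 call for repeating the treating step, the short Python sketch below applies the standard liquid-liquid partition relation. The distribution ratio D and the phase volumes are hypothetical values chosen only to show the trend; the claims themselves give no numerical data.

# Standard relation for successive batch extractions with fresh solvent:
# the fraction of solute left in the aqueous phase after n treatments is
#   (V_aq / (V_aq + D * V_org)) ** n
# D, v_aq and v_org below are assumed, illustrative values only.

def fraction_remaining(D, v_aq, v_org, n):
    """Fraction of the organic compound still in the aqueous phase."""
    return (v_aq / (v_aq + D * v_org)) ** n

D = 4.0       # assumed distribution ratio (organic phase / aqueous phase)
v_aq = 100.0  # volume of aqueous solution, mL
v_org = 50.0  # volume of carboxylic-acid extractant per treatment, mL

for n in (1, 2, 3):
    extracted = 1.0 - fraction_remaining(D, v_aq, v_org, n)
    print("after %d treatment(s): %.1f%% extracted" % (n, 100.0 * extracted))

With these assumed numbers a single treatment removes about two thirds of the solute, while three successive treatments remove over 96%, which is the rationale for repeating the treating step (a countercurrent column, as in claims 5, 16 and 25, achieves a similar effect continuously).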
Q. Explain the term confidence intervals.
Ans. Confidence intervals and hypothesis tests are similar in that they are both inferential methods
that rely on an approximated sampling distribution. Confidence intervals use data from a sample to
estimate a population parameter. Hypothesis tests use data from a sample to test a specified
hypothesis. Hypothesis testing requires that we have a hypothesized parameter.
The simulation methods used to construct bootstrap distributions and randomization distributions
are similar. One primary difference is that a bootstrap distribution is centered on the observed sample statistic, while a randomization distribution is centered on the value in the null hypothesis.
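This centring difference can be illustrated with a short simulation. The Python sketch below is not from the course materials: the sample values and the hypothesized mean (null_mean) are made-up illustrations, using a generic percentile-style bootstrap and a shift-based randomization for a one-sample mean.

import numpy as np

rng = np.random.default_rng(0)
sample = np.array([98.2, 98.6, 97.9, 98.4, 98.0, 98.8, 98.3, 98.1])  # hypothetical data
null_mean = 98.6  # hypothesized population mean

# Bootstrap distribution: resample the observed data with replacement.
# Its centre sits at (approximately) the observed sample mean.
boot_means = np.array([rng.choice(sample, size=sample.size, replace=True).mean()
                       for _ in range(10000)])

# Randomization distribution for a one-sample test: shift the data so that
# the null hypothesis is true, then resample. Its centre sits at null_mean.
shifted = sample - sample.mean() + null_mean
rand_means = np.array([rng.choice(shifted, size=shifted.size, replace=True).mean()
                       for _ in range(10000)])

print("observed mean:       ", round(sample.mean(), 3))
print("bootstrap centre:    ", round(boot_means.mean(), 3))   # near the observed mean
print("randomization centre:", round(rand_means.mean(), 3))   # near null_mean

With these made-up numbers the bootstrap centre falls near the observed mean of about 98.29, while the randomization centre falls near 98.6, which is exactly the distinction described above.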
We learned that confidence intervals contain a range of reasonable estimates of the population parameter. All of the confidence intervals constructed here are two-tailed, and they go hand-in-hand with two-tailed hypothesis tests. The conclusion drawn from a two-tailed confidence interval is usually the same as the conclusion drawn from a two-tailed hypothesis test. In other words, if the 95% confidence interval contains the hypothesized parameter, then a hypothesis test at the 0.05 level will almost always fail to reject the null hypothesis. If the 95% confidence interval does not contain the hypothesized parameter, then a hypothesis test at the 0.05 level will almost always reject the null hypothesis.
Example: Mean
This example uses the Body Temperature dataset built into StatKey for constructing a bootstrap
confidence interval and conducting a randomization test. Let's start by constructing a 95% confidence
interval using the percentile method in StatKey:
[Figure: bootstrap sampling distribution in StatKey showing the 95% confidence interval obtained with the percentile method.]
The 95% confidence interval for the mean body temperature in the population is [98.044, 98.474]. Now, what if we want to know whether there is evidence that the mean body temperature is different from 98.6 degrees? We can conduct a hypothesis test. Because 98.6 is not contained within the 95% confidence interval, it is not a reasonable estimate of the population mean. We should expect to have a p-value
less than 0.05 and to reject the null hypothesis.
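The same reasoning can be reproduced outside StatKey. The sketch below is only illustrative: the actual Body Temperature dataset is not reproduced in this text, so a hypothetical sample of 130 values is simulated with a mean near 98.26 degrees, and the percentile interval and a two-tailed randomization p-value are computed from it.

import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-in for the StatKey Body Temperature data (n = 130),
# simulated so the sample mean lands near 98.26 degrees Fahrenheit.
sample = rng.normal(loc=98.26, scale=0.73, size=130)
null_mean = 98.6

# 95% percentile bootstrap confidence interval for the population mean.
boot_means = np.array([rng.choice(sample, size=sample.size, replace=True).mean()
                       for _ in range(10000)])
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])

# Two-tailed randomization test of H0: mu = 98.6.
shifted = sample - sample.mean() + null_mean
rand_means = np.array([rng.choice(shifted, size=shifted.size, replace=True).mean()
                       for _ in range(10000)])
p_value = np.mean(np.abs(rand_means - null_mean) >= abs(sample.mean() - null_mean))

print("95%% percentile CI: [%.3f, %.3f]" % (ci_low, ci_high))
print("two-tailed p-value vs 98.6: %.4f" % p_value)

Because 98.6 lies well outside the simulated interval, the p-value comes out far below 0.05, matching the conclusion drawn from the StatKey interval [98.044, 98.474].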
Selecting the Appropriate Procedure
The decision of whether to use a confidence interval or a hypothesis test depends on the research
question. If we want to estimate a population parameter, we use a confidence interval. If we are given
a specific population parameter (i.e., a hypothesized value) and want to determine how likely it is that a population with that parameter would produce a sample as extreme as ours, we use a
hypothesis test. Below are a few examples of selecting the appropriate procedure.
