Maths-104
Mathematics
INDEX
Module 1 Determinants
Module 3 Integrals
Module 5 Trigonometry
CONTENTS
Module 1 Determinants
Module 1.1 Properties of the Determinant
Module 1.2 Determinants of Matrices of Higher Order
Module 1.3 Applications of Determinants and Matrices
Module 3 INTEGRALS
Module 3.1 Basic Integration
Module 3.2 Solving Indefinite Integration
Module 3.3 Integrals are Summations
Module 3.4 Area Under The Curve
Module 3.5 Integration as an Inverse Process of Differentiation
Module 3.6 Definite Integrals
Module 3.7 Properties of Indefinite Integration
Module 3.8 Examples of Indefinite Integration
Module 3.9 The Fundamental Theorem of Calculus
Module 3.10 Table of Indefinite Integrals
Module 3.11 The Total Change Theorem
Module 3.12 The Substitution Rule
Module 3.13 Integrals of Symmetric Functions
Module 3.14 Integration By Parts
Module 3.15 Trigonometric Integrals
Module 3.16 Trigonometric Substitution
Module 3.17 Integration of Rational Functions By Partial Fractions
Module 3.18 Comparison between Differentiation and integration
The term 'determinant' was first introduced by Gauss in Disquisitiones arithmeticae (1801) while
discussing quadratic forms. He used the term because the determinant determines the
properties of the quadratic form. However, the concept is not the same as our modern
determinant. In the same work Gauss lays out the coefficients of his quadratic forms in
rectangular arrays. He describes matrix multiplication (which he thinks of as composition so he
has not yet reached the concept of matrix algebra) and the inverse of a matrix in the particular
context of the arrays of coefficients of quadratic forms.
A system of algebraic equations can be expressed in the form of matrices. That is, a
system of linear equations can be written in matrix form:
To every square matrix A = [aij] of order n, we can associate a number (real or complex) called
determinant of the square matrix A, where aij = (i, j)th element of A. This may be thought of as a
function which associates each square matrix with a unique number (real or complex). If M is
the set of square matrices, K is the set of numbers (real or complex) and f : M → K is defined by
f (A) = k, where A ∈ M and k ∈ K, then f (A) is called the determinant of A. It is also denoted by
|A| or det A or Δ.
Determinant of a matrix of order one: Let A = [a11] be a matrix of order 1; then the determinant of
A is defined to be equal to a11.
Determinant of a matrix of order three: a determinant of order 3 can be expanded along any one of its
three rows (R1, R2 and R3) or three columns (C1, C2 and C3), each expansion giving the same value
as shown below.
Consider the determinant of the square matrix A = [aij]3×3.
Step 1 Multiply the first element a11 of R1 by (–1)^(1 + 1) [(–1) raised to the sum of the suffixes
in a11] and by the second order determinant obtained by deleting the elements of the first row (R1)
and first column (C1) of | A |, as a11 lies in R1 and C1, i.e.,
Step 2 Multiply the second element a12 of R1 by (–1)^(1 + 2) [(–1) raised to the sum of the
suffixes in a12] and by the second order determinant obtained by deleting the elements of the
first row (R1) and second column (C2) of | A |, as a12 lies in R1 and C2, i.e.,
Step 3 Multiply the third element a13 of R1 by (–1)^(1 + 3) [(–1) raised to the sum of the
suffixes in a13] and by the second order determinant obtained by deleting the elements of the
first row (R1) and third column (C3) of | A |, as a13 lies in R1 and C3, i.e.,
Step 4 Now the expansion of the determinant of A, that is | A |, written as the sum of all three
terms obtained in Steps 1, 2 and 3 above, is given by
By applying all four steps together we get;
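The four-step expansion above can be sketched in code. This is an illustrative sketch only; the 3×3 matrix below is a made-up example, not one taken from the text:

```python
# Hypothetical 3x3 matrix, used only to illustrate expansion along R1.
A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]

def minor(m, i, j):
    """Delete row i and column j, returning the 2x2 minor matrix."""
    return [[m[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

def det2(m):
    """Determinant of a 2x2 matrix: ad - bc."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Steps 1-4: each element a_{1j} of R1 is multiplied by (-1)^(1+j) and by the
# second order determinant left after deleting R1 and column Cj; the three
# terms are then added together.
detA = sum((-1) ** (0 + j) * A[0][j] * det2(minor(A, 0, j)) for j in range(3))
print(detA)  # -3
```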
Evaluate the determinant:
Answer: Note that in the third column, two entries are zero. So, expanding along the third column
(C3), we get
1. Any matrix A and its transpose have the same determinant, meaning det A = det AT.
This is interesting since it implies that whatever behavior results when we use rows will also
result if we use columns. In particular, we will see how elementary row operations are helpful in
finding the determinant. Therefore, we have similar conclusions for elementary column operations.
2. The determinant of a triangular matrix is the product of the entries on the diagonal, that is
3. If we interchange two rows, the determinant of the new matrix is the opposite of the old one,
that is
4. If we multiply one row with a constant, the determinant of the new matrix is the determinant of
the old one multiplied by the constant, that is
In particular, if all the entries in one row are zero, then the determinant is zero.
5. If we add one row to another one multiplied by a constant, the determinant of the new matrix
is the same as the old one, that is
Note that whenever you want to replace a row by something (through elementary operations),
do not multiply the row itself by a constant. Otherwise, you will easily make errors (due to
Property 4).
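Several of these properties can be checked numerically. The sketch below uses a hand-rolled order-3 determinant and a made-up matrix; nothing here comes from the text's own examples:

```python
def det3(m):
    # Cofactor expansion along the first row (order-3 determinant).
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

A = [[2, 1, 3], [0, 4, 1], [5, 2, 6]]   # hypothetical matrix

# Property 1: det A = det A^T.
At = [list(row) for row in zip(*A)]
assert det3(A) == det3(At)

# Property 3: interchanging two rows flips the sign.
B = [A[1], A[0], A[2]]
assert det3(B) == -det3(A)

# Property 4: multiplying one row by a constant multiplies det by it.
C = [[3 * x for x in A[0]], A[1], A[2]]
assert det3(C) == 3 * det3(A)

# Property 5: adding a multiple of one row to another leaves det unchanged.
D = [[a + 2 * b for a, b in zip(A[0], A[1])], A[1], A[2]]
assert det3(D) == det3(A)

print(det3(A))  # -11
```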
6. We have
Example. Evaluate
Let us transform this matrix into a triangular one through elementary operations. We will keep
the first row and add to the second one the first multiplied by 1/2. We get
Therefore, we have
As we said before, the idea is to assume that the properties satisfied by determinants of
matrices of order 2 are still valid in general. In other words, we assume:
1. Any matrix A and its transpose have the same determinant, meaning
det A = det AT
2. The determinant of a triangular matrix is the product of the entries on the diagonal.
3. If we interchange two rows, the determinant of the new matrix is the opposite of the old one.
4. If we multiply one row with a constant, the determinant of the new matrix is the determinant of
the old one multiplied by the constant.
5. If we add one row to another one multiplied by a constant, the determinant of the new matrix
is the same as the old one.
6. We have
Module 1.3 Applications of Determinants and Matrices
In this section, we shall discuss application of determinants and matrices for solving the system
of linear equations in two or three variables and for checking the consistency of the system of
linear equations.
Consistent system A system of equations is said to be consistent if its solution (one or more)
exists.
Inconsistent system A system of equations is said to be inconsistent if its solution does not
exist.
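As a small illustration of determinants applied to a linear system, here is a hypothetical 2×2 system solved by Cramer's rule; a nonzero determinant of the coefficient matrix signals a consistent system with a unique solution. The system below is made up for illustration:

```python
# A hypothetical 2x2 system, written as AX = B:
#   2x + 3y = 8
#   1x - 1y = -1
A = [[2, 3], [1, -1]]
B = [8, -1]

detA = A[0][0]*A[1][1] - A[0][1]*A[1][0]   # = -5, nonzero => unique solution

# Cramer's rule: replace a column of A by B and take determinants.
detx = B[0]*A[1][1] - A[0][1]*B[1]   # column 1 replaced by B
dety = A[0][0]*B[1] - B[0]*A[1][0]   # column 2 replaced by B
x, y = detx / detA, dety / detA
print(x, y)  # 1.0 2.0
```

If detA were zero, the system would be either inconsistent or have infinitely many solutions, matching the consistency discussion above.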
Module 2 Differential Equations:
Module 2.1 Basic Concepts: The following equations are familiar to us;
Some of these equations involve the independent and/or dependent variables only, while others
involve the variables as well as a derivative of the dependent variable y with respect to the
independent variable x. Such an equation is called a differential equation. In general, an
equation involving derivative (derivatives) of the dependent variable with respect to
independent variable (variables) is called a differential equation. A differential equation
involving derivatives of the dependent variable with respect to only one independent variable
is called an ordinary differential equation, e.g.,
The above equations involve the highest derivative of first, second and third order
respectively.
Module 2.3 Degree of a differential equation
To study the degree of a differential equation, the key point is that the differential equation
must be a polynomial equation in derivatives, i.e., y′, y″, y″′ etc. Consider the following
differential equations:
Example: Find the order and degree, if defined, of each of the following differential
equations:
Answer: (i) The highest order derivative present in the differential equation is dy/dx, so its
order is one. It is a polynomial equation in y′ and the highest power raised to dy/dx is one, so
its degree is one.
(ii) The highest order derivative present in the given differential equation is d2y/dx2, so its
order is two. It is a polynomial equation in d2y/dx2 and dy/dx, and the highest power
raised to d2y/dx2 is one, so its degree is one.
(iii) The highest order derivative present in the differential equation is y’’’ , so its order is
three. The given differential equation is not a polynomial equation in its derivatives and so its
degree is not defined.
Now consider the differential equation d2y/dx2 + y = 0.
In contrast to the first two equations, the solution of this differential equation is a function φ
that will satisfy it, i.e., when the function φ is substituted for the unknown y (dependent
variable) in the given differential equation, the L.H.S. becomes equal to the R.H.S.
The curve y = φ (x) is called the solution curve (integral curve) of the given differential
equation. Consider the function given by
where a, b ∈ R. When this function and its derivative are substituted in equation,
Let a and b be given some particular values, say a = 2 and b = π/4; then we get a particular
function φ1. When this function and its derivative are substituted in the equation, again
L.H.S. = R.H.S. Therefore φ1 is also a solution of the equation.
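The claim that a function of the form a sin(x + b) satisfies d2y/dx2 + y = 0 can be checked numerically. The sketch below uses the particular values a = 2, b = π/4 from the text and a finite-difference second derivative:

```python
import math

# The solution y = a*sin(x + b) of y'' + y = 0, with a = 2, b = pi/4.
a, b = 2.0, math.pi / 4

def phi(x):
    return a * math.sin(x + b)

def second_derivative(f, x, h=1e-4):
    # Central-difference approximation of f''(x).
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# Check that L.H.S. = y'' + y is (numerically) zero at several points.
for x in [0.0, 0.5, 1.0, 2.0]:
    residual = second_derivative(phi, x) + phi(x)
    assert abs(residual) < 1e-5
print("phi satisfies y'' + y = 0 at the sampled points")
```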
Module 2.4 Linear Differential Equations:
The first special case of first order differential equations that we will look at is the linear
first order differential equation. In this case, unlike most of the first order cases that we will look
at, we can actually derive a formula for the general solution. The general solution is derived
below. However, I would suggest that you do not memorize the formula itself. Instead of
memorizing the formula you should memorize and understand the process that I'm going to
use to derive the formula. Most problems are actually easier to work by using the process
instead of using the formula.
So, let's see how to solve a linear first order differential equation. Remember as we go
through this process that the goal is to arrive at a solution that is in the form
In order to solve a linear first order differential equation we MUST start with the differential
equation in the form shown below. If the differential equation is not in this form then the
process we’re going to use will not work.
dy/dt + p(t) y = g(t)        (1)
Where both p(t) and g(t) are continuous functions. Recall that a quick and dirty definition of
a continuous function is that a function will be continuous provided you can draw the graph
from left to right without ever picking up your pencil/pen. In other words, a function is
continuous if there are no holes or breaks in it.
Now, we are going to assume that there is some magical function somewhere out there in
the world, μ(t), called an integrating factor. Do not, at this point, worry about
what this function is or where it came from. We will figure out what μ(t) is once we
have the formula for the general solution in hand.
So, now that we have assumed the existence of μ(t), multiply everything in (1) by μ(t).
Now, this is where the magic of μ(t) comes into play. We are going to assume that
Again, do not worry about how we can find a μ(t) that will satisfy the above equation. As
we will see, provided p(t) is continuous we can find it. So, substituting the above equation into
the first equation, we now arrive at
At this point we need to recognize that the left side of (4) is nothing more than the following
product rule. So we can replace the left side of (4) with this product rule. Upon doing this (4)
becomes
Now, recall that we are after y(t). We can now do something about that. All we need to do is
integrate both sides and then use a little algebra and we'll have the solution. So, integrate both
sides to get
Note the constant of integration, c, from the left side integration is included here. It is vitally
important that this be included. If it is left out you will get the wrong answer every time.
The final step is then some algebra to solve for the solution, y(t).
Now, from a notational standpoint we know that the constant of integration, c, is an unknown
constant and so to make our life easier we will absorb the minus sign in front of it into the
constant and use a plus instead. This will NOT affect the final answer for the solution. So
with this change we have
Again, changing the sign on the constant will not affect our answer. If you choose to keep
the minus sign you will get the same value of c as I do except it will have the opposite sign.
Upon plugging in c we will get exactly the same answer.
There is a lot of playing fast and loose with constants of integration in this section, so you will
need to get used to it. When we do this we will always try to make it very clear what is
going on and try to justify why we did what we did.
So, now that we've got a general solution to the equation below, we need to go back and
determine just what this magical function μ(t) is. This is actually an easier process than you
might think. We'll start with
Divide both sides by μ(t):
Now, hopefully you will recognize the left side of this from your Calculus I class as nothing
more than the following derivative.
As with the process above all we need to do is integrate both sides to get.
You will notice that the constant of integration from the left side, k, has been moved to the
right side and has had the minus sign absorbed into it, again as we did earlier. Also note that
we're using k here because we've already used c, and in a little bit we'll have both of them in
the same equation. So, to avoid confusion we used different letters to represent the fact that
they will, in all probability, have different values.
Now, it’s time to play fast and loose with constants again. It is inconvenient to have the k in
the exponent so we’re going to get it out of the exponent in the following way
Now, let's make use of the fact that k is an unknown constant. If k is an unknown constant
then so is e^k, so we might as well just rename it k and make our life easier. This will give us
the following.
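The derivation above rests on the choice μ(t) = e^(∫p(t) dt): with that μ, the left side μy′ + μpy really is the derivative of μy. The sketch below checks this identity numerically for a hypothetical p(t) = 2 and an arbitrary test function y(t); none of these particular choices come from the text:

```python
import math

# Hypothetical choice p(t) = 2, so the integrating factor is mu(t) = e^(2t).
p = lambda t: 2.0
mu = lambda t: math.exp(2.0 * t)

# Any test function y(t); the identity mu*y' + mu*p*y = (mu*y)' should hold.
y = lambda t: math.sin(t)

def deriv(f, t, h=1e-6):
    # Central-difference approximation of f'(t).
    return (f(t + h) - f(t - h)) / (2 * h)

t = 0.7
lhs = mu(t) * deriv(y, t) + mu(t) * p(t) * y(t)
rhs = deriv(lambda s: mu(s) * y(s), t)
assert abs(lhs - rhs) < 1e-6
print(lhs, rhs)
```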
So, we now have a formula for the general solution and a formula for the integrating factor.
We do have a problem however. We’ve got two unknown constants and the more unknown
constants we have the more trouble we’ll have later on. Therefore, it would be nice if we
could find a way to eliminate one of them (we’ll not be able to eliminate both….).
This is actually quite easy to do. First, substitute the equations and rearrange the
constants.
So, the above equation can be written in such a way that the only place the two
unknown constants show up is a ratio of the two. Then since both c and k are
unknown constants so is the ratio of the two constants. Therefore we’ll just call the
ratio c and then drop k out of the above equation since it will just get absorbed into c
eventually.
Now, the reality is that the formula above is not as useful as it may seem. It is
often easier to just run through the process that produced it rather than using the
formula. We will not use this formula in any of the examples. We will, however, use the
formula for the integrating factor regularly, as it is easier to use than the process that
derives it.
Solution Process
The solution process for a first order linear differential equation is as follows.
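As a minimal sketch of this process, take the hypothetical equation y′ + 2y = 4 (not one of the text's examples). The integrating factor is e^(∫2 dt) = e^(2t), and the process yields the general solution y(t) = 2 + c e^(−2t); the code verifies that this satisfies the equation for several values of c:

```python
import math

# Hypothetical linear first order equation: y' + 2y = 4.
# Integrating factor: mu(t) = e^(2t). Multiply through: (e^(2t) y)' = 4 e^(2t).
# Integrate: e^(2t) y = 2 e^(2t) + c. General solution: y(t) = 2 + c e^(-2t).
def y(t, c):
    return 2.0 + c * math.exp(-2.0 * t)

def deriv(f, t, h=1e-6):
    # Central-difference approximation of f'(t).
    return (f(t + h) - f(t - h)) / (2 * h)

# Check y' + 2y = 4 for a few values of the constant c.
for c in (-1.0, 0.0, 3.0):
    t = 0.5
    residual = deriv(lambda s: y(s, c), t) + 2 * y(t, c) - 4.0
    assert abs(residual) < 1e-6
print("general solution checks out for each c")
```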
Let’s work a couple of examples. Let’s start by solving the differential equation that
we derived back in the Direction Field section
Example Find the solution to the following differential equation.
Answer: First we need to get the differential equation in the correct form.
Note that officially there should be a constant of integration in the exponent from the
integration. However, we can drop that for exactly the same reason that we dropped
the k from the equation given above
Now multiply all the terms in the differential equation by the integrating factor and do
some simplification.
Integrate both sides and don't forget the constants of integration that will arise from
both integrals.
Okay. It’s time to play with constants again. We can subtract k from both sides to
get.
Both c and k are unknown constants and so the difference is also an unknown
constant. We will therefore write the difference as c. So, we now have
From this point on we will only put one constant of integration down when we
integrate both sides knowing that if we had written down one for each integral, as we
should, the two would just end up getting absorbed into each other.
The final step in the solution process is then to divide both sides by the integrating factor, or to
multiply both sides by its reciprocal. Either will work, but I usually prefer the multiplication
route. Doing this gives the general solution to the differential equation.
From the solution to this example we can now see why the constant of integration is
so important in this process. Without it, in this case, we would get a single, constant
solution, v(t)=50. With the constant of integration we get infinitely many solutions,
one for each value of c.
Back in the direction field section where we first derived the differential equation
used in the last example we used the direction field to help us sketch some
solutions. Let's see if we got them correct. To sketch some solutions all we need to
do is to pick different values of c to get a solution. Several of these are shown in the
graph below.
So, it looks like we did pretty well sketching the graphs back in the direction field
section.
Now, recall from the Definitions section that the Initial Condition(s) will allow us to
zero in on a particular solution. Solutions to first order differential equations (not just
linear as we will see) will have a single unknown constant in them and so we will
need exactly one initial condition to find the value of that constant and hence find the
solution that we were after. The initial condition for first order differential equations
will be of the form
Recall as well that a differential equation along with a sufficient number of initial
conditions is called an Initial Value Problem (IVP).
Answer: To find the solution to an IVP we must first find the general solution to the
differential equation and then use the initial condition to identify the exact solution
that we are after. So, since this is the same differential equation as we looked at in
Example 1, we already have its general solution.
Now, to find the solution we are after we need to identify the value of c that will give
us the solution we are after. To do this we simply plug in the initial condition which
will give us an equation we can solve for c. So let's do this
So, the actual solution to the IVP is.
Answer: Rewrite the differential equation to make the coefficient of the derivative
one.
Can you do the integral? If not, rewrite the tangent back into sines and cosines and then
use a simple substitution. Note that we could drop the absolute value bars on the
secant because of the limits on x. In fact, this is the reason for the limits on x.
This is an important fact that you should always remember for these problems. We
will want to simplify the integrating factor as much as possible in all cases and this
fact will help with that simplification.
Now back to the example. Multiply the integrating factor through the differential
equation and verify the left side is a product rule. Note as well that we multiply the
integrating factor through the rewritten differential equation and NOT the original
differential equation. Make sure that you do this. If you multiply the integrating
factor through the original differential equation you will get the wrong solution!
Note the use of the trig formula that made the integral easier.
Next, solve for the solution.
We are now going to start looking at nonlinear first order differential equations. The
first type of nonlinear first order differential equations that we will look at is separable
differential equations.
A separable differential equation is any differential equation that we can write in the
following form.
Note that in order for a differential equation to be separable all the y's in the
differential equation must be multiplied by the derivative and all the x's in the
differential equation must be on the other side of the equal sign.
Solving a separable differential equation is fairly easy. We first rewrite the differential
equation as the following
So, after doing the integrations in the above equation you will have an implicit
solution that you can hopefully solve for the explicit solution, y(x). Note that it won't
always be possible to solve for an explicit solution.
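A quick sketch with a hypothetical separable equation (not one of the text's examples): for dy/dx = xy, separating gives dy/y = x dx, integrating gives ln|y| = x²/2 + c, and the explicit solution is y = C e^(x²/2). The code checks this numerically:

```python
import math

# Hypothetical separable equation: dy/dx = x*y.
# Separate: dy/y = x dx; integrate: ln|y| = x^2/2 + c; explicit: y = C*e^(x^2/2).
def y(x, C=1.5):
    return C * math.exp(x * x / 2.0)

def deriv(f, x, h=1e-6):
    # Central-difference approximation of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.8
# The explicit solution should satisfy y' = x*y.
assert abs(deriv(y, x) - x * y(x)) < 1e-6
print(y(x))
```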
Recall from the Definitions section that an implicit solution is a solution that is not in
the form
while an explicit solution has been written in that form.
We will also have to worry about the interval of validity for many of these solutions.
Recall that the interval of validity was the range of the independent variable, x in this
case, on which the solution is valid. In other words, we need to avoid division by
zero, complex numbers, logarithms of negative numbers or zero, etc. Most of the
solutions that we will get from separable differential equations will not be valid for all
values of x.
Solve the following differential equation and determine the interval of validity for the
solution.
Answer: It is clear, hopefully, that this differential equation is separable. So, let's
separate the differential equation and integrate both sides. As with the linear first
order case, officially we will pick up a constant of integration on both sides from the
integrals on each side of the equal sign. The two can be moved to the same side and
absorbed into each other. We will use the convention that puts the single constant
on the side with the x's.
So, we now have an implicit solution. It is easy enough to get an explicit solution from
this; however, before doing that it is usually easier to find the value of the
constant at this point. So, apply the initial condition and find the value of c.
Plug this into the general solution and then solve to get an explicit solution.
Now, as far as solutions go we’ve got the solution. We do need to start worrying
about intervals of validity however.
Module 2.6 Linear Second Order Equations:
We will be looking exclusively at linear second order differential equations. The most
general linear second order differential equation is of the form.
In fact, we will rarely look at non-constant coefficient linear second order differential
equations. In the case where we assume constant coefficients we will use the
following differential
equation.
Where possible we will use the first equation just to make the point that certain
facts, theorems, properties, and/or techniques can be used with the non-constant
form. However, most of the time we will be using the second equation as it can be
fairly difficult to solve second order non-constant coefficient differential equations.
Initially we will make our life easier by looking at differential equations with g(t) = 0.
When g(t) = 0 we call the differential equation homogeneous, and when g(t) ≠ 0 we
call the differential equation non-homogeneous.
So, let’s start thinking about how to go about solving a constant coefficient,
homogeneous, linear, second order differential equation. Here is the general
constant coefficient, homogeneous, linear, second order differential equation.
It’s probably best to start off with an example. This example will lead us to a very
important fact that we will use in every problem from this point on. The example will
also give us clues into how to go about solving these in general.
Answer: We can get some solutions here simply by inspection. We need functions
whose second derivative is 9 times the original function. One of the first functions
that I can think of that comes back to itself after two derivatives is an exponential
function and with proper exponents the 9 will get taken care of as well.
These two functions are not the only solutions to the differential equation however.
Any of the following are also solutions to the differential equation.
In fact, if you think about it, any function that is of the form
It’s time to start solving constant coefficient, homogeneous, linear, second order
differential equations. So, let’s recap how we do this from the last section. We start
with the differential equation.
Solve the characteristic equation for the two roots, r1 and r2. This gives the two
solutions
Now, if the two roots are real and distinct (i.e. r1 ≠ r2) it will turn out that these
two solutions are “nice enough” to form the general solution
Its roots are r1 = –8 and r2 = –3, and so the general solution and its derivative are.
Now, plug in the initial conditions to get the following system of equations.
Solving this system gives the values of c1 and c2. The actual solution to the differential
equation is then
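The roots r1 = −8 and r2 = −3 above come from the characteristic equation r² + 11r + 24 = 0, i.e. the equation y″ + 11y′ + 24y = 0. The sketch below verifies numerically that y = c1 e^(−8t) + c2 e^(−3t) satisfies it; the constants chosen are arbitrary:

```python
import math

# From roots r1 = -8, r2 = -3: (r+8)(r+3) = r^2 + 11r + 24 = 0,
# i.e. the ODE y'' + 11y' + 24y = 0.
c1, c2 = 2.0, -1.0   # arbitrary constants for the check

def y(t):
    return c1 * math.exp(-8.0 * t) + c2 * math.exp(-3.0 * t)

def d1(f, t, h=1e-5):
    # Central-difference first derivative.
    return (f(t + h) - f(t - h)) / (2 * h)

def d2(f, t, h=1e-4):
    # Central-difference second derivative.
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

t = 0.3
residual = d2(y, t) + 11 * d1(y, t) + 24 * y(t)
assert abs(residual) < 1e-4
print(residual)
```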
Now, recall that we arrived at the characteristic equation by assuming that all
solutions to the differential equation will be of the form
Plugging our two roots into the general form of the solution gives the following
solutions to the differential equation.
Now, these two functions are “nice enough” (there are those words again… we’ll get
around to defining them eventually) to form the general solution. We do have a
problem however. Since we started with only real numbers in our differential
equation we would like our solution to only involve real numbers. The two solutions
above are complex and so we would like to get our hands on a couple of solutions
(“nice enough” of course…) that are real.
Now, split up our two solutions into exponentials that only have real exponents and
exponentials that only have imaginary exponents. Then use Euler’s formula, or its
variant, to rewrite the second exponential.
This doesn’t eliminate the complex nature of the solutions, but it does put the two
solutions into a form from which we can eliminate the complex parts.
Recall from the basics section that if two solutions are “nice enough” then any
solution can be written as a combination of the two solutions. In other words,
Using this let’s notice that if we add the two solutions together we will arrive at.
This is a real solution and just to eliminate the extraneous 2 let’s divide everything by
a 2. This gives the first real solution that we’re after.
Now, we can arrive at a second solution in a similar manner. This time let’s subtract
the two original solutions to arrive at.
On the surface this doesn’t appear to fix the problem as the solution is still complex.
However, upon learning that the two constants, c1 and c2 can be complex numbers
we can arrive at a real solution by dividing this by 2i. This is equivalent to taking
We now have two solutions (we’ll leave it to you to check that they are in fact
solutions) to the differential equation.
It also turns out that these two solutions are “nice enough” to form a general solution.
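The add-and-subtract trick above can be checked directly with complex arithmetic. The values of λ and μ below are made up for illustration; the roots are r = λ ± μi:

```python
import cmath
import math

# Hypothetical complex conjugate roots r = lam ± mu*i.
lam, mu = -1.0, 2.0

def complex_sol(t, sign):
    # e^((lam ± i*mu) t), the two complex solutions from the roots.
    return cmath.exp(complex(lam, sign * mu) * t)

t = 0.4
# Adding the two solutions and dividing by 2 gives e^(lam t) cos(mu t);
# subtracting and dividing by 2i gives e^(lam t) sin(mu t) (Euler's formula).
y1 = (complex_sol(t, +1) + complex_sol(t, -1)) / 2
y2 = (complex_sol(t, +1) - complex_sol(t, -1)) / (2j)
assert abs(y1 - math.exp(lam * t) * math.cos(mu * t)) < 1e-12
assert abs(y2 - math.exp(lam * t) * math.sin(mu * t)) < 1e-12
print(y1.real, y2.real)
```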
The general solution to the differential equation is.
Answer: The characteristic equation for this differential equation and its roots are.
The general solution to this differential equation and its derivative are.
So, the constants drop right out with this system and the actual solution is.
The whole point of this is to notice that systems of differential equations can arise
quite easily from naturally occurring situations. Developing an effective predator-
prey system of differential equations is not the subject of this chapter. However,
systems can arise from nth order linear differential equations as well. Before we get
into this however, let’s write down a system and get some terminology out of the
way.
Now, as mentioned earlier, we can write an nth order linear differential equation as a
system. Let’s see how that can be done.
Write the following 2nd order differential equations as a system of first order, linear
differential equations.
Answer: We can write higher order differential equations as a system with a very
simple change of variable. We’ll start by defining the following two new functions.
Note the use of the differential equation in the second equation. We can also
convert the initial conditions over to the new functions.
Putting all of this together gives the following system of differential equations.
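The change of variables above can be sketched concretely. Using the hypothetical equation y″ + 11y′ + 24y = 0 with x1 = y and x2 = y′, a few lines of Euler's method integrate the resulting first order system (the initial conditions are chosen so the exact solution is e^(−3t)):

```python
import math

# Write y'' + 11y' + 24y = 0 as a first order system with x1 = y, x2 = y':
#   x1' = x2
#   x2' = -24*x1 - 11*x2
def step(x1, x2, h):
    # One Euler step of the system.
    return x1 + h * x2, x2 + h * (-24.0 * x1 - 11.0 * x2)

# Euler's method from y(0) = 1, y'(0) = -3, whose exact solution is e^(-3t).
x1, x2, h = 1.0, -3.0, 1e-4
for _ in range(10000):          # integrate out to t = 1
    x1, x2 = step(x1, x2, h)
print(x1, math.exp(-3.0))       # numerical vs exact y(1)
```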
Module 3 INTEGRALS:
Integration is an important concept in mathematics and, together with its inverse, differentiation,
is one of the two main operations in calculus. Given a function f of a real variable x and an
interval [a, b] of the real line, we can form the definite integral of f over [a, b].
An integral is an infinite sum of infinitesimal values. More simply, a DEFINITE integral
represents the area underneath a curve given by y = f(x).
Module 3.1 Basic Integration: The symbol for integration is a large stretched out S. It may also
carry a lower and upper bound, which specify the interval over which you want the area. Here is a
general formula for integration:
which means on the interval [a, b], integrate the function f(x). The dx is merely notation: it
indicates the variable of integration and, in practical terms, stands for an infinitesimally small width.
So, what exactly is integration doing? Its purpose is to find the area between points a and b under a
curve on a graph. A general integral, by contrast, is not looking for a particular area under a curve.
You will see examples of each of these as we proceed.
An indefinite integral is one where you are not looking for a particular area under a curve but an
antiderivative of a given function. Here is the form for an indefinite integral:
The above F(x) means the antiderivative of the function f(x). The C stands for an arbitrary constant:
since the derivative of F(x) is the original function, the constant term, whatever it is, will
disappear. Here is an example:
Finding the antiderivative of f(x) will produce:
And no matter what constant we add to the antiderivative, it will go away when we take the
derivative of the antiderivative. All of the below mean the exact same thing for the given function
f(x):
When we take the derivative of the F(x) according to each of the corresponding letters above,
we get:
So to summarize, all an indefinite integral asks is that we find an antiderivative of a given
function.
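The point that the added constant disappears under differentiation can be sketched numerically. The integrand below is a made-up example, not the f(x) from the text:

```python
def f(x):
    return 2 * x          # hypothetical integrand

def F(x, C):
    return x * x + C      # antiderivative x^2 plus any constant C

def deriv(g, x, h=1e-6):
    # Central-difference approximation of g'(x).
    return (g(x + h) - g(x - h)) / (2 * h)

# Whatever constant we pick, differentiating F recovers f.
for C in (0.0, 5.0, -3.0):
    assert abs(deriv(lambda x: F(x, C), 2.0) - f(2.0)) < 1e-6
print("the constant disappears under differentiation")
```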
In calculus, integration is used for finding the areas under a curve on a given graph. There is a
lot more to it than that as you will see as these tutorials move forward. One rule of thumb that
you need is to know how to use anti-derivatives. This was reviewed in the previous section of
this manual. Here is a list of very common derivatives of various functions used in calculus:
It is interesting to note that all the trigonometric functions above that begin with a "c" have a
negative derivative. This trick may help you out quite a bit when trying to determine the
derivative in an integral.
Module 3.3 Integrals are Summations: The gravitational force between two masses is
described by the equation:
What if there are more than two masses? What if there were many masses named m1, m2, m3,
m4,... mn and we wanted to know the force at m1? What equation do we use then? Luckily for us,
we can take the vector sum of forces due to the pairs of masses, add them up, and we get the
total force on m1. So we would find the force due to the pairs m1 and m2 (F1), m1 and m3 (F2), m1
and m4 (F3), etc., and add them together.
In the equation above, the right hand side can be represented by the symbol ∑F, which means
"the sum of the forces", Σ, the Greek letter "sigma", being the mathematical symbol for
summation. So the net force is simply the sum of the individual forces.
So what happens if, instead of discrete masses, we have a continuous line of mass? We can still
take the same approach, but we quickly run into a serious problem. Take a look at the image
below:
Let's say we want to find the gravitational force at P due to a line of mass of length L. If we take
the approach above and try to express the total force as a summation of forces from point
masses on the line, we run into a serious problem: there are an infinite number of point masses.
Adding up infinitely many such contributions term by term is clearly not workable, so this
approach won't do. We need a new approach that can sum the forces due to an
infinite number of points on a continuous line of length L. The answer is the definite integral:
So what is a definite integral? A definite integral is the sum of an infinite number of infinitely
small continuous "elements" over a specified range. In our line example above, dx would be a
tiny (infinitely small (1/∞)) line segment with mass λdx (lambda is the mass per unit distance), f(x)
describes how each infinitely small point mass (dm = λdx) affects P (the gravitational equation
from above describes this), and a and b would be the endpoints of the continuous line of point
masses we are summing over, in this case L1 to L2. Notice the integral sign:
It looks like a stretched-out S. This is no accident: the stretched-out S is meant to represent
a continuous sum, just as the Greek Σ is meant to represent a discrete sum.
The value of an integral is often described as "the area under the curve". The curve in this
expression is the function f(x) being integrated. I said above that the integral is a summation, so
why should a summation be equal to the area under a curve? To answer this, let's take a look at
a simple function.
For a simple function, say f(x) = 3, we would have the following values:
f(x)=3
f(1)=3
f(2)=3
f(3)=3
f(4)=3
f(5)=3
etc.
Graphically we can represent this with a straight line like the one below:
So let's say we're interested in summing f(x) over the range x = 0 to x = 5 above. We see that the
value of f(x) is 3 for all values of x, so the sum is 3 + 3 + 3 + 3 + 3 = 15. So "area under the
curve" means the same thing as summing, since the "height" of the function f(x) is simply its
value at a particular point.
Unfortunately, most functions f(x) aren't so easy to add up since they are varying continuously.
Take a look at this example:
The curve in the example above is changing continuously, so for any given x we will have a
slightly different f(x). This makes adding up the values of the function very difficult, even for the
smallest of ranges. So we use a clever trick. Since we can add up discrete values, like in our
f(x) = 3 function above, why don't we just chop the curve up into discrete values that we can add
up? The more we chop it up, the closer we get to an accurate sum. So the most accurate sum
would come from chopping the function up an infinite number of times, which would require
infinitely small cuts. You can see in the example above, using limits as a way of getting to
"infinitely small cuts", that this is exactly what integrals do.
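The chop-it-up-and-sum idea described above can be sketched in a few lines of Python (an illustration added here, not part of the original text; the helper name riemann_sum is mine):

```python
# Hypothetical sketch: approximating a definite integral by chopping [a, b]
# into n equal slices and summing slice_width * f(x) for each slice.
def riemann_sum(f, a, b, n):
    """Left-endpoint Riemann sum of f over [a, b] with n slices."""
    dx = (b - a) / n          # width of each cut; shrinks as n grows
    return sum(f(a + i * dx) for i in range(n)) * dx

# The constant function f(x) = 3 over [0, 5]: every slice has height 3,
# so the sum gives the exact area 15 even for modest n.
approx_const = riemann_sum(lambda x: 3, 0, 5, 1000)

# A continuously varying function, f(x) = x**2 over [0, 1]: more cuts
# bring the sum closer to the exact value 1/3.
coarse = riemann_sum(lambda x: x * x, 0, 1, 10)
fine = riemann_sum(lambda x: x * x, 0, 1, 100000)
```

With only 10 cuts the varying function's sum is noticeably off; with 100000 cuts it agrees with 1/3 to several decimal places, which is the limiting process the integral formalizes.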
An Example
Take a look at the example below. The relation between Electric Potential and charge is:
V=kQ/r
where V is the electric potential, k = 1/(4πε0) (the electrostatic constant), Q is the charge, and r is
the distance from the charge. That's for a point charge, but what if you have a line charge like
the one below?
The trick is to divide the line into little "charge elements" dq and add them up from -a to b. We
can express dq = λ dx. The lambda (λ) is simply charge per unit length, which, when multiplied by
an infinitely small length (dx), gives an infinitely small charge (dq). By summing up all the
contributions of those small charges we get the total potential due to the line of charges.
For simplicity, the following table represents the above equation in terms of:
The formal definition of a definite integral is stated in terms of the limit of a Riemann sum.
Riemann sums are covered in the calculus lectures and in the textbook. For simplicity's sake,
we will use a more informal definition for a definite integral: we will introduce the definite integral
defined in terms of area.
Let f(x) be a continuous function on the interval [a,b]. Consider the area bounded by the curve,
the x-axis and the lines x=a and x=b. The area of the region that lies above the x-axis should be
treated as a positive (+) value, while the area of the region that lies below the x-axis should be
treated as a negative (-) value.
The image below illustrates this concept. The positive area, above the x-axis, is shaded green
and labelled "+", while the negative area, below the x-axis, is shaded red and labelled "-".
The integral of the function f(x) from a to b is equal to the sum of the individual areas bounded
by the function, the x-axis and the lines x=a and x=b. This integral is denoted by
where f(x) is called the integrand, a is the lower limit and b is the upper limit. This type of
integral is called a definite integral. When evaluated, a definite integral results in a real number.
It is independent of the choice of sample points (x, f(x)).
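A quick numerical illustration of this sign convention (an added sketch, not from the text; the left-endpoint sum used here is one arbitrary choice, any fine-grained sum would do):

```python
# Sketch: area below the x-axis counts as negative in a definite integral.
import math

def riemann_sum(f, a, b, n=200000):
    """Left-endpoint Riemann sum of f over [a, b]."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

pos = riemann_sum(math.sin, 0, math.pi)            # sin x above the axis: about +2
neg = riemann_sum(math.sin, math.pi, 2 * math.pi)  # sin x below the axis: about -2
total = riemann_sum(math.sin, 0, 2 * math.pi)      # the signed areas cancel: about 0
```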
Property 1: Constants
Property 2: Exponents
Property 3: Logarithms
Properties 3, 5 and 6 will be shown in greater detail when we get to the techniques of
integration. For now, we will see some general examples of indefinite integration.
Example 1
Example 2
Example 3
We can do some cleaning up in the above antiderivative and get an answer of:
Example 4
The above example now leads to some integrals that can be found easily just by using basic
properties of antiderivatives. These are listed below:
Example 5
Recall that the derivative of e^x is e^x itself, so the antiderivative produces the same result.
Example 6
Example 7
This example may seem tricky at first, but recall that there is still a constant term in the
exponent; here it is -1. Therefore, use the same property as above and get a result of:
Example 8
I can probably bet you said "how on earth do I solve this one?" Well, just look at one of the above
properties, the one for the arctan function. This is actually in the form of an arctan with the value
of a being 3 (3 squared produces the 9). You get an answer of:
The Fundamental Theorem of Calculus defines the relationship between the processes of
differentiation and integration. That relationship is that differentiation and integration are inverse
processes.
The Fundamental Theorem of Calculus:
If f is continuous on [a,b], the definite integral with integrand f(x) and limits a and b is simply
equal to the value of the anti-derivative F(x) at b minus the value of F at a. This property allows
us to easily solve definite integrals, if we can find the antiderivative function of the integrand.
Parts one and two of the Fundamental Theorem of Calculus can be combined and simplified
into one theorem.
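As a sketch of the theorem in action (a toy example of my own, not from the text): for f(x) = 2x with antiderivative F(x) = x^2, a numerical sum over [1, 4] should match F(4) - F(1) = 15.

```python
# Added illustration of the Fundamental Theorem of Calculus:
# numeric integral of f equals F(b) - F(a) for any antiderivative F of f.
def riemann_sum(f, a, b, n=100000):
    """Midpoint-rule Riemann sum of f over [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

a, b = 1.0, 4.0
numeric = riemann_sum(lambda x: 2 * x, a, b)

F = lambda x: x ** 2      # an antiderivative of f(x) = 2x
exact = F(b) - F(a)       # 16 - 1 = 15
```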
Module 3.10 Table of Indefinite Integrals
The following tables list the formulas for antidifferentiation. These formulas allow us to
determine the function that results from an indefinite integral. Since the formulas are for the
most general indefinite integral, we add a constant C to each one. With these formulas and the
Fundamental Theorem of Calculus, we can evaluate simple definite integrals.
The total change theorem is an adaptation of the second part of the Fundamental Theorem of
Calculus. The Total Change Theorem states: the integral of a rate of change is equal to the
total change.
If we know that the function f(x) is the derivative of some function F(x), then the definite integral
of f(x) from a to b is equal to the change in the function F(x) from a to b.
Module 3.12 The Substitution Rule
With our current knowledge of integration, we can't find the general equation of this indefinite
integral. There are no antidifferentiation formulas for this type of integral. However, from our
knowledge of differentiation, specifically the chain rule, we know that 4x^3 is the derivative of the
function within the square root, x^4 + 7. We must also account for the chain rule when we are
performing integration. To do this, we use the substitution rule.
The Substitution Rule states: if u = g(x) is a differentiable function and f is continuous on the
range of g, then
Note: Recall that if u = g(x), then du = g'(x)dx. If we substitute u into the left side of the equation
for g(x), and du for g'(x)dx, then we get the integral on the right side of the equation.
From our previous example, if we let u = x^4 + 7, then du = 4x^3 dx. If we substitute these values
into the integral, we get an integral that can be solved using the antidifferentiation formulas.
However, this answer is still in terms of u. We must substitute u = x^4 + 7 back into the resulting
function, so that it is a function of x rather than of u.
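The result of this substitution can be sanity-checked numerically (an added illustration; the midpoint-rule helper is mine): the antiderivative of 4x^3 √(x^4 + 7) obtained via u = x^4 + 7 is (2/3)(x^4 + 7)^(3/2).

```python
# Added check of the u-substitution result: with u = x**4 + 7, du = 4*x**3 dx,
# the antiderivative of 4x^3 * sqrt(x^4 + 7) is (2/3)*(x^4 + 7)**(3/2).
import math

def riemann_sum(f, a, b, n=200000):
    """Midpoint-rule Riemann sum of f over [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x: 4 * x ** 3 * math.sqrt(x ** 4 + 7)
F = lambda x: (2 / 3) * (x ** 4 + 7) ** 1.5   # antiderivative from the substitution

numeric = riemann_sum(f, 0, 1)
exact = F(1) - F(0)
```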
The substitution rule also applies to definite integrals. The Substitution Rule for Definite
Integrals states: If f is continuous on the range of u = g(x) and g'(x) is continuous on [a,b], then
Module 3.13 Integrals of Symmetric Functions
These properties of integrals of symmetric functions are very helpful when solving integration
problems. Some of the more challenging problems can be solved quite simply by using these
properties.
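As an added check of these symmetry properties (the helper name is mine): an odd function such as x^3 integrates to zero over [-a, a], while an even function such as x^2 gives twice its integral over [0, a].

```python
# Added sketch of the symmetric-function properties:
#   odd f:  integral over [-a, a] is 0
#   even f: integral over [-a, a] is twice the integral over [0, a]
def riemann_sum(f, a, b, n=100000):
    """Midpoint-rule Riemann sum of f over [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

odd_part = riemann_sum(lambda x: x ** 3, -2, 2)    # odd function: should vanish
even_full = riemann_sum(lambda x: x ** 2, -2, 2)   # even function over [-a, a]
even_half = riemann_sum(lambda x: x ** 2, 0, 2)    # even function over [0, a]
```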
Similar to integrals solved using the substitution method, there are no general equations for this
indefinite integral. However there do not appear to be any clear substitutions that could be made
to simplify this integral. This brings us to an integration technique known as integration by
parts, which will call upon our knowledge of the Product Rule for differentiation.
To make it easier to remember, the rule is commonly written in the following notation. Let u = f(x)
and v = g(x). Then the differentials are du = f'(x)dx and dv = g'(x)dx, so by the Substitution Rule,
the formula for integration by parts becomes:
From our previous example, if we let u = x and dv = cos x dx, then du = dx and v = sin x. If we
substitute these values into the formula we have:
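A numerical spot-check of this result (added here, not part of the original text): the antiderivative of x cos x obtained by parts is x sin x + cos x.

```python
# Added check of the integration-by-parts example: with u = x, dv = cos x dx,
# an antiderivative of x*cos(x) is x*sin(x) + cos(x).
import math

def riemann_sum(f, a, b, n=200000):
    """Midpoint-rule Riemann sum of f over [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x: x * math.cos(x)
F = lambda x: x * math.sin(x) + math.cos(x)

numeric = riemann_sum(f, 0, math.pi / 2)
exact = F(math.pi / 2) - F(0)          # pi/2 - 1
```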
The easy mistake is to simply make the substitution u = sin x, but then du = cos x dx. So in order
to integrate powers of sine we need an extra cos x factor; similarly, in order to integrate powers
of cosine we need an extra sin x factor. Thus for this example, knowing we need an extra sin x
factor to integrate powers of cosine, we can separate one sine factor and convert the remaining
sin^4 x to an expression involving cosine using the identity sin^2 x + cos^2 x = 1.
Now, by using our knowledge of substitution, we can evaluate the integral by letting u = cos x,
so that du = -sin x dx.
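As an added check of this strategy: applying u = cos x to sin^5 x gives the antiderivative F(x) = -cos x + (2/3)cos^3 x - (1/5)cos^5 x, which we can verify numerically.

```python
# Added check of the sine-power strategy: save one sin x factor, convert the
# rest with sin^2 x = 1 - cos^2 x, then substitute u = cos x. For sin^5 x the
# resulting antiderivative is F below.
import math

def riemann_sum(f, a, b, n=200000):
    """Midpoint-rule Riemann sum of f over [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x: math.sin(x) ** 5
F = lambda x: -math.cos(x) + (2 / 3) * math.cos(x) ** 3 - (1 / 5) * math.cos(x) ** 5

numeric = riemann_sum(f, 0, math.pi)
exact = F(math.pi) - F(0)              # 16/15
```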
If we were to use the method from the previous example and separate one cosine factor, we
would be left with a cosine factor of odd degree, which isn't easily converted to sine. We must
instead consider the half-angle formulas.
Using the half-angle formula for cos^2 x, we have:
(a) If the power of sine is odd (m = 2k + 1), save one sine factor and use the identity
sin^2 x + cos^2 x = 1 to convert the remaining factors in terms of cosine.
Now that we have learned strategies for solving integrals with factors of sine and cosine, we can
use similar techniques to solve integrals with factors of tangent and secant. Using the identity
sec^2 x = 1 + tan^2 x, we are able to convert even powers of secant to tangent and vice versa.
Now we will consider two examples to illustrate two common strategies used to solve integrals of
the form
Observing that (d/dx) tan x = sec^2 x, we can separate a factor of sec^2 x and still be left with an
even power of secant. Using the identity sec^2 x = 1 + tan^2 x, we can convert the remaining
sec^2 x to an expression involving tangent. Thus we have:
Note: Suppose we tried to use the substitution u = sec x, then du = sec x tan x dx. When we
separate out a factor of sec x tan x we are left with an odd power of tangent, which is not easily
converted to secant.
Since (d/dx) sec x = sec x tan x, we can separate a factor of sec x tan x and still be left with an
even power of tangent, which we can easily convert to an expression involving secant using the
identity sec^2 x = 1 + tan^2 x. Thus we have:
Then substitute u = sec x to obtain:
Note: Suppose we tried to use the substitution u = tan x, then du = sec^2 x dx. When we separate
out a factor of sec^2 x we are left with an odd power of secant, which is not easily converted to
tangent.
(a) If the power of secant is even (n = 2k, k ≥ 2), save a factor of sec^2 x and use the identity
sec^2 x = 1 + tan^2 x to express the remaining factors in terms of tan x.
Integrals of cotangent and cosecant are very similar to those with tangent and secant. It is easy
to see that integrals of this form can be solved by nearly identical methods.
Unlike integrals with factors of both tangent and secant, integrals that have factors of only
tangent, or only secant, do not have a general strategy for solving. Use of trig identities,
substitution and integration by parts are all commonly used to solve such integrals. For
example,
If we make the substitution u=secx, then du=secxtanxdx, and we are left with the simple integral
Another problem that may be encountered when solving trigonometric integrals is integrals of
the form
Using the product formulas, which are deduced from the addition/subtraction rules, we have
the corresponding identities
Sometimes trigonometric substitutions are very effective, even when at first it may not be clear
why such a substitution should be made. For example, when finding the area of a circle or an
ellipse you may have to find an integral of the form ∫√(a^2 - x^2) dx, where a > 0.
It is difficult to make a substitution where the new variable is a function of the old one (for
example, had we made the substitution u = a^2 - x^2, then du = -2x dx, and we are unable to
cancel out the -2x). So we must consider a change of variables where the old variable is a
function of the new one. This is where trigonometric identities are put to use. Suppose we
change the variable from x to θ by making the substitution x = a sin θ. Then, using the trig
identity 1 - sin^2 θ = cos^2 θ, we can simplify the integral by eliminating the root sign.
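As an added numerical check of this substitution's best-known payoff: integrating √(a^2 - x^2) over [-a, a] should give the area of a half-disc, πa^2/2 (the Riemann-sum helper is mine).

```python
# Added check of the x = a*sin(theta) substitution result:
# the integral of sqrt(a**2 - x**2) over [-a, a] is pi*a**2/2.
import math

def riemann_sum(f, a, b, n=400000):
    """Midpoint-rule Riemann sum of f over [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

a = 2.0
numeric = riemann_sum(lambda x: math.sqrt(a * a - x * x), -a, a)
exact = math.pi * a * a / 2            # half-disc area, here 2*pi
```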
By changing x to a function of a different variable we are essentially using the Substitution Rule
in reverse. If x = g(t), then by restricting the boundaries on g we can ensure that g has an
inverse function; that is, g is one-to-one. In the example above we would require
-π/2 ≤ θ ≤ π/2.
If we look at the Substitution Rule and replace u with x and x with t, we obtain
Integration of rational functions by partial fractions is a fairly simple integration technique used
to split one rational function into two or more rational functions which are more easily
integrated.
Think back to the steps taken when adding or subtracting fractions that do not have the same
denominator: first you find the lowest common multiple of the two denominators and then
cross-multiply with the numerators accordingly, e.g.,
Well, the same process applies when dealing with polynomial fractions, e.g.,
Now, by reversing this process, we can split a function such as this into two fractions
This process is possible when the function is proper; that is, the degree of the numerator is less
than the degree of the denominator. If the function is improper, that is, the degree of the
numerator is greater than or equal to the degree of the denominator, then we must first use long
division to divide the denominator into the numerator until we obtain a remainder whose degree
is less than that of the denominator. Then, if possible, the above process is used to split the
resulting proper function.
To complete some of the problems in this section it will be useful to know the table integral
In general there are four cases to consider when expressing a rational function as the sum of
two or more partial fractions.
Case 1
The denominator is a product of distinct linear factors (no factor is repeated or is a constant
multiple of another).
For example,
Since the degree of the numerator is less than the degree of the denominator we don't need to
divide. The denominator can be factored as follows:
Since the denominator has distinct linear factors we can write the rational fraction as the sum of
two or more partial fractions as follows:
From this equation we can match terms of the same degree to determine the coefficients by
solving the following system of equations:
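The coefficient-matching step can be verified mechanically. The sketch below uses a hypothetical example of my own, (x + 5)/((x - 1)(x + 2)), since the original worked example's equations were lost in extraction; matching coefficients in x + 5 = A(x + 2) + B(x - 1) gives the system A + B = 1 and 2A - B = 5.

```python
# Added sketch (hypothetical example): decompose (x + 5)/((x - 1)(x + 2))
# as A/(x - 1) + B/(x + 2) by matching coefficients exactly with Fractions.
from fractions import Fraction

# System from matching coefficients: A + B = 1 and 2A - B = 5.
# Adding the two equations eliminates B: 3A = 6.
A = Fraction(6, 3)            # A = 2
B = Fraction(1) - A           # B = -1

def lhs(x):
    """Original rational function, evaluated exactly."""
    return Fraction(x + 5, (x - 1) * (x + 2))

def rhs(x):
    """Partial-fraction decomposition A/(x-1) + B/(x+2)."""
    return A / (x - 1) + B / (x + 2)

# Spot-check the identity at a few points away from the poles x = 1, x = -2.
checks = [lhs(x) == rhs(x) for x in (0, 3, 10, -5)]
```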
Case 2
The denominator is a product of linear factors, some of which are repeated.
For example,
Since the degree of the numerator is greater than the degree of the denominator, we must first
divide using long division.
Since the linear factor (x-2) occurs twice, the partial fraction decomposition is:
From this equation we can match terms of the same degree to determine the coefficients by
solving the following system of equations:
Case 3
The denominator contains irreducible quadratic factors, none of which are repeated.
When reducing such functions to partial fractions, if there is a term in the denominator of the
form ax^2 + bx + c, where b^2 - 4ac < 0, then the numerator for that partial fraction will be of the
form Ax + B.
For example,
Since the degree of the numerator is less than the degree of the denominator we do not have to
divide first.
From this equation we can match terms of the same degree to determine the coefficients by
solving the following system of equations:
Case 4
Functions of this form are the same as those in case 3 only there is a term in the denominator
that is repeated or is a constant multiple of another.
For example,
If we were to expand the denominator, we would see that its degree is greater than the degree
of the numerator, so we do not have to divide first.
From this equation we can match terms of the same degree to determine the coefficients by
solving the following system of equations:
Module 3.18 Comparison between Differentiation and Integration:
3. We have already seen that not all functions are differentiable. Similarly, not all functions are
integrable. We will learn more about non-differentiable functions and non-integrable functions in
higher classes.
4. The derivative of a function, when it exists, is a unique function. The integral of a function is
not; however, integrals are unique up to an additive constant, i.e., any two integrals of a function
differ by a constant.
6. We can speak of the derivative at a point. We never speak of the integral at a point, we speak
of the integral of a function over an interval on which the integral is defined.
7. The derivative of a function has a geometrical meaning, namely, the slope of the tangent to
the corresponding curve at a point. Similarly, the indefinite integral of a function represents
geometrically, a family of curves placed parallel to each other having parallel tangents at the
points of intersection of the curves of the family with the lines orthogonal (perpendicular) to the
axis representing the variable of integration.
8. The derivative is used for finding some physical quantities like the velocity of a moving
particle, when the distance traversed at any time t is known. Similarly, the integral is used in
calculating the distance traversed when the velocity at time t is known.
Example 2: Find the following integrals:
Module 4 Permutation & Combination:
Combination: Combination means selection of things. The word selection is used when the
order of things has no importance.
Permutation: An arrangement that can be formed by taking some or all of a finite set of things
(or objects) is called a permutation. Order of the things is very important in the case of
permutations. A permutation is said to be a linear permutation if the objects are arranged in a
line; a linear permutation is simply called a permutation. A permutation is said to be a circular
permutation if the objects are arranged in the form of a circle.
The number of (linear) permutations that can be formed by taking r things at a time from a set of
n distinct things (r ≤ n) is denoted by nPr.
The number of permutations of n different things, taken all at a time, when m specified things
always come together, is m! × (n - m + 1)!.
The number of permutations of n dissimilar things taken r at a time when k(< r)
particular things always occur is
The number of permutations of n different things, taken not more than r at a time, when each
thing may occur any number of times, is n + n^2 + n^3 + ... + n^r.
Module 4.3 Permutations when all the objects are distinct
Theorem 1: The number of permutations of n different objects taken r at a time, where 0 < r ≤ n
and the objects do not repeat, is n(n – 1)(n – 2) ... (n – r + 1), which is denoted by nPr.
Proof: There will be as many permutations as there are ways of filling in r vacant places
__ __ __ ... __ (r vacant places)
by the n objects. The first place can be filled in n ways; following which, the second place
can be filled in (n – 1) ways, following which the third place can be filled in (n – 2) ways, ..., the
rth place can be filled in (n – (r – 1)) ways. Therefore, the number of ways of filling in r vacant
places in succession is n(n – 1)(n – 2) ... (n – (r – 1)), or n(n – 1)(n – 2) ... (n – r + 1).
This expression for nPr is cumbersome and we need a notation which will help to reduce the
size of this expression. The symbol n! (read as factorial n or n factorial ) comes to our rescue. In
the following text we will learn what actually n! means.
Factorial notation: The notation n! represents the product of first n natural numbers, i.e., the
product 1 × 2 × 3 × . . . × (n – 1) × n is denoted as n!. We read this symbol as ‘n factorial’. Thus,
1 × 2 × 3 × 4 . . . × (n – 1) × n = n !
1 = 1!
1 × 2 = 2!
1 × 2 × 3 = 3!
1 × 2 × 3 × 4 = 4! and so on.
We define 0 ! = 1
We can write 5 ! = 5 × 4 ! = 5 × 4 × 3 ! = 5 × 4 × 3 × 2 !
= 5 × 4 × 3 × 2 × 1!
(i) 5 ! = 1 × 2 × 3 × 4 × 5 = 120
(ii) 7 ! = 1 × 2 × 3 × 4 × 5 × 6 ×7 = 5040
and (iii) 7 ! – 5! = 5040 – 120 = 4920.
Example: Suppose we have to form a number consisting of three digits using the digits
1, 2, 3, 4. To form this number, the digits have to be arranged. Different numbers will be formed
depending upon the order in which we arrange the digits. This is an example of a permutation.
Now suppose that we have to make a team of 11 players out of 20 players. This is an example
of a combination, because the order of players in the team will not result in a change in the
team. No matter in which order we list the players, the team will remain the same! For a
different team to be formed, at least one player will have to be changed.
Addition rule: If an experiment can be performed in 'n' ways, and another experiment can be
performed in 'm' ways, then either of the two experiments can be performed in (m + n) ways.
This rule can be extended to any finite number of experiments.
Example: Suppose there are 3 doors in a room, 2 on one side and 1 on the other side. A man
wants to go out of the room. Obviously he has 3 options for it: he can go out by door 'A', door 'B'
or door 'C'.
Multiplication rule: If one work can be done in m ways, and another work can be done in 'n'
ways, then both of the operations can be performed in m × n ways. It can be extended to any
finite number of operations.
Example: Suppose a man wants to cross a room which has 2 doors on one side and 1 door on
the other side. He has 2 × 1 = 2 ways to do it.
Ex. 5! = 5 × 4 × 3 × 2 × 1 = 120
Note: 0! = 1. Indeed, (n-1)! = [n × (n-1)!]/n = n!/n.
Putting n = 1, we have
0! = 1!/1
or 0! = 1
Permutation
Number of permutations of 'n' different things taken 'r' at a time is given by:
nPr = n!/(n-r)!
Clearly the first place can be filled up in 'n' ways. Number of things left after filling up the
first place = n - 1, so the second place can be filled up in (n-1) ways. Now the number of things
left after filling up the first and second places = n - 2.
Thus the number of ways of filling up the first place = n.
By the multiplication rule of counting, the total number of ways of filling up the first, second, ...,
rth places together is:
nPr = n(n-1)(n-2) ... (n-r+1)
Hence:
nPr = n!/(n-r)!
Number of permutations of ‘n’ different things taken all at a time is given by:-
n
Pn = n!
Proof :
Now we have ‘n’ objects, and n-places.
n
Pn = n!
70
Concept: We have nPr = n!/(n-r)!. Putting r = n, we have:
nPn = n!/0!
But nPn = n!, so n!/0! = n!, which again gives 0! = 1.
Examples
Q. How many different signals can be made by 5 flags from 8 flags of different colours?
Ans. 8P5 = 8!/(8-5)! = 8 × 7 × 6 × 5 × 4 = 6720
Q. How many words can be made by using the letters of the word "SIMPLETON" taken all at
a time?
Ans. 9P9 = 9! = 362880.
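These permutation counts are easy to double-check with Python's standard library (an added aside; math.perm computes nPr = n!/(n-r)! directly):

```python
# Added verification of the two worked answers above using math.perm.
import math

signals = math.perm(8, 5)   # order 5 flags chosen from 8 colours
words = math.perm(9, 9)     # arrange all 9 distinct letters of "SIMPLETON"
```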
The number of permutations of n things, taken all at a time, in which 'p' are of one type, 'q' of
them are of a second type, 'r' of them are of a third type, and the rest are all different, is given
by:
n!/(p! × q! × r!)
Example: In how many ways can the letters of the word "Pre-University" be arranged?
Ans. 13!/(2! × 2! × 2!)
Number of permutations of n-things, taken ‘r’ at a time when each thing can be repeated
r-times is given by = nr.
Proof.
Hence total number of ways in which first, second ----r th, places can be filled-up
= n x n x n ------------- r factors.
= nr
Example: A child has 3 pockets and 4 coins. In how many ways can he put the coins in his
pockets?
Answer: The first coin can be put in 3 ways; similarly the second, third and fourth coins can
each be put in 3 ways. Total: 3 × 3 × 3 × 3 = 3^4 = 81 ways.
Circular Permutations:
(a) If clockwise and anti-clockwise orders are different, then the total number of
circular permutations is given by (n-1)!
(b) If clockwise and anti-clockwise orders are taken as not different, then the total
number of circular permutations is given by (n-1)!/2
Proof (a): Let's consider that 4 persons A, B, C and D are sitting around a round table. If the 4
persons are shifted one seat around the table, the four arrangements obtained are the same
circular arrangement, because the cyclic sequence A, B, C, D is unchanged. But if A, B, C, D
are sitting in a row and are shifted, the four linear arrangements are different.
Hence if we have 4 things, then for each circular arrangement the number of linear
arrangements = 4. Similarly, if we have 'n' things, then for each circular arrangement, the
number of linear arrangements = n.
Total number of linear arrangements = n × (number of circular arrangements), i.e.
n! = n × (number of circular arrangements).
Hence the number of circular arrangements = n!/n = (n-1)!
Proof (b): When clockwise and anti-clockwise arrangements are not different, an arrangement
can be observed from both sides, and the two readings are the same. Here two permutations
are counted as one, so the total number of permutations is halved. Hence in this case:
Circular permutations = (n-1)!/2
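The (n-1)! count can be confirmed by brute force (an added sketch; the canonical-rotation trick and the helper name are mine):

```python
# Added brute-force check of the (n-1)! formula: count distinct circular
# arrangements by reducing every linear arrangement to the rotation that
# puts item 0 first, then counting the distinct canonical forms.
from itertools import permutations

def circular_arrangements(n):
    seen = set()
    for p in permutations(range(n)):
        k = p.index(0)                 # rotate so that item 0 leads
        seen.add(p[k:] + p[:k])
    return len(seen)

count4 = circular_arrangements(4)   # expect (4-1)! = 6
count5 = circular_arrangements(5)   # expect (5-1)! = 24
```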
(a) If clockwise and anti-clockwise orders are taken as different, then the total number of
circular permutations of n things taken r at a time = nPr/r
(b) If clockwise and anti-clockwise orders are taken as not different, then the total
number of circular permutations = nPr/2r
Example: How many necklaces of 12 beads each can be made from 18 beads of different
colours?
Ans. 18P12/(2 × 12) = 18!/(6! × 24)
Restricted – Permutations
(a) Number of permutations of 'n' things, taken 'r' at a time, when a particular thing is
always included in each arrangement = r × n-1Pr-1
(b) Number of permutations of 'n' things, taken 'r' at a time, when a particular thing is
fixed in a given place = n-1Pr-1
(c) Number of permutations of 'n' things, taken 'r' at a time, when a particular thing is
never taken = n-1Pr
(d) Number of permutations of 'n' things, taken all at a time, when 'm' specified things
always come together = m! × (n-m+1)!
(e) Number of permutations of 'n' things, taken all at a time, when 'm' specified things
never come together = n! - [m! × (n-m+1)!]
Example: How many words can be formed with the letters of the word 'OMEGA' when (i) 'O'
and 'A' occupy the end places, (ii) the three vowels occupy the odd places, and (iii) the vowels
never come together?
Ans.
(i) 'O' and 'A' can be arranged in the two end places in 2! ways, and the remaining three letters
in the middle places in 3! ways, giving 2! × 3! = 12 ways.
(ii) The three vowels (O, E, A) can be arranged in the odd places (1st, 3rd and 5th) in 3! ways,
and the two consonants (M, G) in the even places (2nd and 4th) in 2! ways, giving
3! × 2! = 12 ways.
(iii) Treating the three vowels as a single block, the number of words in which the vowels come
together is 3! × 3! = 36 ways. The total number of words is 5! = 120, so the vowels never come
together in 120 - 36 = 84 ways.
Number of combinations of 'n' different things, taken 'r' at a time, is given by:
nCr = n!/(r! × (n-r)!)
Proof: Each combination consists of 'r' different things, which can be arranged among
themselves in r! ways.
=> Total number of permutations = r! × nCr ----(1)
But this total is also nPr ----(2)
From (1) and (2):
nPr = r! × nCr
or n!/(n-r)! = r! × nCr
or nCr = n!/(r! × (n-r)!)
Note also that nCn-r = n!/((n-r)! × r!) = nCr.
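The identity nPr = r! × nCr that drives this derivation can be checked mechanically (an added aside using the standard-library helpers math.perm and math.comb):

```python
# Added verification that nPr = r! * nCr for a range of n and r.
import math

identity_holds = all(
    math.perm(n, r) == math.factorial(r) * math.comb(n, r)
    for n in range(1, 12) for r in range(n + 1)
)
ten_choose_four = math.comb(10, 4)   # 10!/(4! * 6!) = 210
```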
Restricted – Combinations
(a) Number of combinations of 'n' different things taken 'r' at a time, when 'p'
particular things are always included = n-pCr-p
(b) Number of combinations of 'n' different things, taken 'r' at a time, when 'p'
particular things are always excluded = n-pCr
Example: In how many ways can a cricket eleven be chosen out of 15 players, if (i) a particular
player is always chosen, (ii) a particular player is never chosen?
Answer:
(i) A particular player is always chosen; this means that 10 players are selected out of
the remaining 14 players.
=> Required number of ways = 14C10 = 14C4 = 14!/(4! × 10!) = 1001
(ii) A particular player is never chosen; this means that 11 players are selected out of the 14
remaining players.
=> Required number of ways = 14C11 = 14!/(11! × 3!) = 364
(iii) The number of ways of selecting zero or more things from 'n' different things is 2^n, since
each thing may either be selected or left out. Excluding the empty selection (nC0 = 1):
=> Total number of ways of selecting one or more things out of n different things
= 2^n - 1
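The 2^n - 1 count can be confirmed by direct enumeration (an added check; the helper name nonempty_selections is mine):

```python
# Added brute-force check: n different things admit 2**n - 1 non-empty
# selections, counted here by enumerating every subset size directly.
from itertools import combinations

def nonempty_selections(items):
    items = list(items)
    return sum(1 for r in range(1, len(items) + 1)
               for _ in combinations(items, r))

count4 = nonempty_selections("abcd")    # 2**4 - 1 = 15
count8 = nonempty_selections(range(8))  # John's 8 friends: 2**8 - 1 = 255
```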
Example: John has 8 friends. In how many ways can he invite one or more of them to dinner?
Ans. John can select one or more of his 8 friends in 2^8 - 1 = 255 ways.
(iv) The number of ways of selecting zero or more things from 'n' identical things is n + 1.
Example: In how many ways can zero or more letters be selected from the letters AAAAA?
Ans. Selecting zero, one, two, three, four or five 'A's can each be done in exactly 1 way, so the
total is 5 + 1 = 6 ways.
(V) The number of ways of selecting one or more things from 'p' identical things of one type,
'q' identical things of another type, 'r' identical things of a third type and 'n' different things is
given by:
(p+1)(q+1)(r+1) × 2^n - 1
Example: Find the number of different choices that can be made from 3 apples, 4 bananas and
5 mangoes, if at least one fruit is to be chosen.
Answer: Here p = 3, q = 4 and r = 5, so the number of choices
= (3+1)(4+1)(5+1) - 1 = 120 - 1 = 119
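The fruit count can also be brute-forced (an added check of the (p+1)(q+1)(r+1) - 1 formula):

```python
# Added check: choose 0..3 apples, 0..4 bananas and 0..5 mangoes,
# requiring at least one fruit overall.
choices = sum(1
              for apples in range(4)     # 0..3
              for bananas in range(5)    # 0..4
              for mangoes in range(6)    # 0..5
              if apples + bananas + mangoes > 0)
```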
(VI) Number of ways of selecting ‘r’ things from ‘n’ identical things is ‘1’.
Example: In how many ways 5 balls can be selected from ‘12’ identical red balls?
Ans. The balls are identical, total number of ways of selecting 5 balls = 1.
Example: How many numbers of four digits can be formed with the digits 1, 2, 3, 4 and 5 (no
digit repeated)?
Required number = 5P4 = 5!/(5-4)! = 5 × 4 × 3 × 2 = 120
(a) Number of permutations of ‘n’ things, taken ‘r’ at a time, when a particular thing is
to be always included in each arrangement
= r n-1 Pr-1
(b) Number of permutations of ‘n’ things, taken ‘r’ at a time, when a particular thing is
fixed: = n-1 Pr-1
(c) Number of permutations of ‘n’ things, taken ‘r’ at a time, when a particular thing is
never taken: = n-1 Pr.
(d) Number of permutations of ‘n’ things, taken ‘r’ at a time, when ‘m’ specified things
always come together = m! x ( n-m+1) !
(e) Number of permutations of ‘n’ things, taken all at a time, when ‘m’ specified things
always come together = n ! - [ m! x (n-m+1)! ]
Example: How many words can be formed with the letters of the word ‘OMEGA’ when:
Ans.
80
(i) When ‘O’ and ‘A’ occupying end-places
(iii) Three vowels (O,E,A,) can be arranged in the odd-places (1st, 3rd and 5th) =
3! ways.
And two consonants (M,G,) can be arranged in the even-place (2nd, 4th)
= 2 ! ways
= 36 ways
= 120-36 = 84 ways.
81
Number of Combination of ‘n’ different things, taken ‘r’ at a time is given by:-
n
Cr= n! / r ! x (n-r)!
Proof: Each combination consists of ‘r’ different things, which can be arranged among
themselves in r! ways.
n
=> Total number of permutations = r! Cr ---------------(1)
= nPr -------(2)
n
Pr = r! . nCr
or n!/(n-r)! = r! . nCr
n
or Cr = n!/r!x(n-r)!
n
or Cr = n!/r!x(n-r)! and nCn-r = n!/(n-r)!x(n-(n-r))!
= n!/(n-r)!xr!
Restricted – Combinations
(a) Number of combinations of ‘n’ different things taken ‘r’ at a time, when ‘p’
particular things are always included = n-pCr-p.
(b) Number of combination of ‘n’ different things, taken ‘r’ at a time, when ‘p’
particular things are always to be excluded = n-pCr
82
Example: In how many ways can a cricket-eleven be chosen out of 15 players? if
Ans: (i) A particular player is always chosen, it means that 10 players are selected
out of the remaining 14 players.
14
=. Required number of ways = C10 = 14C4
= 14!/4!x19! = 1365
(ii) A particular players is never chosen, it means that 11 players are selected out of 14
players.
14
=> Required number of ways = C11
= 14!/11!x3! = 364
(iii) Number of ways of selecting zero or more things from ‘n’ different things is given
by:- 2n-1
=>Total number of ways of selecting one or more things out of n different things
= 2n – 1 [ nC0=1]
83
Example: John has 8 friends. In how many ways can he invite one or more of them to
dinner?
Ans. John can select one or more than one of his 8 friends.
(iv) Number of ways of selecting zero or more things from ‘n’ identical things is given by
:- n+1
Example: In how many ways, can zero or more letters be selected form the letters
AAAAA?
(V) Number of ways of selecting one or more things from ‘p’ identical things of one
type ‘q’ identical things of another type, ‘r’ identical things of the third type and ‘n’
different things is given by :-
84
Example: Find the number of different choices that can be made from 3 apples, 4
bananas and 5 mangoes, if at least one fruit is to be chosen.
Ans:
=> 2n = 20=1
(VI) Number of ways of selecting ‘r’ things from ‘n’ identical things is ‘1’.
Example: In how many ways 5 balls can be selected from ‘12’ identical red balls?
Ans. The balls are identical, total number of ways of selecting 5 balls = 1.
Example: How many numbers of four digits can be formed with the digits 1, 2, 3, 4
and 5 (no digit repeated)?
Ans: Required number is 5P4 = 5!/(5-4)! = 5!/1! = 5 x 4 x 3 x 2 x 1 = 120.
Module 5 Trigonometry
The word ‘trigonometry’ is derived from the Greek words ‘trigon’ and ‘metron’ and it
means ‘measuring the sides of a triangle’. The subject was originally developed to
solve geometric problems involving triangles. It was studied by sea captains for
navigation, by surveyors to map out new lands, by engineers and others. Currently,
trigonometry is used in many areas such as the science of seismology, designing
electric circuits, describing the state of an atom, predicting the heights of tides in the
ocean, analysing a musical tone and in many other areas.
Figure A
Module 5.1 Angles: An angle is the measure of rotation of a ray about its initial
point. The original position of the ray is called the initial side and the final position
of the ray after rotation is called the terminal side of the angle. The point of rotation
is called the vertex. If the direction of rotation is anticlockwise, the angle is said to
be positive and if the direction of rotation is clockwise, then the angle is negative
(Figure A).
The measure of an angle is the amount of rotation performed to get the terminal side
from the initial side. There are several units for measuring angles. The definition of
an angle suggests a unit, viz. one complete revolution from the position of the initial
side, as indicated in Figure B.
Figure B
This is often convenient for large angles. For example, we can say that a rapidly
spinning wheel is making an angle of, say, 15 revolutions per second. We shall
describe two other units of measurement of an angle which are most commonly
used, viz. degree measure and radian measure.
Degree measure: If a rotation from the initial side to the terminal side is [1/360]th of
a revolution, the angle is said to have a measure of one degree, written as 1°. One
sixtieth of a degree is called a minute, written as 1′, and one sixtieth of a minute is
called a second, written as 1″. Thus, 1° = 60′ and 1′ = 60″. Some of the angles whose
measures are 360°, 180°, 270°, 420°, –30° and –420° are shown in Figure C.
Figure C
Figure D
Module 5.2 Radian measure: There is another unit for measurement of an angle,
called the radian measure. Angle subtended at the centre by an arc of length 1 unit
in a unit circle (circle of radius 1 unit) is said to have a measure of 1 radian. In the
Figure D(i) to (iv), OA is the initial side and OB is the terminal side. The figures show
the angles whose measures are 1 radian, –1 radian, 1½ radian and –1½ radian. We
know that the circumference of a circle of radius 1 unit is 2π. Thus, one complete
revolution of the initial side subtends an angle of 2π radian. More generally, in a
circle of radius r, an arc of length r will subtend an angle of 1 radian. It is well known
that equal arcs of a circle subtend equal angles at the centre. Since in a circle of
radius r an arc of length r subtends an angle of 1 radian, an arc of length l will
subtend an angle whose measure is l/r radian. Thus, if in a circle of radius r an arc
of length l subtends an angle θ radian at the centre, we have θ = l/r or l = rθ.
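The relation θ = l/r can be sketched as two small helper functions (the names central_angle and arc_length are illustrative, not from the text):

```python
import math

def central_angle(arc_length_units, radius):
    """theta = l / r, in radians."""
    return arc_length_units / radius

def arc_length(theta, radius):
    """l = r * theta, with theta in radians."""
    return radius * theta

# An arc equal in length to the radius subtends exactly 1 radian.
assert central_angle(5.0, 5.0) == 1.0
# A full circle of radius 1: arc length 2*pi subtends 2*pi radians.
assert math.isclose(central_angle(2 * math.pi, 1.0), 2 * math.pi)
```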
Module 5.3 Relation between radian and real numbers: Consider the unit circle
with centre O. Let A be any point on the circle. Consider OA as initial side of an
angle. Then the length of an arc of the circle will give the radian measure of the
angle which the arc will subtend at centre of the circle. Consider the line PAQ which
is tangent to the circle at A. Let the point A represent the real number zero, let AP
represent positive real numbers and let AQ represent negative real numbers. If we
rope the line AP in the anticlockwise direction along the circle, and AQ in the
clockwise direction, then every real number will correspond to a radian measure and
conversely. Thus, radian measures and real numbers can be considered as one and
the same (Figure E).
Figure E
Module 5.4 Relation between degree and radian: Since a circle subtends at the
centre an angle whose radian measure is 2π and its degree measure is 360°, it
follows that 2π radian = 360°, or π radian = 180°. This relation enables us to
express a radian measure in terms of degree measure and a degree measure in
terms of radian measure (Figure E). Using the approximate value of π as 22/7, we have:
1 radian = 180°/π = 57°16′ (approximately)
1° = π/180 radian = 0.01746 radian (approximately)
The relation between the degree measures and radian measures of some common
angles is given in the following table:
Degree:  30°    45°    60°    90°    180°   270°   360°
Radian:  π/6    π/4    π/3    π/2    π      3π/2   2π
Module 5.5 Notational Convention: Since angles are measured either in degrees
or in radians, we adopt the convention that whenever we write angle θ°, we mean
the angle whose degree measure is θ and whenever we write angle β, we mean the
angle whose radian measure is β.
Note that when an angle is expressed in radians, the word ‘radian’ is frequently
omitted. Thus, π radian = 180° and π/4 radian = 45° are written simply as π = 180°
and π/4 = 45°, with the understanding that π and π/4 are radian measures. Thus,
we can say that:
Radian measure = (π/180) x Degree measure
Degree measure = (180/π) x Radian measure
E.g., 1 Convert 40°20′ into radian measure. We know that 180° = π radian. Now
40°20′ = (121/3)°, so its radian measure = (π/180) x (121/3) = 121π/540 radian.
E.g., 2 Convert 6 radians into degree measure. We know that π radian = 180°, so
6 radians = (180/π) x 6 degree = (1080 x 7)/22 degree = 343°38′11″ (approximately).
Find the radius of the circle in which a central angle of 60° intercepts an arc of length
37.4 cm (use π = 22/7). Here θ = 60° = π/3 radian and l = 37.4 cm, so
r = l/θ = (37.4 x 3)/π = (37.4 x 3 x 7)/22 = 35.7 cm.
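These degree-radian conversions are exactly what Python's math.radians and math.degrees implement, which gives a quick way to compute the radius in this kind of problem (using the exact value of π rather than 22/7):

```python
import math

# pi radian = 180 degrees; math.radians / math.degrees apply this relation.
assert math.isclose(math.radians(180), math.pi)
assert math.isclose(math.degrees(math.pi / 4), 45.0)

# Radius whose 60-degree central angle intercepts an arc of 37.4 cm: r = l / theta
r = 37.4 / math.radians(60)
print(round(r, 1))  # 35.7
```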
E.g., 3 If arcs of the same length in two circles subtend angles 65° and 110° at
the centre, find the ratio of their radii. Since l = r1 x θ1 = r2 x θ2, we get
r1/r2 = 110/65, i.e., r1 : r2 = 22 : 13.
Module 5.6 Trigonometric Functions: There are six trigonometric functions: sine,
cosine, tangent, cotangent, secant, and cosecant; abbreviated as sin, cos, tan,
cot, sec, and csc respectively. These are functions of a single real variable that is
normally an angle measurement given in terms of radians or degrees.
Consequently, values such as sin 2.7 rad or tan 33° (read: the sine of 2.7 radians or
the tangent of 33 degrees) often appear in trigonometric expressions. In the first
case, the radian identifier (rad) is frequently suppressed for simplicity and sin 2.7 rad
is shortened to sin 2.7. When variables such as t or α (α is the Greek letter alpha)
denote angles, measurement identifiers are usually omitted. Consequently, the
reader will encounter expressions such as sin t and tan α. In such cases the context
must make the choice of measurement clear. In this tutorial we will normally use
Greek letters to denote angles measured in degrees while most other variables
generally denote radian measurement.
From Figure F given below, all angles which are integral multiples of π/2 are called
quadrantal angles. The coordinates of the points A, B, C and D are, respectively, (1,
0), (0, 1), (–1, 0) and (0, –1). Therefore, for quadrantal angles, we have:
cos 0 = 1, sin 0 = 0; cos π/2 = 0, sin π/2 = 1; cos π = –1, sin π = 0;
cos 3π/2 = 0, sin 3π/2 = –1; cos 2π = 1, sin 2π = 0.
Figure F
Signs of Trigonometric Functions
Module 5.7 The Six Trigonometric Functions
The two basic trigonometric functions are: sine (which we have already studied), and
cosine. By taking ratios and reciprocals of these functions, we obtain four other
functions, called tangent, secant, cosecant, and cotangent.
Cosine
Let us go back to the bicycle introduced in the preceding section, and recall that the
sine of t, sin t, was defined as the y-coordinate of a marker on the wheel. The cosine
of t, denoted by cos t, is defined in almost the same way, except that this time, we
use the x-coordinates of the marker on the wheel. (See the figure.)
First notice that the coordinates of the point P in the above diagram are (cos t, sin t),
and that the distance from P to the origin is 1 unit. So we have:
sin²t + cos²t = 1,
and so we have found a relationship between the sine and cosine functions.
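Because (cos t, sin t) always lies on the unit circle, the identity holds for every t; a quick numerical spot-check:

```python
import math
import random

# sin^2 t + cos^2 t = 1 for every t, since (cos t, sin t) is on the unit circle.
for _ in range(1000):
    t = random.uniform(-10 * math.pi, 10 * math.pi)
    assert math.isclose(math.sin(t) ** 2 + math.cos(t) ** 2, 1.0)
```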
Let us now turn attention to the graph of the cosine function. The graph, as you
might expect, is almost identical to that of the sine function, except for a "phase shift"
(see the figure).
Relationships Between Sine and Cosine: The cosine curve is obtained from the
sine curve by shifting it to the left a distance of π/2. Conversely, we can obtain the
sine curve from the cosine curve by shifting it π/2 units to the right.
cos t = sin(t + π/2)
sin t = cos(t – π/2)
Alternative formulation: We can also obtain the cosine curve by first inverting the
sine curve (replace t by –t) and then shifting to the right a distance of π/2.
This gives us two alternative formulas (which are easier to remember):
cos t = sin(π/2 – t)
sin t = cos(π/2 – t)
As was said above, we can take ratios and reciprocals of sine and cosine to obtain
four new functions. Here they are:
tan x = sin x / cos x                 (tangent)
cotan x = cos x / sin x = 1 / tan x   (cotangent)
sec x = 1 / cos x                     (secant)
cosec x = 1 / sin x                   (cosecant)
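The four derived functions can be written directly from these definitions (cotan and cosec are spelled out here; Python's math module only provides sin, cos and tan):

```python
import math

# The four derived functions as ratios/reciprocals of sin and cos.
def tan(x):   return math.sin(x) / math.cos(x)
def cotan(x): return math.cos(x) / math.sin(x)
def sec(x):   return 1 / math.cos(x)
def cosec(x): return 1 / math.sin(x)

t = math.pi / 4  # 45 degrees: sin t = cos t, so tan t = cotan t = 1
assert math.isclose(tan(t), 1.0)
assert math.isclose(cotan(t), 1 / tan(t))
assert math.isclose(sec(t), math.sqrt(2))
assert math.isclose(cosec(t), math.sqrt(2))
```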
The Trigonometric Functions as Ratios in a Right Triangle
Let us go back to the figure that defines the sine and cosine, but this time, let us
think of these two quantities as lengths of sides of a right triangle:
We are also thinking of the quantity t as a measure of the angle shown rather than
the length of an arc. Looking at the figure, we find that
sin t = length of side opposite the angle t = opposite/1 = opposite/hypotenuse
cos t = length of side adjacent to the angle t = adjacent/1 = adjacent/hypotenuse
tan t = sin t / cos t = opposite/adjacent
In summary, the unit-circle and right-triangle definitions correspond as follows:
sin t = y-coordinate of point P        sin t = opposite/hypotenuse
cos t = x-coordinate of point P        cos t = adjacent/hypotenuse
tan t = sin t / cos t                  tan t = opposite/adjacent
cotan t = cos t / sin t                cotan t = adjacent/opposite
sec t = 1 / cos t                      sec t = hypotenuse/adjacent
cosec t = 1 / sin t                    cosec t = hypotenuse/opposite
Module 5.8 Trigonometric equations:
CHE – 103
Chemistry
INDEX
MODULE 5 ELECTROCHEMISTRY 73 - 82
CONTENTS
MODULE 1 ATOMIC STRUCTURE
MODULE 4 FUNDAMENTALS OF ORGANIC CHEMISTRY
Module 4A Introduction
Module 4B Empirical and Molecular Formulas
Module 4C Structural Isomerism
Module 4D Naming of Alkanes
Module 4E The Importance of Functional Groups
MODULE 5 ELECTROCHEMISTRY
MODULE 1
ATOMIC STRUCTURE
_____________________________________________
Module 1A - ATOMIC STRUCTURE -INTRODUCTION
The existence of atoms has been proposed since the time of early Indian and Greek
philosophers (400 B.C.) who were of the view that atoms are the fundamental building blocks
of matter. According to them, the continued subdivisions of matter would ultimately yield
atoms which would not be further divisible. The word ‘atom’ has been derived from the Greek
word ‘a-tomio’ which means ‘uncuttable’ or ‘non-divisible’. These earlier ideas were mere
speculations and there was no way to test them experimentally. These ideas remained dormant
for a very long time and were revived again by scientists in the nineteenth century.
An atom is composed of three different types of particles: protons, neutrons, and electrons.
Because atoms are electrically neutral, an atom always has the same number of protons
and electrons; this neutrality holds by definition. Atoms become ions when they become
charged, which occurs when an atom has either lost or gained electrons. So
ions do not have the same number of protons and electrons.
Figure 1
The example atom shown in Figure 1 represents Helium, which is usually a gas. Helium has
2 protons, 2 neutrons, and 2 electrons. The protons and neutrons form the nucleus and the
electrons form a cloud that surrounds the nucleus (see Figure 1).
Now what is it about this atom that makes it a Helium atom? It is the number of protons that
determines what kind of element the atom will be. A Helium atom will always have 2 protons.
If it is a non-charged atom, it will also have 2 electrons.
There are, however, several varieties of Helium, and they differ in the number of neutrons.
Scientists call these varieties isotopes. Most of the Helium that we breathe in our air is
Helium-4, which has 2 neutrons. There are also a small number of Helium-3 atoms in our air.
Both of these varieties of Helium are stable. They do not break down. So they are not
radioactive. The atomic theory of matter was first proposed on a firm scientific basis by John
Dalton, a British schoolteacher, in 1808. His theory, called Dalton’s atomic theory, regarded
the atom as the ultimate particle of matter.
Dalton’s atomic theory was able to explain the law of conservation of mass, law of constant
composition and law of multiple proportions very successfully. However, it failed to explain
the results of many experiments, for example, it was known that substances like glass or
ebonite when rubbed with silk or fur generate electricity. Many different kinds of sub-atomic
particles were discovered in the twentieth century. However, in this section we will talk about
only two particles, namely electron and proton.
Electrons—very small subatomic particles with negative charge, move through this volume
of space outside the nucleus. This volume of space is organized into shells, subshells, and
orbitals. That involves quantum mechanics, which we will postpone as long as possible. The
nucleus—the central portion of the atom containing most of the mass but least of its volume.
It is composed of Protons—positively charged subatomic particles. Each element has a
unique number of protons. Neutrons—neutral subatomic particles. The
atoms of an element may have different numbers of neutrons. If so, they are said to be
isotopes of one another. For example, carbon has three different naturally occurring isotopes.
All of the isotopes of carbon have six protons in their nuclei. One of the isotopes has six
neutrons, one has seven, and one has eight.
Atomic number—the number of protons in an atom of an element. For example, all of the
atoms of the element iron have 26 protons, so the atomic number of iron is 26. The atomic
number is unique for each element. During nuclear reactions, an element may change into
another. For clarity and as an aid to balancing nuclear reaction equations, the atomic number
may be written as part of the atomic symbol. If so, it appears at the bottom left, as 26Fe.
Atomic weight—the average mass of the atoms of an element taking into account the masses
of the various isotopes and their relative abundance. For example, oxygen has an atomic
weight of 15.9994. This means that the average of the atoms’ masses is 15.9994 atomic mass
units (amu). A proton has a mass of slightly more than one amu, as does a neutron. An
electron’s mass is so small that it would take close to two thousand electrons to be as massive
as a proton. Most of the naturally occurring atoms of oxygen have eight protons, eight neutrons,
and eight electrons giving a mass of close to 16 amu. Many students assume that the mass
number (always an integer) is a rounded form of the atomic weight (never an integer). NOPE.
The atomic weight is an average while the mass number is a particle count. The atomic weight
can be found on the periodic table. It is never written as part of an atomic symbol.
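The averaging described above can be sketched numerically. The oxygen isotope masses and abundances below are approximate literature figures, used here only to illustrate the calculation (they are not from the text):

```python
# Weighted-average atomic weight from isotope masses and fractional abundances.
oxygen_isotopes = [
    # (mass in amu, fractional abundance) -- approximate literature values
    (15.9949, 0.99757),  # oxygen-16
    (16.9991, 0.00038),  # oxygen-17
    (17.9992, 0.00205),  # oxygen-18
]

atomic_weight = sum(mass * abundance for mass, abundance in oxygen_isotopes)
print(round(atomic_weight, 4))  # close to 15.9994, as quoted above
```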
Law of conservation of matter (or mass): During any ordinary chemical reaction, mass is
neither lost nor gained. This was known in Dalton’s time. It is explained by the idea that
atoms are finite and indestructible, so a sample of matter can’t lose or gain mass because it
doesn’t lose or gain atoms.
Law of constant composition: A compound always contains the same types of elements in
the same relative proportions by mass. For example, water is always formed from hydrogen
and oxygen and no other elements, and, water has 8 grams of oxygen for every gram of
hydrogen. This law was known in Dalton’s day. It is explained by the idea that chemical
reactions are regroupings of atoms.
Law of multiple proportions: If two or more elements can combine to form two or more
compounds, for a fixed amount of one of the elements, the other(s) will be present in the
compounds in small whole number ratios. For example, carbon reacts with oxygen to form
two compounds, carbon monoxide and carbon dioxide. So, what is the difference between the
law of constant composition and the law of multiple proportions? The law of constant
composition compares the amounts of two elements in one compound while the law of
multiple proportions compares the amounts of two elements in two compounds. In effect, by
reducing one of the elements to unity, the law of multiple proportions finds a whole-number
ratio of the other element across the two compounds (and is therefore somewhat subtler).
Atomic Number and Mass Number: The presence of positive charge on the nucleus is due
to the protons in the nucleus. As established earlier, the charge on the proton is equal but
opposite to that of electron. The number of protons present in the nucleus is equal to atomic
number (Z ). For example, the number of protons in the hydrogen nucleus is 1, in sodium
atom it is 11, therefore their atomic numbers are 1 and 11 respectively. In order to keep the
electrical neutrality, the number of electrons in an atom is equal to the number of protons
(atomic number, Z ). For example, number of electrons in hydrogen atom and sodium atom
are 1 and 11 respectively. Atomic number (Z) = number of protons in the nucleus of an atom
= number of electrons in a neutral atom. While the positive charge of the nucleus is due to
protons, the mass of the nucleus is due to protons and neutrons. The protons and
neutrons present in the nucleus are collectively known as nucleons. The total number of
nucleons is termed the mass number (A) of the atom: mass number (A) = number of protons
(Z) + number of neutrons (n).
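The relation A = Z + n means the neutron count follows directly from the mass number and atomic number (the function name is illustrative):

```python
# mass number A = protons (Z) + neutrons (n), so n = A - Z.
def neutron_count(mass_number, atomic_number):
    return mass_number - atomic_number

# Carbon's three naturally occurring isotopes (Z = 6), as described in the text:
assert neutron_count(12, 6) == 6
assert neutron_count(13, 6) == 7
assert neutron_count(14, 6) == 8
```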
Isobars and Isotopes: The composition of any atom can be represented by using the normal
element symbol (X) with the mass number (A) as a superscript on the left hand side and the
atomic number (Z) as a subscript on the left hand side (i.e., AZX). Isobars are atoms with the
same mass number but different atomic numbers, for example, 14C (Z = 6) and 14N (Z = 7).
On the other hand, atoms with identical atomic number but different mass numbers are known
as isotopes. Since A = Z + n, it is evident that the difference between isotopes is due to the
presence of different numbers of neutrons in the nucleus. For example, considering the
hydrogen atom again, 99.985% of hydrogen atoms contain only one proton. This isotope is
called protium (1H). The rest of the hydrogen atoms contain two other isotopes: the one
containing 1 proton and 1 neutron is called deuterium (2H or D, 0.015%) and the one
possessing 1 proton and 2 neutrons is called tritium (3H or T). The latter isotope
is found in trace amounts on the earth. Other examples of commonly occurring isotopes are:
carbon atoms containing 6, 7 and 8 neutrons besides 6 protons (12C, 13C, 14C); chlorine atoms
containing 18 and 20 neutrons besides 17 protons ( 35Cl, 37Cl ). Lastly an important point to
mention regarding isotopes is that chemical properties of atoms are controlled by the number
of electrons, which are determined by the number of protons in the nucleus. The number of
neutrons present in the nucleus has very little effect on the chemical properties of an
element. Therefore, all the isotopes of a given element show the same chemical behaviour.
Important information about the structure of the atom was obtained from experiments on
electrical discharge through gases.
Before we discuss these results we need to keep in mind a basic rule regarding the behaviour
of charged particles : “Like charges repel each other and unlike charges attract each other”.
In the mid-1850s many scientists, notably Faraday, began to study electrical discharge in
partially evacuated tubes, known as cathode ray discharge tubes, depicted in Figure 2. A cathode
ray tube is made of glass containing two thin pieces of metal, called electrodes, sealed in it.
The electrical discharge through the gases could be observed only at very low pressures and at
very high voltages. The pressure of different gases could be adjusted by evacuation. When
sufficiently high voltage is applied across the electrodes, current starts flowing through a
stream of particles moving in the tube from the negative electrode (cathode) to the positive
electrode (anode). These were called cathode rays or cathode ray particles. The flow of current
from cathode to anode was further checked by making a hole in the anode and coating the
tube behind the anode with the phosphorescent material zinc sulphide. When these rays, after
passing through the anode, strike the zinc sulphide coating, a bright spot develops on the coating
(the same thing happens in a television set). The results of these experiments are summarised below:
(i) The cathode rays start from cathode and move towards the anode. (ii) These rays
themselves are not visible but their behaviour can be observed with the help of certain kind of
materials (fluorescent or phosphorescent) which glow when hit by them. Television picture
tubes are cathode ray tubes and television pictures result due to fluorescence on the television
screen coated with certain fluorescent or phosphorescent materials. (iii) In the absence of
an electrical or magnetic field, these rays travel in straight lines. (iv) In the presence of an electrical
or magnetic field, the behaviour of cathode rays is similar to that expected of negatively
charged particles, suggesting that the cathode rays consist of negatively charged particles,
called electrons. (v) The characteristics of cathode rays (electrons) do not depend upon the
material of electrodes and the nature of the gas present in the cathode ray tube. Thus, we can
conclude that electrons are basic constituent of all the atoms.
Charge to Mass Ratio of Electron: In 1897, British physicist J.J. Thomson measured the
ratio of electrical charge (e) to the mass of electron (me) by using a cathode ray tube and
applying electrical and magnetic field perpendicular to each other as well as to the path of
electrons. Thomson argued that the amount of deviation of the particles from their path in the
presence of electrical or magnetic field depends upon: (i) the magnitude of the negative charge
on the particle, greater the magnitude of the charge on the particle, greater is the interaction
with the electric or magnetic field and thus greater is the deflection. (ii) the mass of the
particle — lighter the particle, greater the deflection. (iii) the strength of the electrical or
magnetic field — the deflection of electrons from its original path increases with the increase
in the voltage across the electrodes, or the strength of the magnetic field.
Electrical discharge carried out in the modified cathode ray tube led to the discovery of
particles carrying positive charge, also known as canal rays. The characteristics of these
positively charged particles are listed below. (i) unlike cathode rays, the positively charged
particles depend upon the nature of gas present in the cathode ray tube. These are simply the
positively charged gaseous ions. (ii) The charge to mass ratio of the particles is found to
depend on the gas from which these originate. (iii) Some of the positively charged particles
carry a multiple of the fundamental unit of electrical charge. (iv) The behaviour of these
particles in the magnetic or electrical field is opposite to that observed for electron or cathode
rays. The smallest and lightest positive ion was obtained from hydrogen and was
called proton. This positively charged particle was characterised in 1919. Later, a need was
felt for the presence of an electrically neutral particle as one of the constituents of the atom.
These particles were discovered by Chadwick (1932) by bombarding a thin sheet of beryllium
with α-particles, when electrically neutral particles having a mass slightly greater than that of
the protons were emitted. He named these particles neutrons.
There were a few important theories that tried to explain what was happening on a
fundamental level during chemical changes. Some chemistry professors will want their
students to know which ancient Greek first postulated the existence of the atom, for example.
Although everybody agrees, in principle, that a foundation in the history of chemistry is nice,
the professors who actually expect you to know this stuff coming into General Chemistry are in the
minority. However, everyone will expect you to know Dalton and his theory. That is because
half of the first semester lab experiments are based on Dalton’s model of the atom.
Postulates of Dalton’s theory: a) Matter is made up of tiny, indivisible particles called atoms,
b) The atoms of any given element are identical to one another and different from the atoms of
any other element, c) Atoms are neither created nor destroyed during ordinary chemical
reactions, nor are they converted to another type of atom, d) During chemical changes atoms
are regrouped to form chemical compounds which always contain the same type and relative
proportion of atoms.
Thomson’s model of the atom, however, was not consistent with the results of later
experiments. Thomson was awarded the Nobel Prize for Physics in 1906, for his
theoretical and experimental investigations on the conduction of electricity by gases.
Henri Becquerel (1852-1908) observed that there are certain elements which emit radiation on
their own; he named this phenomenon radioactivity and the elements radioactive
elements. This field was developed by Marie Curie, Pierre Curie, Rutherford and Frederick
Soddy. It was observed that three kinds of rays, i.e., α-, β- and γ-rays, are emitted. Rutherford
found that α-rays consist of high energy particles carrying two units of positive charge and
four units of atomic mass. He concluded that α-particles are helium nuclei, since α-particles
combined with two electrons yielded helium gas. β-rays are negatively charged particles
similar to electrons. The γ-rays are high energy radiations like X-rays, are neutral in nature
and do not consist of particles. As regards penetrating power, α-particles have the least,
followed by β-rays (100 times that of α-particles) and γ-rays (1000 times that of α-particles).
Rutherford and his students (Hans Geiger and Ernest Marsden) bombarded very thin gold foil
with α-particles. Rutherford’s famous α-particle scattering experiment is described below:
A stream of high energy α–particles from a radioactive source was directed at a thin foil
(thickness 100 nm) of gold metal. The thin gold foil had a circular fluorescent zinc sulphide
screen around it. Whenever α–particles struck the screen, a tiny flash of light was produced at
that point. The results of scattering experiment were quite unexpected. According to Thomson
model of atom, the mass of each gold atom in the foil should have been spread evenly over the
entire atom, and α– particles had enough energy to pass directly through such a uniform
distribution of mass. It was expected that the particles would slow down and change direction
only by small angles as they passed through the foil. It was observed that : i) most of the α–
particles passed through the gold foil un-deflected. ii) a small fraction of the α–particles was
deflected by small angles. iii) a very few α– particles (1 in 20,000) bounced back, that is,
were deflected by nearly 180°. On the basis of the observations, Rutherford drew the
following conclusions regarding the structure of atom : a) A few positively charged α–
particles were deflected. The deflection must be due to enormous repulsive force showing that
the positive charge of the atom is not spread throughout the atom as Thomson had presumed.
The positive charge has to be concentrated in a very small volume that repelled and deflected
the positively charged α– particles, b) Calculations by Rutherford showed that the volume
occupied by the nucleus is negligibly small as compared to the total volume of the atom. The
radius of the atom is about 10^-10 m, while that of the nucleus is about 10^-15 m. One can
appreciate this difference in size by realising that if a cricket ball represents a nucleus, then the
radius of the atom
would be about 5 km. On the basis of above observations and conclusions, Rutherford
proposed the nuclear model of atom (after the discovery of protons). According to this model :
a) The positive charge and most of the mass of the atom was densely concentrated in
extremely small region. This very small portion of the atom was called nucleus by Rutherford;
b) The nucleus is surrounded by electrons that move around the nucleus with a very high
speed in circular paths called orbits. Thus, Rutherford’s model of atom resembles the solar
system in which the nucleus plays the role of sun and the electrons that of revolving planets;
c) Electrons and the nucleus are held together by electrostatic forces of attraction.
Niels Bohr (1913) was the first to explain quantitatively the general features of hydrogen atom
structure and its spectrum. Though the theory is not the modern quantum mechanics, it can
still be used to rationalize many points in the atomic structure and spectra. Bohr’s model for
the hydrogen atom is based on the following postulates: a) The electron in the hydrogen atom
can move around the nucleus in a circular path of fixed radius and energy. These paths are
called orbits, stationary states or allowed energy states. These orbits are arranged
concentrically around the nucleus; b) The energy of an electron in an orbit does not change
with time. However, the electron will move from a lower stationary state to a higher stationary
state when required amount of energy is absorbed by the electron or energy is emitted when
electron moves from higher stationary state to lower stationary state. The energy change does
not take place in a continuous manner. Thus an electron can move only in those orbits for
which its angular momentum is integral multiple of h/2π that is why only certain fixed orbits
are allowed. The details regarding the derivation of energies of the stationary states used by
Bohr, are quite complicated and will be discussed in higher classes. However, according to
Bohr's theory for the hydrogen atom: a) The stationary states for the electron are numbered n =
1, 2, 3, ... These integral numbers are known as principal quantum numbers.
b) The radii of the stationary states are expressed as rn = n²a0, where a0 = 52.9 pm.
Thus the radius of the first stationary state, called the Bohr radius, is 52.9 pm. Normally the
electron in the hydrogen atom is found in this orbit (that is n=1). As n increases the value
of r will increase. In other words the electron will be present away from the nucleus. c) The
most important property associated with the electron, is the energy of its stationary state. It is
given by the expression.
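The radius relation rn = n²a0 stated in (b) can be checked with a short sketch (Python; the function name and printed table are illustrative, not from the text):

```python
# Bohr radii of hydrogen's stationary states: r_n = n^2 * a0,
# with the Bohr radius a0 = 52.9 pm as quoted in the text.
A0_PM = 52.9

def bohr_radius_pm(n):
    """Radius of the n-th stationary state, in picometres."""
    if n < 1:
        raise ValueError("principal quantum number n must be >= 1")
    return n ** 2 * A0_PM

for n in (1, 2, 3):
    print(n, bohr_radius_pm(n))  # 52.9, 211.6, 476.1 pm
```

As the text notes, the radius grows as n², so the electron in higher states is found progressively farther from the nucleus.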
Classical mechanics, based on Newton’s laws of motion, successfully describes the motion of
all macroscopic objects such as a falling stone, orbiting planets etc., which have essentially a
particle-like behaviour as shown in the previous section. However, it fails when applied to
microscopic objects like electrons, atoms, molecules etc. This is mainly because of the fact
that classical mechanics ignores the concept of dual behaviour of matter especially for sub-
atomic particles and the uncertainty principle. The branch of science that takes into account
this dual behaviour of matter is called quantum mechanics.
Quantum mechanics is a theoretical science that deals with the study of the motions of the
microscopic objects that have both observable wave like and particle like properties. It
specifies the laws of motion that these objects obey. When quantum mechanics is applied to
macroscopic objects (for which wave like properties are insignificant) the results are the same
as those from the classical mechanics.
Quantum mechanics was developed independently in 1926 by Werner Heisenberg and Erwin
Schrödinger. Here, however, we shall be discussing the quantum mechanics which is based on
the ideas of wave motion. The fundamental equation of quantum mechanics was developed by
Schrödinger and it won him the Nobel Prize in Physics in 1933. This equation which
incorporates wave-particle duality of matter as proposed by de Broglie is quite complex and
knowledge of higher mathematics is needed to solve it. You will learn its solutions for
different systems in higher classes. For a system (such as an atom or a molecule whose energy
does not change with time) the Schrödinger equation is written as Ĥψ = Eψ, where Ĥ is a
mathematical operator called the Hamiltonian. Schrödinger gave a recipe for constructing this
operator from the expression for the total energy of the system. The total energy of the system
takes into account the kinetic energies of all the sub-atomic particles (electrons, nuclei),
attractive potential between the electrons and nuclei and repulsive potential among the
electrons and nuclei individually. Solution of this equation gives E and ψ.
James Maxwell (1870) was the first to give a comprehensive explanation about the interaction
between the charged bodies and the behaviour of electrical and magnetic fields on
macroscopic level. He suggested that when electrically charged particle moves under
acceleration, alternating electrical and magnetic fields are produced and transmitted. These
fields are transmitted in the form of waves called electromagnetic waves or electromagnetic
radiation.
Light is the form of radiation known from early days and speculation about its nature dates
back to remote ancient times. In earlier days light was supposed (notably by Newton) to be made of
particles (corpuscles). It was only in the 19th century that the wave nature of light was
established. Maxwell was again the first to reveal that light waves are associated with
oscillating electric and magnetic character. Although electromagnetic wave motion is complex
in nature, we will consider here only a few simple properties. (i) The oscillating electric and
magnetic fields produced by oscillating charged particles are perpendicular to each other and
both are perpendicular to the direction of propagation of the wave. Simplified picture of
electromagnetic wave. (ii) Unlike sound waves or water waves, electromagnetic waves do not
require medium and can move in vacuum. It is now well established that there are many types
of electromagnetic radiations, which differ from one another in wavelength (or frequency).
These constitute what is called electromagnetic spectrum (Figure 3). Different regions of the
spectrum are identified by different names. Some examples are: the radio frequency region
around 10⁶ Hz, used for broadcasting; the microwave region around 10¹⁰ Hz, used for radar; the
infrared region around 10¹³ Hz, used for heating; and the ultraviolet region around 10¹⁶ Hz, a
component of the sun's radiation. The small portion around 10¹⁵ Hz is what is ordinarily
called visible light. It is only this part which our eyes can see (or detect). Special instruments
are required to detect non-visible radiation. Different kinds of units are used to represent
electromagnetic radiation. These radiations are characterised by two properties, namely
frequency (ν) and wavelength (λ). The SI unit for frequency (ν) is hertz (Hz, s⁻¹), named after
Heinrich Hertz. It is defined as the number of waves that pass a given point in one second.
Wavelength has the units of length and, as you know, the SI unit of length is the
meter (m). Since electromagnetic radiation consists of different kinds of waves of much
smaller wavelengths, smaller units are used. Figure 3 shows various types of electro-magnetic
radiations which differ from one another in wavelengths and frequencies. In vacuum all types
of electromagnetic radiation, regardless of wavelength, travel at the same speed, i.e., 3.0 ×
10⁸ m s⁻¹ (2.997925 × 10⁸ m s⁻¹, to be precise). This is called the speed of light and is given the
symbol 'c'. The frequency (ν), wavelength (λ) and velocity of light (c) are related by the
equation c = νλ. The other commonly used quantity, especially in spectroscopy, is the
wavenumber (ν̄). It is defined as the number of wavelengths per unit length. Its units are the
reciprocal of the wavelength unit, i.e., m⁻¹; however, the commonly used unit is cm⁻¹ (not an SI unit). The
proportionality constant, 'h', is known as Planck's constant and has the value 6.626 × 10⁻³⁴ J s.
With this theory, Planck was able to explain the distribution of intensity in the radiation from
black body as a function of frequency or wavelength at different temperatures.
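The relations c = νλ and E = hν can be exercised numerically. A minimal sketch using the constants quoted above (the 500 nm example wavelength is an assumption for illustration):

```python
# Relations from the text: c = nu * lambda and E = h * nu.
C = 3.0e8        # speed of light, m s^-1 (value quoted in the text)
H = 6.626e-34    # Planck's constant, J s

def frequency_hz(wavelength_m):
    """Frequency from wavelength via c = nu * lambda."""
    return C / wavelength_m

def wavenumber_per_m(wavelength_m):
    """Wavenumber: number of wavelengths per unit length, 1/lambda."""
    return 1.0 / wavelength_m

def photon_energy_j(wavelength_m):
    """Photon energy E = h * nu."""
    return H * frequency_hz(wavelength_m)

lam = 500e-9  # a 500 nm (visible) wavelength, chosen for illustration
print(frequency_hz(lam))     # 6.0e14 Hz, in the visible region near 10^15 Hz
print(photon_energy_j(lam))  # about 3.98e-19 J per photon
```

Note that the wavenumber comes out in m⁻¹; dividing by 100 gives the spectroscopists' customary cm⁻¹.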
Albert Einstein, a German-born American physicist, is regarded by many as one of the two
great physicists the world has known (the other is Isaac Newton). Of his three 1905 research papers
(on special relativity, Brownian motion and the photoelectric effect), the last fetched him the
Nobel Prize in Physics in 1921 for his explanation of the photoelectric effect. Einstein (1905) was
able to explain the photoelectric effect using Planck’s quantum theory of electromagnetic
radiation as a starting point. Shining a beam of light on to a metal surface can, therefore, be
viewed as shooting a beam of particles, the photons. When a photon of sufficient energy
strikes an electron in the atom of the metal, it transfers its energy instantaneously to the
electron during the collision and the electron is ejected without any time lag or delay. Greater
the energy possessed by the photon, greater will be transfer of energy to the electron and
greater the kinetic energy of the ejected electron. In other words, kinetic energy of the ejected
electron is proportional to the frequency of the electromagnetic radiation. Since the striking
photon has energy equal to hν and the minimum energy required to eject the electron
is hν0 (also called the work function, W0), the difference in energy (hν – hν0) is transferred as
the kinetic energy of the photoelectron. Following the conservation of energy principle, the
kinetic energy of the ejected electron is given by the equation hν = hν0 + ½mv²,
where m is the mass of the electron (e) and v is the velocity associated with the ejected
electron. Lastly, a more intense beam of light consists of larger number of photons,
consequently the number of electrons ejected is also larger as compared to that in an
experiment in which a beam of weaker intensity of light is employed.
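The energy balance above, kinetic energy = hν – hν0, can be sketched in code (the frequency values below are illustrative assumptions, not from the text):

```python
H = 6.626e-34  # Planck's constant, J s

def photoelectron_ke_j(nu_hz, nu0_hz):
    """Kinetic energy of the ejected electron: h * (nu - nu0).
    Below the threshold frequency nu0, no electron is ejected at all."""
    if nu_hz < nu0_hz:
        return 0.0
    return H * (nu_hz - nu0_hz)

# Illustrative values: incident light at 1.0e15 Hz, threshold at 7.0e14 Hz.
print(photoelectron_ke_j(1.0e15, 7.0e14))  # about 1.99e-19 J
print(photoelectron_ke_j(5.0e14, 7.0e14))  # 0.0: below threshold, no emission
```

The hard cutoff at ν0 mirrors the text's point: intensity changes the number of ejected electrons, but only frequency changes their kinetic energy.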
The speed of light depends upon the nature of the medium through which it passes. As a
result, the beam of light is deviated or refracted from its original path as it passes from one
medium to another. It is observed that when a ray of white light is passed through a prism, the
wave with shorter wavelength bends more than the one with a longer wavelength. Since
ordinary white light consists of waves with all the wavelengths in the visible range, a ray of
white light is spread out into a series of coloured bands called spectrum. The light of red
colour which has longest wavelength is deviated the least while the violet light, which has
shortest wavelength, is deviated the most. The spectrum of white light that we can see ranges
from violet at 7.50 × 10¹⁴ Hz to red at 4 × 10¹⁴ Hz. Such a spectrum is called a continuous
spectrum: continuous because violet merges into blue, blue into green and so on. A similar
spectrum is produced when a rainbow forms in the sky. Remember that visible light is just a
small portion of the electromagnetic spectrum (Figure 3). When electromagnetic radiation
interacts with matter, atoms and molecules may absorb energy and reach to a higher energy
state. With higher energy, these are in an unstable state. For returning to their normal (more
stable, lower energy states) energy state, the atoms and molecules emit radiations in various
regions of the electromagnetic spectrum.
Emission and Absorption Spectra: The spectrum of radiation emitted by a substance that
has absorbed energy is called an emission spectrum. Atoms, molecules or ions that have
absorbed radiation are said to be “excited”. To produce an emission spectrum, energy is
supplied to a sample by heating it or irradiating it and the wavelength (or frequency) of the
radiation emitted, as the sample gives up the absorbed energy, is recorded. An absorption
spectrum is like the photographic negative of an emission spectrum. A continuum of radiation
is passed through a sample which absorbs radiation of certain wavelengths. The missing
wavelengths, which correspond to the radiation absorbed by the matter, leave dark spaces in
the bright continuous spectrum.
Characteristic lines in atomic spectra can be used in chemical analysis to identify unknown
atoms in the same way as fingerprints are used to identify people. The exact matching of the lines
of the emission spectrum of the atoms of a known element with the lines from an unknown
sample quickly establishes the identity of the latter. The German chemist Robert Bunsen (1811-
1899) was one of the first investigators to use line spectra to identify elements. Elements like
rubidium (Rb), caesium (Cs) thallium (Tl), indium (In), gallium (Ga) and scandium (Sc) were
discovered when their minerals were analysed by spectroscopic methods. The element helium
(He) was discovered in the sun by spectroscopic method.
Schrödinger Equation: When Schrödinger equation is solved for hydrogen atom, the
solution gives the possible energy levels the electron can occupy and the corresponding wave
function(s) (ψ) of the electron associated with each energy level. These quantized energy
states and corresponding wave functions which are characterized by a set of three quantum
numbers (principal quantum number n, azimuthal quantum number l and magnetic quantum
number ml ) arise as a natural consequence in the solution of the Schrödinger equation. When
an electron is in any energy state, the wave function corresponding to that energy state
contains all information about the electron. The wave function is a mathematical function
whose value depends upon the coordinates of the electron in the atom and does not carry any
physical meaning by itself. Such wave functions of hydrogen or hydrogen-like species with one
electron are called atomic orbitals, and such single-electron species are themselves called
one-electron systems. The probability of finding an electron at a point within an atom is
proportional to |ψ|² at that point. The quantum mechanical results of the hydrogen atom
successfully predict all aspects of the hydrogen atom spectrum including some phenomena
that could not be explained by the Bohr model. In view of the shortcoming of the Bohr’s
model, attempts were made to develop a more suitable and general model for atoms. Two
important developments which contributed significantly in the formulation of such a model
were : Dual behaviour of matter and Heisenberg uncertainty principle.
Dual Behaviour of Matter: The French physicist, de Broglie in 1924 proposed that matter,
like radiation, should also exhibit dual behaviour i.e., both particle and wavelike properties.
This means that just as the photon has momentum as well as wavelength, electrons should
also have momentum as well as wavelength. De Broglie, from this analogy, gave the following
relation between the wavelength (λ) and momentum (p) of a material particle: λ = h/p = h/mv.
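The de Broglie relation λ = h/p = h/mv can be evaluated directly. A short sketch (the electron mass and speed below are standard/illustrative values, not taken from the text):

```python
H = 6.626e-34     # Planck's constant, J s
M_E = 9.109e-31   # electron rest mass, kg (standard value)

def de_broglie_wavelength_m(mass_kg, speed_m_s):
    """de Broglie wavelength: lambda = h / (m * v)."""
    return H / (mass_kg * speed_m_s)

# An electron moving at 1.0e6 m/s has a wavelength comparable to atomic sizes:
print(de_broglie_wavelength_m(M_E, 1.0e6))  # about 7.3e-10 m (0.73 nm)
```

Running the same function with a macroscopic mass (say 1 kg at 1 m/s) gives a wavelength around 10⁻³⁴ m, which is why wave behaviour is unobservable for everyday objects.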
One of the important implications of the Heisenberg Uncertainty Principle is that it rules out
existence of definite paths or trajectories of electrons and other similar particles. The
trajectory of an object is determined by its location and velocity at various moments. If we
know where a body is at a particular instant and if we also know its velocity and the forces
acting on it at that instant, we can tell where the body would be sometime later. We, therefore,
conclude that the position of an object and its velocity fix its trajectory. Since for a sub-atomic
object such as an electron, it is not possible simultaneously to determine the position and
velocity at any given instant to an arbitrary degree of precision, it is not possible to talk of the
trajectory of an electron.
Pauli Exclusion Principle: The number of electrons to be filled in various orbitals is
restricted by the exclusion principle, given by the Austrian scientist Wolfgang Pauli (1926).
According to this principle : No two electrons in an atom can have the same set of four
quantum numbers. Pauli exclusion principle can also be stated as : “Only two electrons may
exist in the same orbital and these electrons must have opposite spin.” This means that the two
electrons can have the same value of three quantum numbers n, l and ml, but must have the
opposite spin quantum number. The restriction imposed by Pauli’s exclusion principle on the
number of electrons in an orbital helps in calculating the capacity of electrons to be present in
any subshell. For example, the 1s subshell comprises one orbital and thus the maximum
number of electrons present in the 1s subshell can be two; in the p and d subshells, the maximum
number of electrons can be 6 and 10, respectively, and so on. This can be summed up as: the maximum
number of electrons in the shell with principal quantum number n is equal to 2n².
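The capacity rules just stated (two electrons per orbital, 2l + 1 orbitals per subshell, 2n² electrons per shell) can be written out as a small sketch:

```python
def subshell_capacity(l):
    """Maximum electrons in a subshell: (2l + 1) orbitals x 2 electrons each."""
    return 2 * (2 * l + 1)

def shell_capacity(n):
    """Maximum electrons in shell n: sum over its subshells l = 0 .. n-1."""
    return sum(subshell_capacity(l) for l in range(n))

print(subshell_capacity(0), subshell_capacity(1), subshell_capacity(2))  # 2 6 10
print(shell_capacity(3))  # 18, which equals 2 * 3**2
```

Summing the subshell capacities reproduces 2n² exactly, since 2(1 + 3 + 5 + ... + (2n-1)) = 2n².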
Hund’s Rule of Maximum Multiplicity: This rule deals with the filling of electrons into the
orbitals belonging to the same subshell (that is, orbitals of equal energy, called degenerate
orbitals). It states : pairing of electrons in the orbitals belonging to the same subshell
(p, d or f) does not take place until each orbital belonging to that subshell has got one electron
each i.e., it is singly occupied. Since there are three p, five d and seven f orbitals, therefore,
the pairing of electrons will start in the p, d and f orbitals with the entry of 4th, 6th and 8th
electron, respectively. It has been observed that half-filled and fully filled degenerate sets of
orbitals acquire extra stability due to their symmetry.
In potassium (K) and calcium (Ca), the 4s orbital, being lower in energy than the 3d orbitals, is
occupied by one and two electrons respectively.
A new pattern is followed beginning with scandium (Sc). The 3d orbital, being lower in
energy than the 4p orbital, is filled first. Consequently, in the next ten elements, scandium
(Sc), titanium (Ti), vanadium (V), chromium (Cr), manganese (Mn), iron (Fe), cobalt (Co),
nickel (Ni), copper (Cu) and zinc (Zn), the five 3d orbitals are progressively occupied. We
may be puzzled by the fact that chromium and copper have five and ten electrons in
3d orbitals rather than four and nine as their position would have indicated with two-electrons
in the 4s orbital. The reason is that fully filled orbitals and half-filled orbitals have extra
stability (that is, lower energy). Thus p³, p⁶, d⁵, d¹⁰, f⁷, f¹⁴ etc. configurations, which are either
half-filled or fully filled, are more stable. Chromium and copper therefore adopt
the d⁵ and d¹⁰ configurations, respectively.
Electronic Configuration: To indicate the electronic configuration of the atom, that is to say,
where the electrons reside, we use the following notation.
Given a periodic table, all we need to know to write the electronic configuration for
a given atom is the atomic number Z, which tells us the number of electrons in the neutral
atom. We start by writing the first potential energy level (n=1), then the possible types of
orbitals in this level (s, p, etc.), and then the number of electrons occupying that orbital,
which is always either 1 or 2. It will always be 2 unless Z is an odd number and we're down to
the last electron in the valence shell. The filling of electrons into the orbitals of different
atoms takes place according to the aufbau principle which is based on the Pauli’s exclusion
principle, the Hund’s rule of maximum multiplicity and the relative energies of the orbitals.
Aufbau Principle: The word ‘aufbau’ in German means ‘building up’. The building up of
orbitals means the filling up of orbitals with electrons. The principle states : In the ground
state of the atoms, the orbitals are filled in order of their increasing energies. In other words,
electrons first occupy the lowest energy orbital available to them and enter into higher energy
orbitals only after the lower energy orbitals are filled. The order in which the energies of the
orbitals increase and hence the order in which the orbitals are filled is as follows :
1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 4f, 5d, 6p, 7s...
The order may be remembered by using the method given in Figure 5. Starting from the top,
the direction of the arrows gives the order of filling of orbitals, that is, starting from top right
to bottom left.
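The filling order listed above follows the familiar (n + l) rule: subshells fill in order of increasing n + l, with ties broken by lower n. A sketch that reproduces the sequence (the helper names are ours, and the Cr/Cu half-filled exceptions discussed earlier are deliberately ignored):

```python
SUBSHELL_LETTERS = "spdf"

def aufbau_order(max_n=7):
    """Subshell filling order by the (n + l) rule: sort by n + l, then by n."""
    subshells = [(n, l) for n in range(1, max_n + 1)
                 for l in range(min(n, len(SUBSHELL_LETTERS)))]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{SUBSHELL_LETTERS[l]}" for n, l in subshells]

def ground_config(z):
    """Ground-state configuration for atomic number z by straight aufbau filling
    (ignores the half/fully-filled d-subshell exceptions such as Cr and Cu)."""
    parts, remaining = [], z
    for label in aufbau_order():
        if remaining == 0:
            break
        l = SUBSHELL_LETTERS.index(label[-1])
        take = min(remaining, 2 * (2 * l + 1))  # subshell holds 2(2l+1) electrons
        parts.append(f"{label}{take}")
        remaining -= take
    return " ".join(parts)

print(aufbau_order()[:7])   # ['1s', '2s', '2p', '3s', '3p', '4s', '3d']
print(ground_config(19))    # potassium: 1s2 2s2 2p6 3s2 3p6 4s1
```

The output shows 4s filling before 3d, exactly as in the text's discussion of potassium and calcium.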
Orbitals and Quantum Numbers : A large number of orbitals are possible in an atom.
Qualitatively these orbitals can be distinguished by their size, shape and orientation. An
orbital of smaller size means there is more chance of finding the electron near the nucleus.
Similarly shape and orientation mean that there is more probability of finding the electron
along certain directions than along others. Atomic orbitals are precisely distinguished by what
are known as quantum numbers. Each orbital is designated by three quantum numbers
labelled as n, l and ml.
The principal quantum number determines the size and, to a large extent, the energy of the
orbital. For the hydrogen atom and hydrogen-like species (He⁺, Li²⁺, etc.) the energy and size of the
orbital depend only on 'n'. The principal quantum number also identifies the shell. With the
increase in the value of 'n', the number of allowed orbitals increases and is given by n². All
the orbitals of a given value of 'n' constitute a single shell of the atom and are represented by the
following letters:
n = 1, 2, 3, 4 ...  Shell = K, L, M, N ...
The quantum numbers are parameters that describe the distribution of electrons in the atom.
They are:
1. Principal quantum number (n) - Represents the main energy level, or shell,
occupied by an electron. It is always a positive integer, that is
n = 1, 2, 3 ...
4. Spin quantum number (ms) - Represents the two possible orientations that an
electron can have in the presence of a magnetic field, or in relation to another
electron occupying the same orbital. Only two electrons can occupy the same
orbital, and they must have opposite spins. When this happens, the electrons
are said to be paired. The allowed values for the spin quantum number ms are
+1/2 and -1/2.
The size of an orbital increases with increasing principal quantum number 'n'. In other words, the
electron will be located farther from the nucleus. Since energy is required to shift the
negatively charged electron away from the positively charged nucleus, the energy of the orbital
increases with increasing n.
Azimuthal quantum number: ‘l’ is also known as orbital angular momentum or subsidiary
quantum number. It defines the three dimensional shape of the orbital. For a given value
of n, l can have n values ranging from 0 to n – 1, that is, for a given value of n, the possible
values of l are: l = 0, 1, 2, ..., (n – 1).
Electron spin 's': The three quantum numbers labelling an atomic orbital can be used
equally well to define its energy, shape and orientation. But these three quantum numbers are not
enough to explain the line spectra observed in the case of multi-electron atoms; that is, some
of the lines actually occur in doublets (two lines closely spaced), triplets (three lines, closely
spaced) etc. This suggests the presence of a few more energy levels than predicted by the
three quantum numbers.
Orbit, orbital and its importance: Orbit and orbital are not synonymous. An orbit, as
proposed by Bohr, is a circular path around the nucleus in which an electron moves. A precise
description of this path of the electron is impossible according to Heisenberg uncertainty
principle. Bohr orbits, therefore, have no real meaning and their existence can never be
demonstrated experimentally. An atomic orbital, on the other hand, is a quantum mechanical
concept and refers to the one electron wave function ψ in an atom. It is characterized by three
quantum numbers (n, l and ml) and its value depends upon the coordinates of the
electron. ψ has, by itself, no physical meaning. It is the square of the wave function, i.e.,
|ψ|², which has a physical meaning. |ψ|² at any point in an atom gives the value of the probability
density at that point. Probability density (|ψ|²) is the probability per unit volume, and the
product of |ψ|² and a small volume (called a volume element) yields the probability of finding
the electron in that volume (the reason for specifying a small volume element is that
|ψ|² varies from one region to another in space, but its value can be assumed to be constant
within a small volume element). The total probability of finding the electron in a given
volume can then be calculated by the sum of all the products of |ψ|² and the corresponding
volume elements. It is thus possible to get the probable distribution of an electron in an
orbital. An electron spins around its own axis, much in a similar way as earth spins around its
own axis while revolving around the sun. In other words, an electron has, besides charge and
mass, intrinsic spin angular momentum. The spin angular momentum of the electron, a
vector quantity, can have two orientations relative to the chosen axis. These two orientations
are distinguished by the spin quantum number ms, which can take the values +½ or –½.
These are called the two spin states of the electron and are normally represented by two
arrows, ↑ (spin up) and ↓ (spin down). Two electrons that have different ms values (one +½
and the other –½) are said to have opposite spins. An orbital cannot hold more than two
electrons, and these two electrons should have opposite spins.
Figure 5
All chemistry except for nuclear reactions involves electron transfers from one atom to
another. More specifically, this electron trade takes place at the valence shells of the atoms. In
general chemistry, we learn about two major types of chemical bonds, namely ionic and
covalent. Ionic bonding is more likely to take place between elements of highly different
electro-negativities, especially between metals and nonmetals. We can use Lewis formulas to
indicate the manner of the electron transfer in a highly simplified fashion. The dots around the
atomic symbol represent the electrons in the valence shell of each element. In the classic
example of the reaction between sodium and chlorine, the only electron in the valence shell of
sodium is completely transferred to the valence shell of chlorine. Sodium then forms a
positive ion and chlorine a negative ion. The electrostatic attraction between oppositely
charged ions forms the basis of the ionic bond and is the force that keeps the atoms together.
Covalent bonding is more likely to take place between elements of similar electro-negativities,
especially between nonmetals. In the reaction between two hydrogen atoms, for instance, the
electron transfer is never complete. Instead, the two atoms share the electrons, which in this
case spend equal amounts of time around both nuclei. There is no formation of full positive or
negative charges and therefore there is no electrostatic attraction. The force that keeps the
atoms together is the fulfillment of the octet rule.
***************************************************************************
MODULE 2
ACIDS AND BASES: FUNDAMENTALS
_________________________________________________
Module 2A - Arrhenius Theory
a) Acids contain H+ and bases contain OH-.
b) Neutralization occurs when hydrogen ions and hydroxide ions react to form
water.
Arrhenius defined bases as substances that dissolve in water to release hydroxide ions (OH-)
into solution. For example, a typical base according to the Arrhenius definition is sodium
hydroxide (NaOH):
NaOH (in H2O) → Na+(aq) + OH-(aq)
The Arrhenius definition of acids and bases explains a number of things. Arrhenius's
theory explains why all acids have similar properties to each other (and, conversely, why all
bases are similar): because all acids release H+ into solution (and all bases release OH-). The
Arrhenius definition also explains Boyle's observation that acids and bases counteract each
other. This idea, that a base can make an acid weaker, and vice versa, is called neutralization.
Svante Arrhenius first defined acids to be proton (H+) donors and bases to be hydroxide ion
(OH-) donors in aqueous solution. The Arrhenius model of acids and bases is summarized by
the following two reactions:
Arrhenius acids
Arrhenius bases
The Arrhenius model of acids and bases, where A = acid and B = base
At the time that Arrhenius proposed these definitions, water was virtually the only solvent
used in chemistry, and nearly all known acids and bases contained protons (H+) and hydroxyl
groups (OH), respectively. His definition was sufficient for the chemistry that was understood
then. But progress in chemistry necessitated new definitions: it was discovered that ammonia
behaves like a base, and HCl donates protons in non-aqueous solvents. The Bronsted-Lowry
model of acids and bases serves that need by describing acids as proton donors and bases as
proton acceptors. These definitions remove the role of the solvent and allow species like ammonia
and fluoride ion to be classified as bases, so long as they bond to protons. The Bronsted-
Lowry model implies that there is a relationship between acids and bases (acids transfer
protons to bases) and allows us to define conjugate acids and conjugate bases.
Conjugate acid-base pairs are the two chemicals (one on each side) that differ
only by a hydrogen ion (sometimes just called a proton).
Despite the usefulness of the Bronsted-Lowry definition, there is an even more general
definition of acids and bases provided by G. N. Lewis. The Lewis model of acids and bases
proposes that an acid is an electron pair acceptor while a base is an electron pair donor. This
model of acidity and basicity broadens the characterization of acid-base reactions to include
reactions like the following which do not involve any hydrogen transfers. The nitrogen atom
in ammonia donates an electron pair to complete the valence octet of boron.
The final and most general definition of acids and bases is the Lewis definition: acids are
electron pair acceptors and bases are electron pair donors. Acids are recognizable because
they have vacant orbitals that can accommodate electron pairs. Bases have lone pairs of
electrons available for sharing. A significant aspect of this definition is that it helps clarify the
acidity of most transition metal ion solutions. These highly charged ions draw water
molecules to their centers. As the electron-rich oxygen end of water has its charge density
drawn toward the transition metal's vacant d orbitals, the electron-poor hydrogens become
vulnerable to the surroundings – they are more apt to be lost – and thus the complex ion behaves as an acid.
Because we are more interested now in describing terms and processes that involve proton
transfers (pH, titration), we will focus on the Bronsted-Lowry definitions of acids and bases.
We will leave consideration of the Lewis model of acids and bases for studying reactions in
organic chemistry.
As you may have noted already in the acid-base reactions above, we use arrows in both
reaction directions to indicate that these are equilibrium processes. Proportions of reagents
and products at equilibrium can be described by an equilibrium constant. The equilibrium
constant discussed here is for the reaction of an acid, HA, with water.
Although water is a reactant in the above reaction and belongs in the equilibrium constant, its
value of 55.6 M in aqueous solution is so large in comparison with the change in water
concentration at equilibrium that we will assume that the value of [H2O] is constant. Using
that assumption, we will define the acid dissociation constant, Ka, to be the following:
Definition of the acid dissociation constant: From the form of the above equation we can see
that stronger acids, those that dissociate to a greater extent, will have larger values
of Ka whereas weaker acids will have smaller values of Ka. A practical range for Ka values
runs from 10⁻¹² for very weak acids to 10¹³ for the strongest acids. Knowing this practical
range of acidity constants will aid in judging how reasonable your answers are when you
calculate values for Ka in problems.
Definition of the base dissociation constant: Stronger bases have larger values of Kb while
weaker bases have smaller values of Kb. Kb's of typical bases in inorganic chemistry tend to
have a range of values between 10⁻¹¹ and 10³. Water can act as both an acid and as a base. For
this reason water is said to be amphiprotic. Water is often incorrectly termed amphoteric. An
amphiprotic species like water can either donate or accept a proton. Amphoteric species can
both donate and accept hydroxide ions, which water cannot. The following reaction, called the
autoionization of water, has the equilibrium constant Kw defined in the manner of Ka and Kb.
The dissociation constant Kw for water is 1 × 10⁻¹⁴ at room temperature (298 K), and tends to
rise with higher temperatures.
Knowing that Kw = 1 × 10⁻¹⁴ is useful because the relationship between Ka and Kb for a
conjugate acid-base pair is Ka × Kb = Kw. Therefore, you can calculate the Ka of the
conjugate acid of a base when given its Kb.
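The relationship Ka × Kb = Kw makes converting between a base's Kb and its conjugate acid's Ka a one-liner. A sketch (the ammonia Kb value used below is a commonly quoted figure, not taken from the text):

```python
KW = 1.0e-14  # autoionization constant of water at 298 K

def ka_of_conjugate_acid(kb):
    """Ka of the conjugate acid of a base whose dissociation constant is Kb,
    from Ka * Kb = Kw."""
    return KW / kb

# Ammonia has Kb of roughly 1.8e-5 (commonly quoted value), so for NH4+:
print(ka_of_conjugate_acid(1.8e-5))  # about 5.6e-10
```

As expected, a moderately strong base has a very weak conjugate acid, and vice versa.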
Module 2E Equilibrium Equations
Water is by far the major constituent and thus it is essentially pure; it's not included in the
equilibrium expression. These expressions are only written for weak bases and acids – not
strong, since K would be very large.
Kb = [NH4+][OH-] / [NH3]
To identify conjugate acid-base pairs, follow the H+ ion through the forward and reverse
reactions. In the reaction above, the ammonia is the base and the ammonium ion is its
conjugate acid. Water is acting as the acid and the hydroxide ion is its conjugate base.
Conjugate pairs ONLY differ by an H+ ion – move the H and move the + charge.
For a weak acid, the Ka expression includes the hydronium ion, H3O+. This recognizes the
role of water as a base. By accepting a proton, the water becomes H3O+. Species like H2O
that can act as an acid or a base are said to be amphiprotic. For sulfurous acid in water:
H2SO3 (aq) + H2O(l) <-> HSO3-(aq) + H3O+(aq);
Ka = [HSO3-][H3O+] / [H2SO3]
Module 2F Strengths of Acids and Bases (The pH and pOH Scales)
Due to the large range of proton concentration values ([H+]) in aqueous solution (typically
from 10^-15 to 10 M), a logarithmic scale of acidity makes the most sense to put the values into
manageable numbers. The pH scale of acidity defines pH as the negative common logarithm
of the concentration of H+:
pH = - log [H+].
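The definition translates directly into a short Python sketch (the concentrations are assumed example values):

```python
import math

def pH(h_conc):
    """pH = -log10 of the hydrogen ion concentration in mol/L."""
    return -math.log10(h_conc)

print(round(pH(1e-7), 2))  # pure water: pH 7.0
print(round(pH(1e-3), 2))  # an acidic solution: pH 3.0
```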
Before proceeding with our discussion on pH, let's review some properties of logarithms
(affectionately called "logs"). A common logarithm is a function that computes what exponent
would be on 10 to obtain the input number. For example, the logarithm of 100 (written "log
100") is 2 because 10^2 = 100. Likewise, log 1,000,000 = 6. To add logs, multiply their
arguments: log 100 + log 1000 = log (100 x 1000) = log (100,000) = 5. To subtract logs,
divide their arguments: log 100 - log 1000 = log (100/1000) = log (0.1) = -1. Given those
simple rules, you can now manipulate logs well enough to do any problem involving logs in
acid-base chemistry. The pH
scale in water is centered around 7, which is called neutral pH. This value is not randomly
selected; rather, it comes from the fact that the [H+] in pure water is 10^-7 (recall that Kw =
10^-14). An acidic solution has a pH value less than 7 because [H+] is greater than that of pure
water. A basic solution has a pH greater than 7 because there is a lower [H+] than that of pure
water (and consequently a larger [OH-]). Chemists use a pOH scale analogous to the pH
acidity scale to gauge hydroxide ion concentrations of aqueous solutions. pOH is defined as the
negative common logarithm of the concentration of OH-: pOH = - log [OH-]. In the pOH
scale, 7 is neutral, less than 7 is basic, and greater than 7 is acidic. A useful relationship
(which you should be able to derive using the definition of Kw) is that pH + pOH = 14. This
formula will allow you to readily convert between values of pH and pOH. A comparison of
the pH and pOH scales is provided. Note that because Kw is constant, the product of [H+] and
[OH-] is always equal to 10^-14.
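The pH + pOH = 14 shortcut is easy to exercise numerically; the hydroxide concentration below is an assumed example:

```python
import math

def pOH(oh_conc):
    """pOH = -log10 of the hydroxide ion concentration in mol/L."""
    return -math.log10(oh_conc)

def pH_from_pOH(poh):
    """Since Kw = [H+][OH-] = 1e-14 at 298 K, pH + pOH = 14."""
    return 14 - poh

poh = pOH(1e-3)          # 0.001 M OH-  →  pOH = 3
print(pH_from_pOH(poh))  # a basic solution, pH above 7
```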
The prefix "p" in front of a symbol means "take the negative log". We have defined pH and
pOH as values to describe the acidic or basic strength of solutions. Now we can talk about
pKa (-log Ka) and pKb (-log Kb) as measures of acidity and basicity, respectively, for
standard acids and bases. An acid with a pKa less than zero is called a strong acid because it
almost completely dissociates in water, giving an aqueous solution a relatively low pH. Acids
with pKa's greater than zero are called weak acids because they only partially dissociate in
water, to make solutions with larger pH's than those strong acids produce at the same
concentration. Similarly, bases with pKb's less than zero are strong bases and bases with
pKb's greater than zero are called weak bases.
Factors Affecting Acidity: Now that we have the tools to be able to describe the acidity and
basicity of compounds with known pK a's, let's analyze trends in acidity and basicity to reach
an intuitive understanding of those trends. First, let's consider the acidity of the halogen
acids -- HF, HCl, HBr, and HI -- collectively abbreviated HX, where X represents the
halogen. From the data in the figure below, you can see that the major factor affecting the
acidity of halogen acids is the strength of the H-X bond. Intuitively, a larger electronegativity
difference should lead to a stronger acid due to the polarization of electrons away from
hydrogen. However, the trend in bond strength is enough to overrule that competing trend in
electronegativity. The smaller the halogen, the closer in size it is to the proton and the greater
the orbital overlap; so HF is the most strongly bonded and most weakly acidic of the halogen
acids. Generalizing that result, we can say that when an H-A bond is strong, the acid is weak.
Experimental confirmation of this postulate comes from the oxyacid series of compounds. An
oxyacid is a molecule of the form AOn(OH)m, where A is a non-metal. Pauling and Ricci
derived the following approximate equation for oxyacid acidity from experimental
observations:
pK a = 8 - 9f + 4n
The variable f is the formal charge on A when all oxygens are singly bound to A. The
variable n represents the number of O atoms bound to A that are not bound to an H. A general
trend, summarized in the Pauling-Ricci rule above, is that the more electron-withdrawing
(more positively charged) the non-metal center, the stronger the acid, due to a weakening of
the O-H bond.
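The Pauling-Ricci estimate is simple enough to apply programmatically. In the sketch below the helper name is hypothetical, and the two example inputs follow the counting rules just described:

```python
def pauling_ricci_pka(f, n):
    """Estimate pKa = 8 - 9f + 4n for an oxyacid AOn(OH)m.
    f: formal charge on A when every O is singly bonded to A.
    n: number of O atoms bonded to A but not bonded to an H."""
    return 8 - 9 * f + 4 * n

# Sulfuric acid, SO2(OH)2: f = +2, n = 2 → estimated pKa = -2 (strong acid)
print(pauling_ricci_pka(2, 2))
# Hypochlorous acid, ClOH: f = 0, n = 0 → estimated pKa = 8 (weak; measured ~7.5)
print(pauling_ricci_pka(0, 0))
```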
In summary, we note the following trend regarding acidity: hydrogens are more weakly bound
to more electronegative groups, and this produces stronger acids. By using the
relationship Kw = Ka * Kb, you should be able to figure out that we need only to discuss the
acidity of compounds to describe basicity. From the above discussion, we can deduce that
bases with weaker conjugate acids are more basic than those with stronger conjugate acids.
Therefore, bases that form stronger bonds to H will have larger K b's and are stronger bases.
Module 2G pH Concept
Under the Brønsted-Lowry definition, both acids and bases are related to the concentration of
hydrogen ions present. Acids increase the concentration of hydrogen ions, while bases
decrease the concentration of hydrogen ions
(by accepting them). The acidity or basicity of something, therefore, can be measured by its
hydrogen ion concentration.
For example, a solution with [H+] = 1 x 10^-7 moles/liter has a pH equal to 7 (a simpler way to
think about pH is that it equals the exponent on the [H+] concentration, ignoring the minus sign).
The pH scale ranges from 0 to 14. Substances with a pH between 0 and less than 7
are acids (pH and [H+] are inversely related - lower pH means higher [H+]). Substances with a
pH greater than 7 and up to 14 are bases (higher pH means lower [H+]). Right in the middle, at
pH = 7, are neutral substances. The relationship between [H+] and pH is shown in the table
below.
[H+]         pH   Examples
1 x 10^0     0    HCl
1 x 10^-4    4    Soda
1 x 10^-5    5    Rainwater
1 x 10^-6    6    Milk
1 x 10^-13   13   Drano®
1 x 10^-14   14   NaOH
The equilibrium equation for the above reaction is as follows.
The key to recognizing weak acids and bases lies in memorizing the strong acids and bases. If
you know these, all other examples must be weak. Just because a molecule contains H does
not make it an acid. The H must be ionizable – like those that are bound to many polyatomic
ions, some monatomic ions, and carboxylic acids. See below for examples:
hydrofluoric acid HF
Note that ethanoic acid is commonly written as the formula in parentheses, but the formula
preceding it actually conveys the bonding pattern specific to carboxylic acids, namely a
carbon both double bonded to one oxygen and single bonded to another, which in turn is
bonded to the hydrogen. Weak bases are less familiar. Ammonia is often used in problems
because it is a weak base and many are familiar with it. If you examine Table 16.4 on page
694, you’ll notice that several of the weak bases listed contain N and many have one H of the
ammonia molecule substituted with an organic fragment. Nitrogen shows its tendency to
form a coordinate covalent bond with an H+ — it's acting as a base.
Besides calculating the pH of weak acid/base systems, sometimes one is asked to calculate the
percent ionization. For some, it gives a more tangible feel for how weak the acid is. The
formula is as follows:
percent ionization = (concentration ionized / initial concentration) x 100%
Understand that the percent ionization of a weak acid or base increases as the solution
becomes more dilute.
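A sketch of the calculation shows the dilution effect numerically. It assumes the usual equilibrium setup Ka = x^2/(C0 - x), solved exactly with the quadratic formula; the Ka value is the familiar textbook figure for acetic acid:

```python
import math

def percent_ionization(ka, c0):
    """Solve Ka = x^2 / (c0 - x) for x = ionized concentration,
    then report 100 * x / c0."""
    # positive root of x^2 + Ka*x - Ka*c0 = 0
    x = (-ka + math.sqrt(ka * ka + 4 * ka * c0)) / 2
    return 100 * x / c0

print(round(percent_ionization(1.8e-5, 0.10), 1))    # ~1.3 %
print(round(percent_ionization(1.8e-5, 0.0010), 1))  # ~12.5 %
```

Diluting the acid a hundredfold raises the percent ionization roughly tenfold, exactly the trend stated above.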
The quantitative idea is derived from the definition of the equilibrium expression and is as
follows:
Ka x Kb = Kw
The product of the ionization constants of an acid and its conjugate base (or vice versa) equals
the ion product of water.
If one dissolves salts in water, frequently the pH shifts, because most common salts contain
one or two ions that are conjugates of either weak acids or bases. The useful summary
statement is this:
Oxyacids: Strength of acid increases with the number of oxygens attached to the central atom.
The strongly electronegative O withdraws electrons towards it and weakens the remaining
O-H bond.
MODULE 3
CHEMICAL KINETICS
_________________________________________________
Module 3A Laws of Thermodynamics
Chemical reactions proceed if the products of the reaction have lower free energies than do
the reactants
ΔU = q + w
ΔHp = Cp(ΔT)p
The second law of thermodynamics
Spontaneous processes are those which increase the entropy of the universe (i.e., system +
surroundings). Entropy is a measure of the randomness of a system.
Both enthalpy and entropy changes are important in determining whether a reaction will occur
or not. A relationship between these two factors can be expressed by a new thermodynamic
term known as Gibbs Free Energy (named after the American physicist J.W. Gibbs [1839-
1903]).
ΔG = ΔH - TΔS
For example: calculate ΔG for the following reaction at 25°C. Will the reaction occur (be
spontaneous)? How do you know?
NH3(g) + HCl(g) → NH4Cl(s)    Also given for this reaction: ΔH = -176.0 kJ; ΔS = -284.8
J·K^-1
Solution
ΔG = ΔH - TΔS
TΔS = (298 K)(-0.2848 kJ·K^-1) = -84.9 kJ
ΔG = -176.0 - (-84.9) = -91.1 kJ. Since ΔG < 0, the reaction will be spontaneous.
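The worked example can be checked with a short script; note the unit conversion of ΔS from J/K to kJ/K before multiplying by T:

```python
def delta_g(dh_kj, ds_j_per_k, t=298.0):
    """Gibbs free energy change: ΔG = ΔH - TΔS (ΔH in kJ, ΔS in J/K)."""
    return dh_kj - t * (ds_j_per_k / 1000.0)

dg = delta_g(-176.0, -284.8)
print(round(dg, 1), "kJ -", "spontaneous" if dg < 0 else "not spontaneous")
```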
The term rate is often used to describe the change in a quantity that occurs per unit of time.
The rate of inflation, for example, is the change in the average cost of a collection of standard
items per year. The rate at which an object travels through space is the distance traveled per
unit of time, such as miles per hour or kilometers per second. In chemical kinetics, the
distance traveled is the change in the concentration of one of the components of the reaction.
The rate of a reaction is therefore the change in the concentration of one of the reactants,
Δ(X), that occurs during a given period of time, Δt.
Rate of Reaction = Δ(X) / Δt
The rate of the reaction between phenolphthalein and the OH- ion isn't constant; it changes
with time. Like most reactions, the rate of this reaction gradually decreases as the reactants are
consumed. This means that the rate of reaction changes while it is being measured.
To minimize the error this introduces into our measurements, it seems advisable to measure
the rate of reaction over periods of time that are short compared with the time it takes for the
reaction to occur. We might try, for example, to measure the infinitesimally small change in
concentration d(X) that occurs over an infinitesimally short period of time dt. The
ratio of these quantities is known as the instantaneous rate of reaction.
Rate = d(X) / dt
The instantaneous rate of reaction at any moment in time can be calculated from a graph of
the concentration of the reactant (or product) versus time. The graph below shows how the
rate of reaction for the decomposition of phenolphthalein can be calculated from a graph of
concentration versus time. The rate of reaction at any moment in time is equal to the slope of a
tangent drawn to this curve at that moment.
The instantaneous rate of reaction can be measured at any time between the moment at which
the reactants are mixed and the reaction reaches equilibrium. Extrapolating these data back to
the instant at which the reagents are mixed gives the initial instantaneous rate of reaction.
Module 3C Different Ways of Expressing the Rate of Reaction
There is usually more than one way to measure the rate of a reaction. We can study the
decomposition of hydrogen iodide, for example, by measuring the rate at which either H2 or
I2 is formed in the following reaction or the rate at which HI is consumed.
2 HI(g) → H2(g) + I2(g)
Experimentally we find that the rate at which I2 is formed is proportional to the square of the
HI concentration at any moment in time.
What would happen if we studied the rate at which H2 is formed? The balanced equation
suggests that H2 and I2 must be formed at exactly the same rate.
What would happen, however, if we studied the rate at which HI is consumed in this reaction?
Because HI is consumed, the change in its concentration must be a negative number. By
convention, the rate of a reaction is always reported as a positive number. We therefore have
to change the sign before reporting the rate of reaction for a reactant that is consumed in the
reaction.
The negative sign does two things. Mathematically, it converts a negative change in the
concentration of HI into a positive rate. Physically, it reminds us that the concentration of the
reactant decreases with time.
What is the relationship between the rate of reaction obtained by monitoring the formation of
H2 or I2 and the rate obtained by watching HI disappear? The stoichiometry of the reaction
says that two HI molecules are consumed for every molecule of H2 or I2 produced. This means
that the rate of decomposition of HI is twice as fast as the rate at which H2 and I2 are formed.
We can translate this relationship into a mathematical equation as follows.
-(1/2) d(HI)/dt = d(H2)/dt = d(I2)/dt
As a result, the rate constant obtained from studying the rate at which H2 and I2 are formed in
this reaction (k) is not the same as the rate constant obtained by monitoring the rate at which
HI is consumed (k').
In the 1930s, Sir Christopher Ingold and coworkers at the University of London studied the
kinetics of substitution reactions such as the following.
They found that the rate of this reaction is proportional to the concentrations of both reactants.
Rate = k(CH3Br)(OH-)
When they ran a similar reaction on a slightly different starting material, they got similar
products.
But now the rate of reaction was proportional to the concentration of only one of the reactants.
Rate = k((CH3)3CBr)
These results illustrate an important point: The rate law for a reaction cannot be predicted
from the stoichiometry of the reaction; it must be determined experimentally. Sometimes, the
rate law is consistent with what we expect from the stoichiometry of the reaction.
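One common experimental route is the method of initial rates: compare two runs in which only one reactant's concentration changes. The data below are made up for illustration, chosen to be consistent with a second-order dependence:

```python
import math

def reaction_order(c1, rate1, c2, rate2):
    """Order x in one reactant, from rate2/rate1 = (c2/c1)^x."""
    return math.log(rate2 / rate1) / math.log(c2 / c1)

# Doubling the concentration quadruples the rate → second order
x = reaction_order(0.10, 4.0e-5, 0.20, 1.6e-4)
print(round(x))  # → 2
```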
Module 3D Order and Molecularity
Some reactions occur in a single step. The reaction in which a chlorine atom is transferred
from ClNO2 to NO to form NO2 and ClNO is a good example of a one-step reaction.
Other reactions occur by a series of individual steps. N2O5, for example, decomposes to
NO2 and O2 by a three-step mechanism.
The steps in a reaction are classified in terms of molecularity, which describes the number of
molecules consumed. When a single molecule is consumed, the step is called unimolecular.
When two molecules are consumed, it is bimolecular.
Reactions can also be classified in terms of their order. The decomposition of N2O5 is a first-
order reaction because the rate of reaction depends on the concentration of N2O5 raised to the
first power.
Rate = k(N2O5)
The decomposition of HI, in contrast, is a second-order reaction because the rate depends on
the concentration of HI raised to the second power.
Rate = k(HI)^2
When the rate of a reaction depends on more than one reagent, we classify the reaction in
terms of the order of each reagent.
The difference between the molecularity and the order of a reaction is important. The
molecularity of a reaction, or a step within a reaction, describes what happens on the
molecular level. The order of a reaction describes what happens on the macroscopic scale. We
44
determine the order of a reaction by watching the products of a reaction appear or the
reactants disappear. The molecularity of the reaction is something we deduce to explain these
experimental results.
Chemical kinetics is the branch of chemistry which addresses the question: "how fast do
reactions go?" Chemistry can be thought of, at the simplest level, as the science that concerns
itself with making new substances from other substances. Or, one could say, chemistry is
taking molecules apart and putting the atoms and fragments back together to form new
molecules. (OK, so once in a while one uses atoms or gets atoms, but that doesn't change the
argument.) All of this is to say that chemical reactions are the core of chemistry.
If Chemistry is making new substances out of old substances (i.e., chemical reactions), then
there are two basic questions that must be answered:
1. Does the reaction want to go? This is the subject of chemical thermodynamics.
2. If the reaction wants to go, how fast will it go? This is the subject of chemical
kinetics.
We can calculate ΔrG° for the reaction 2 H2(g) + O2(g) → 2 H2O(l) from tables of free
energies of formation (actually this one is just twice the free energy of formation of liquid
water). We find that ΔrG° for this reaction is very large and negative, which means that the
reaction wants to go very strongly. A more scientific way to say this would be to say that the
equilibrium constant for this reaction is very large.
However, we can mix hydrogen gas and oxygen gas together in a bulb or other container,
even in their correct stoichiometric proportions, and they will stay there for centuries, perhaps
even forever, without reacting. (If we drop in a catalyst - say a tiny piece of platinum - or
introduce a spark, or even illuminate the mixture with sufficiently high frequency uv light, or
compress and heat the mixture, the mixture will explode.) The problem is not that the
reactants do not want to form the products, they do, but they cannot find a "pathway" to get
from reactants to products.
C(diamond) → C(graphite).
If you calculate ΔrGo for this reaction from data in the tables of thermodynamic properties you
will find once again that it is negative (not very large, but still negative). This result tells us
that diamonds are thermodynamically unstable. Yet diamonds are highly regarded as gem
stones ("diamonds are forever") and are considered by some financial advisors as a good long-
term investment hedge against inflation. On the other hand, if you were to vaporize a diamond
in a furnace, under an inert atmosphere, and then condense the vapor, the carbon would come
back as graphite and not as diamond.
The answer is that thermodynamics is not the whole story in chemistry. Not only do we have
to know whether a reaction is thermodynamically favored, we also have to know whether the
reaction can or will proceed at a finite rate. The study of the rate of reactions is called
chemical kinetics.
We can specify the rate of this reaction by telling the rate of change of the partial pressures of
one of the gases. However, it is convenient to convert these pressures into concentrations, so we
will write our rates and rate equations in terms of concentrations, where square brackets, [ ],
mean concentration in mol/L.
or as
but these are not the same because each molecule of O2 gives two molecules of NO2. To
arrive at an unambiguous definition of reaction rate we define the "reaction velocity," v. The
negative sign tells us that a species is being consumed and the fractions take care of the
stoichiometry, so the definition is unambiguous: any one of the derivatives can be used to
define the rate of the reaction. For the general reaction
aA + bB → cC + dD,
the reaction velocity can be written in a number of different but equivalent ways:
v = -(1/a) d[A]/dt = -(1/b) d[B]/dt = (1/c) d[C]/dt = (1/d) d[D]/dt
As in our previous example, the negative signs account for material that is being consumed in
the reaction and the positive signs account for material that is being formed in the reaction.
The stoichiometry is preserved by dividing the rate of change of concentration of each
substance by its stoichiometric coefficient.
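The sign-and-coefficient convention can be captured in a small Python helper. The rates below are assumed values for the HI decomposition discussed earlier, chosen to be stoichiometrically consistent:

```python
def reaction_velocity(d_conc_dt, coeff, is_reactant):
    """v = -(1/coeff) d[X]/dt for a reactant, +(1/coeff) d[X]/dt for a product."""
    sign = -1.0 if is_reactant else 1.0
    return sign * d_conc_dt / coeff

# 2 HI -> H2 + I2: if [HI] falls at 1.0e-3 M/s, [H2] rises at 0.5e-3 M/s,
# and both observations give the same reaction velocity v
v_from_hi = reaction_velocity(-1.0e-3, 2, True)
v_from_h2 = reaction_velocity(+0.5e-3, 1, False)
print(v_from_hi, v_from_h2)
```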
A rate law is an equation that tells us how fast the reaction proceeds and how the reaction rate
depends on the concentrations of the chemical species involved. A rate law is an equation of
the form
v = k[A]^x [B]^y [C]^z . . . (4)
Equation 4 gives us a first-order differential equation in t because the reaction velocity is
related to a time-derivative of one of the concentrations (as in Equation 3).
The rate law may contain substances which are not in the balanced reaction and may not
contain some things that are in the balanced equation (even on the reactant side). In the rate
law, x, y, z, are small whole numbers or simple fractions and k is called the "rate constant."
The sum of x + y + z + . . . is called the "order" of the reaction.
1. First Order Reactions
In a first order reaction the rate is proportional to the concentration of one of the
reactants. That is,
v = rate = k[B].
The constant, k, in this rate equation is the first order rate constant.
2. Second Order Reactions
In a second order reaction the rate is proportional to concentration squared. For example,
possible second order rate laws might be written as
Rate = k[B]^2
or as
Rate = k[A][B].
That is, the rate might be proportional to the square of the concentration of one of the
reactants, or it might be proportional to the product of two different concentrations.
3. Third Order Reactions
There are several different ways to write a rate law for a third order reaction. One
might have cases where
Rate = k[A]^3,
or
Rate = k[A]^2[B],
or
Rate = k[A][B][C].
We will see later that there are other, more "interesting" rate laws in nature, but a large
fraction of rate laws will fit in one of the above categories.
In order to understand how the concentrations of the species in a chemical reaction change
with time it is necessary to integrate the rate law (which is given as the time-derivative of one
of the concentrations) to find out how the concentrations change over time.
B + . . . . → products.
Then we can write the rate law and integrate it as follows (recall that the derivative is negative
because the concentration of the reactant, B, is decreasing):
-d[B]/dt = k[B], which integrates to [B] = [B]o e^(-kt).
The first order rate law is a very important rate law; radioactive decay and many chemical
reactions follow this rate law, and some of the language of kinetics comes from this law. The
form of Equation 14d is called an "exponential decay." This form appears in many places in
nature. One of its consequences is that it gives rise to a concept called "half-life."
Half-life
The half-life, usually symbolized by t1/2, is the time required for [B] to drop from its initial
value [B]o to [B]o/2.
Using the integrated form of the first order rate law we find that
t1/2 = ln 2 / k,
or
[B] = [B]o (1/2)^(t/t1/2),
which may actually give a little more insight into what is meant by half-life. This equation
demonstrates clearly that the concentration drops by a factor of two for every t1/2 increment in
time.
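The factor-of-two behavior is easy to verify numerically; the rate constant below is an arbitrary assumed value:

```python
import math

def conc_first_order(b0, k, t):
    """Integrated first-order rate law: [B] = [B]o * exp(-k t)."""
    return b0 * math.exp(-k * t)

k = 0.05  # assumed first-order rate constant, s^-1
t_half = math.log(2) / k
# After each successive half-life the concentration halves again
print(round(conc_first_order(1.0, k, t_half), 3))      # → 0.5
print(round(conc_first_order(1.0, k, 2 * t_half), 3))  # → 0.25
```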
If we define the relaxation time τ = 1/k, then one can write the integrated form of the rate
law as
[B] = [B]o e^(-t/τ).
τ is the time required for [B] to drop from [B]o to [B]o/e. Sometimes τ is called the "one
over e" time. Although the half-life is almost always used to describe the decay rate of
radioactive elements, it is common for chemists to talk about the rate of first order processes
in chemistry in terms of the relaxation time.
A typical simple second order reaction, where B is one of the reactants, would look like
-d[B]/dt = k[B]^2.
This equation can be integrated to give
1/[B] = 1/[B]o + kt,
or
[B] = [B]o / (1 + [B]o k t).
We can make a quick check to see if this result fits what we know about the system. At t = 0
we get [B] = [B]o which is correct and at t = infinity we find that [B] = 0 which is also what
we would expect.
Half-life
We can define the half-life of a second order reaction in the same way as in first order
reactions. That is, the half-life, t1/2, is the time required for [B] to fall from [B]o to [B]o/2. Thus
we use Equation 3 to give
2/[B]o = 1/[B]o + k t1/2,
or
k t1/2 = 1/[B]o,
and, finally,
t1/2 = 1 / (k [B]o).
Note that the half-life for a simple second order reaction depends on the initial concentration
and that it is proportional to 1/[B]o. Contrast this to what we know about first order reactions,
where the half-life is independent of the initial concentration. This fact may come in useful
later when we try to find experimental methods for determining rate laws.
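The contrast between the two half-life formulas can be demonstrated directly; the rate constant and concentrations are assumed values:

```python
import math

def t_half_first(k):
    """First order: t1/2 = ln 2 / k, independent of [B]o."""
    return math.log(2) / k

def t_half_second(k, b0):
    """Second order: t1/2 = 1 / (k [B]o), depends on [B]o."""
    return 1.0 / (k * b0)

k = 0.1  # assumed rate constant (units differ between the two orders)
print(t_half_first(k))                               # same at any [B]o
print(t_half_second(k, 1.0), t_half_second(k, 0.5))  # halving [B]o doubles t1/2
```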
Mixed Second Order Rate Equations
A mixed second order rate law equation, where both components A and B are reactants,
would look like
-d[B]/dt = k[A][B]. (7)
Equation 7 has two variables in it (besides time). In order to integrate this equation we need to
know the stoichiometry so that we can tell how [A] depends on [B].
2A + B → P + etc (P = product).
We have a choice as to which component concentration to use for our variable, we can use
[A], [B], or [P].
Let's try [P] as our variable. (The form of the answer will depend on the choice of variable. A
different choice of variable will give an answer that looks different even though it is
algebraically equivalent, see below.)
Let's assume that there is no product present at the beginning of the experiment, that is,
[P]o = 0.
We now need to express [A] and [B] in terms of [P]. We can see that
[A] = [A]o - 2[P]. (10)
(We obtain Equation 10 from the stoichiometry, which says that every time we make one
mole of P we use up two moles of A. That is, each P formed takes away two A's.) Also,
[B] = [B]o - [P], because each P formed takes away only one B.
So our rate equation in terms of the single variable, [P], becomes
d[P]/dt = k([A]o - 2[P])([B]o - [P]),
and then, separating variables,
d[P] / (([A]o - 2[P])([B]o - [P])) = k dt.
The left-hand side can be integrated by partial fractions, or you can look it up in a book of
integral tables.
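If you prefer to avoid the partial-fraction algebra, the rate equation in [P] can be integrated numerically instead. A minimal Euler sketch, with all rate constants and concentrations assumed for illustration:

```python
def integrate_P(k, a0, b0, t_end, dt=1.0e-3):
    """Euler integration of d[P]/dt = k([A]o - 2[P])([B]o - [P])."""
    p, t = 0.0, 0.0
    while t < t_end:
        p += k * (a0 - 2.0 * p) * (b0 - p) * dt
        t += dt
    return p

# 2A + B -> P with B in excess: [P] is limited by [A]o / 2
p_final = integrate_P(k=1.0, a0=0.2, b0=1.0, t_end=50.0)
print(round(p_final, 4))  # approaches 0.1
```

The numerical answer is a useful cross-check on whichever algebraic form of the integrated rate law you derive.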
We have made a big deal out of the fact that you cannot predict the rate law for a chemical
reaction from the balanced equation for that reaction. We now introduce a new class of simple
reactions, called elementary reaction steps, for which the rate law IS determined by the
balanced reaction equation. We also introduce a new term, molecularity, which tells us the
number of molecules involved in an elementary reaction step.
We emphasize that the reaction velocity for each elementary reaction step IS determined by
the reaction step and, later on, we will string a sequence of elementary reaction steps together
to form what is called a mechanism. A reaction mechanism is a detailed (theoretical)
description of how we think the chemical reaction proceeds. That is, it describes our thinking
about which molecule collides with which other molecule to form an intermediate product,
which may go on to react with some other species, and so on, to produce the overall reaction.
The elementary reaction steps must be balanced (as do all chemical reactions). We can usually
tell the difference between an elementary reaction step and a balanced reaction by the fact that
the elementary reaction step will have one or more rate constants associated with it, as we will
see below.
It is probably easiest to describe the elementary reaction steps and their associated rate laws
by just telling you what they are. As we proceed you will become aware of the fact that the
rate law is completely determined by the elementary reaction step you write down. (The
converse is not true. We will see in the examples below that several different elementary steps
may give rise to the same rate law.)
The elementary reaction step
A → B (1)
is unimolecular because there is only one molecule reacting; that is, molecule "A" is reacting.
This unimolecular reaction step implies the rate law
d[B]/dt = k1[A], (2)
or, equivalently,
-d[A]/dt = k1[A]. (3)
In words, these elementary reaction steps say that the molecule, A, spontaneously transforms
into B at some rate k1. The algebraic sign in front of k1 tells whether you are gaining product
or losing reactant depending on whether the concentration in the derivative is increasing or
decreasing. For example, in Equation 2, [B] is increasing, and in Equation 3, [A] is
55
decreasing. (All of the usual rules of stoichiometry still hold in elementary reaction steps. If
you use up some reactant you must gain an equivalent amount of product.)
and/or
(Either one of these may be used, depending on whether we are trying to account for the
disappearance of reactant, A, or the appearance of product, B, in our mechanism for a
particular reaction.)
A unimolecular reaction step can have more than one product, for example,
A → B + C. (7)
The unimolecular process given by Equation 7 implies the same rate law as the reaction in
Equation 1, namely, either Equation 2 or Equation 3.
or
and so on. In the above equations only one product is given, "C." We would get the same rate
laws if there had been two or more products, for example as in,
implies
Termolecular Reaction Steps
Termolecular reaction steps require three molecules coming together at the same time. They
are rare because three-body collisions in the gas phase are rare, but there are cases of
termolecular reactions in the literature. There are many varieties of termolecular reactions and
they may be reversible. A typical termolecular process might be,
A reaction mechanism is a combination of one or more elementary reaction steps which start
with the appropriate reactants and end with the appropriate product(s). As further examples of
the determination of rate laws from proposed mechanisms for chemical reactions, we will
consider two examples of a class of reactions called "chain reactions."
Chain reactions usually involve free radicals. We will show how to deal with chain reactions
by working out several examples. Our first example is the gas phase reaction of hydrogen
with bromine to give HBr. (The temperature must be high enough that bromine is a gas and
not a liquid.)
H2 + Br2 → 2 HBr
Our task is to show that the postulated mechanism will give this rate law. (We will not try to
invent mechanisms for particular reactions in this course.) The postulated mechanism has five
elementary reaction steps:
Our goal is to find the rate law implied by this mechanism and see how it compares to the
experimental rate law, Equation 2. Our first task will be to define the reaction rate. We will
define the reaction rate as rate of production of the product, HBr.
Looking at the mechanism and using only the steps which yield or consume HBr we find that
We see that the rate contains two transient species, Br• and H•, so we set up the equations
which show the rate of change of the concentrations of these two species and apply the steady
state approximation. The two equations are,
and
********************************************************************
MODULE 4
Module 4A Introduction
Organic molecules contain carbon atoms. The carbon atoms are covalently bonded to other
atoms, and various chains of carbon atoms can be found in almost every molecule you will deal
with in this chapter. As you might recall from your introductory class, carbon has four valence
electrons, and therefore will make four bonds in accordance with the octet rule. All non-
carbon-to-carbon bonds will be assumed to be carbon-hydrogen bonds, as hydrogen atoms are
the most commonly found attached atom. Hydrogen has one valence electron, and will make
one covalent bond. Use these facts to draw the structures of the following molecules in the
space provided.
The carbon atom is capable of making single, double, and triple bonds, as well as bonding
with oxygen, nitrogen, chlorine, or bromine. Oxygen has six valence electrons, and will make
two covalent bonds. A single bond and a double bond are both possible for oxygen atoms.
Nitrogen has five valence electrons, and will make three covalent bonds. Single, double, and
triple bonds are all possibilities for nitrogen atoms. Chlorine and bromine each have seven
valence electrons, and will make one covalent bond. The presence of carbon-carbon double or
triple bonds, as well as the presence of oxygen, nitrogen, chlorine, or bromine, is very
significant and worth noting, as it will be extremely important throughout your study of
organic chemistry. Be sure to look for patterns in the bonding when you encounter these
situations. Use your best judgment to draw the following molecules in the spaces provided.
In the study of general chemistry, the empirical formula of a chemical compound is the
simplest whole-number ratio of elements in a compound. An empirical formula makes no
reference to the shape, structure, or absolute number of atoms in a compound. Empirical
formulas are the standard for most ionic compounds, such as calcium chloride, CaCl2, and for
macromolecules like silica, SiO2. The term empirical formula refers to the process of
elemental analysis, a technique of analytical chemistry used to determine the relative percent
composition of a pure chemical substance by element. Much like an empirical formula, a
composition of a pure chemical substance by element. Much like an empirical formula, a
molecular formula contains a whole-number ratio of the elements in a compound. In contrast,
the molecular formula does identify the absolute number of atoms in a compound. However,
no shape or structure determination can be made based on a molecular formula.
Consider the empirical formula CH. From what you already know, there must be one
hydrogen atom for every carbon atom. However, all three of the following molecules satisfy
that requirement, and no structural pattern can be discerned from the empirical formula.
Consider the molecular formula C7H16. From what you know of covalent bonding, carbon
atoms make four bonds, while hydrogen atoms make only one. There are many different ways
to assemble carbons and hydrogens into C7H16, with many possible structures resulting.
Two such structures for C7H16 are found on the following page. As with empirical formulas,
no determination of structure can be made from a molecular formula.
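The reduction from a molecular formula to an empirical formula described above is simply a greatest-common-divisor calculation on the atom counts. A minimal sketch in Python (the function name and the dictionary representation of a formula are illustrative):

```python
from math import gcd
from functools import reduce

def empirical_formula(counts):
    """Reduce a molecular formula (element -> atom count) to the
    simplest whole-number ratio, i.e. the empirical formula."""
    divisor = reduce(gcd, counts.values())
    return {el: n // divisor for el, n in counts.items()}

# C6H6 (benzene) reduces to CH; C7H16 is already in lowest terms.
print(empirical_formula({"C": 6, "H": 6}))    # {'C': 1, 'H': 1}
print(empirical_formula({"C": 7, "H": 16}))   # {'C': 7, 'H': 16}
```

Note that the reverse direction is impossible: the empirical formula alone cannot recover the absolute atom counts.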
Module 4B Structural Formulas
Organic compounds tend to be built according to a general scheme as follows: The carbon
atoms form a “skeleton” that runs through the middle of the molecule. Hydrogen atoms and
any other atoms are attached to the carbon skeleton. One method of drawing organic molecules
involves representing every single atom, with the bonds between them drawn as lines. This is
typically referred to as a structural formula. The structural formula for 1-hexanol, or
C6H13OH, looks like this:
Organic molecules can also be represented in a more compact form, known as abbreviated
structural formulas, also known as condensed structural formulas. This is done by omitting the
bonds between atoms. The same compound, 1-hexanol, could therefore be represented as:
The simplest hydrocarbon is methane, containing only one carbon atom. Since each carbon
atom can form four bonds, the formula of methane must be CH4. The bonding can be
described as using sp3 hybrid orbitals on carbon, giving the molecule a tetrahedral shape.
This three-dimensional shape is not easy to show in two dimensions on a sheet of paper. One
common representation is shown below as structure I. More often structure II is used because
it is easier to draw, especially for more complex molecules. You should recall how to
construct a Lewis dot structure from your study of inorganic chemistry. If a Lewis dot
structure is required, it usually is drawn as in structure III.
It is also possible to have a double or triple bond between the carbon atoms, giving an
unsaturated compound.
For a three carbon compound, we could write the following skeletons (for simplicity, the
bonds to the hydrogen atoms are shown, but not the hydrogen atoms themselves): Since the
arrangement of bonds about each carbon atom is tetrahedral, rather than planar, the two
structures are identical. At this point it would be very helpful for you to make a model of the
three-carbon compound (molecular formula C3H8). Notice that the three carbons are attached
to each other in a line, with no branching of the chain. The drawings above simply show the
same carbon skeleton from different viewpoints. One structure could be converted to another
simply by rotating one part of the molecule about a carbon-carbon bond, relative to the rest of
the molecule. You can see this best by using a model.
For the four-carbon molecule, we can simplify the representation of the carbon skeleton even
further by showing only the carbon atoms and the bonds between them. In looking at this
representation you must remember that each carbon will have other bonds in addition to those
shown. All the other bonds are to hydrogen atoms, and each carbon has a total of four bonds.
The two skeletal structures immediately above are different. If you use models, you can see
that it is not possible to change one to the other simply by rotating one part of the molecule
relative to the other. The only way to convert one to the other is by breaking a carbon-carbon
bond. Notice also that the first structure has four carbons in a single chain. In the second
structure, the longest continuous chain is three carbons, with the fourth carbon attached to the
middle of the chain. We refer to the first structure as a straight chain, and the second as a
branched chain. Since the structures are different, but the molecular formulas identical
(C4H10), this is an example of isomerism. This type of isomerism is known as structural
isomerism. As the number of carbon atoms increases, the number of possible isomers
increases rapidly. For molecules containing only single bonds, the number of structural
isomers ranges from 2 for C4 to 75 for C10.
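The growth in isomer count mentioned above can be tabulated directly. The values below are the known numbers of structural isomers for the saturated alkanes C1 through C10 (a simple lookup, not a computation; the dictionary name is illustrative):

```python
# Known counts of structural isomers for the saturated alkanes CnH2n+2
# (single carbon-carbon bonds only), for n = 1 to 10 carbons.
ALKANE_ISOMERS = {1: 1, 2: 1, 3: 1, 4: 2, 5: 3, 6: 5, 7: 9, 8: 18, 9: 35, 10: 75}

for n in (4, 7, 10):
    print(f"C{n}H{2 * n + 2}: {ALKANE_ISOMERS[n]} structural isomers")
```

The count grows far faster than linearly; enumerating the isomers themselves is a much harder (graph-enumeration) problem and is not attempted here.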
Homologous Series
The straight-chain alkanes (compounds containing all single carbon-carbon bonds) all could
be written with the following general condensed structural formula: CH3(CH2)nCH3.
A series of compounds showing this relationship, namely that members of the series differ
only in the number of CH2 groups, is called a homologous series.
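Members of the homologous series obey the general molecular formula CnH2n+2, so successive members differ by exactly one CH2 unit. A small sketch (the function name is illustrative):

```python
def alkane_formula(n):
    """Molecular formula of the alkane with n carbon atoms: CnH2n+2."""
    c = "C" if n == 1 else f"C{n}"
    return f"{c}H{2 * n + 2}"

# Successive members of the series differ by one CH2 group:
print([alkane_formula(n) for n in range(1, 9)])
# ['CH4', 'C2H6', 'C3H8', 'C4H10', 'C5H12', 'C6H14', 'C7H16', 'C8H18']
```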
Other important physical properties of the alkanes are their low solubilities in water and their
densities. The alkanes have such low solubility in water that we consider them insoluble. This
is because dissolving would require that strong intermolecular forces between water
molecules (due to hydrogen bonding) be broken and replaced by the much weaker van der
Waals’ forces (London forces) between water molecules and non-polar alkane molecules. The
net result of this process is energetically highly unfavorable, so it does not occur to any
significant extent.
All alkanes, regardless of whether they are gas, liquid, or solid, are less dense than water.
Most liquid hydrocarbons have a density of about 0.7 g cm-3. Practical consequences of this
property include the fact that hydrocarbons such as oil float on water. Oil fires cannot be put
out with water, because the burning oil floats on top of water and may be spread more widely.
The following table gives the molecular formula and the condensed structural formula of each
of the first eight straight-chain alkanes, as well as its name according to the IUPAC system.
Module 4D Naming of Alkanes
In the table on the previous page, we saw the names of the first eight straight-chain alkanes.
We also have seen that alkanes containing more than three carbons can have branched chains.
In fact, the number of branched-chain alkanes is much greater than the number of straight-
chain compounds. We will now see how to name these branched-chain alkanes.
The procedure is simple and logical and based on the name (already known) for the longest
continuous chain in the compound. Find the points where the chain branches. The “branches”,
or substituents, are called alkyl groups (from alkane + -yl).
(a) The names of the substituents are derived from the alkane with the same
number of carbons. Thus CH3 - = methyl, CH3CH2- = ethyl, and so on in the
same way that the general name, alkyl group, was derived from alkane.
(b) The substituent name is placed before the name of the longest chain, without a
space between the two parts of the name.
To indicate the position of the substituent, where there is more than one possibility, the
carbons in the longest continuous chain are numbered. The numbering can be started from
either end of the chain. The direction of numbering should be chosen so that when the two
possible directions are compared, the number should be the lowest at the first point where the
numbers differ. The numbers are separated from the rest of the name by hyphens.
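The direction-of-numbering rule above amounts to comparing the two possible locant sets and keeping the one that is lower at the first point of difference. A sketch, assuming substituent positions are supplied as numbers counted from one end of the chain (function name illustrative):

```python
def choose_locants(positions, chain_length):
    """Apply the IUPAC lowest-locant rule: compare the substituent
    locants numbered from each end of the chain, and keep the set
    that is lower at the first point where they differ."""
    forward = tuple(sorted(positions))
    reverse = tuple(sorted(chain_length + 1 - p for p in positions))
    return min(forward, reverse)  # tuple comparison is first-difference

# A methyl group on C2 of a 4-carbon chain: 2 beats 3.
print(choose_locants([2], 4))      # (2,)
# Substituents at C2 and C3 of a 5-carbon chain: (2, 3) beats (3, 4).
print(choose_locants([2, 3], 5))   # (2, 3)
```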
Names of straight-chain alkenes
The presence of double bonds is indicated by changing the ending -ane for an alkane to -ene
for alkenes.
CH3CH=CH2 propene
If the multiple bond can be in more than one position in the molecule, number the position of
the first carbon in the multiple bond, starting from the end of the chain which gives the lowest
number.
CH3CH2CH2CH=CH2 pent-1-ene (or 1-pentene)
The two compounds are closely related structurally; ethanol is like ethane except for the
replacement of an H by an OH. Except for combustion, a characteristic of almost all organic
compounds, the physical and chemical properties of the two compounds are very different.
Structures and names of functional groups
In Table 1, the names of the main functional groups are given in the first column. The
common name of the type of compound in which each occurs appears in column 2.
Systematic names sometimes used within the IUPAC system are given in parentheses below
the common name. The structural formulas in column 3 introduce a new symbol which
simplifies the writing of organic structures. R stands for the group obtained from an alkane
when one hydrogen atom is removed. Recall from Chapter 2 that such a group is
called an alkyl group. The simplest possible group is that derived from methane. The alkyl
group is thus CH3-, or methyl. Thus ROH can represent any alkane in which a hydrogen atom
has been replaced by an OH group.
Notice that the structure of cholesterol above contains two methyl groups and a C8H17 group.
This also is an alkyl group, in this case the octyl group. Representing compounds in this way
is consistent with the observation that the properties of the compound depend much more on
the nature of the functional group than on the hydrocarbon portion of the molecule.
Column 3 of the table is especially important. You should be able to recognize and write
functional groups in both full and condensed forms. Ethyl ethanoate, for example, could be
written in any of the following forms:
In the fourth column are given systematic names and formulas of specific compounds, as well
as common names in parentheses. R represents an alkyl group; R', if appearing in the same
structure as R, represents a second alkyl group. It may be the same or different from the first.
Notice that for amines, two different structures have been written. These differ in the number
of carbon-containing groups attached to the nitrogen. The number of groups can be 1, 2, or 3,
and they can be any combination of alkyl or aryl (aromatic) groups.
Naming compounds containing functional groups
To name simple compounds with a single functional group, only a slight extension of the rules
for naming hydrocarbons is needed.
Rule 1: For a compound with a straight chain of carbons, name it as for an alkane, but replace
the final –e with an ending to indicate the functional group.
For aldehydes, carboxylic acids and amides, the functional groups can only occur at the ends
of carbon chains, so they do not need numbers to indicate their positions. For alcohols and
ketones, the groups may occur at different places in the chain, so numbers may be necessary
to distinguish different isomers. For alcohols, this would apply to all compounds except
methanol and ethanol. The number usually is placed immediately before the suffix indicating
the functional group. Numbering starts from the end which gives the lowest number for the
functional group.
Examples: CH3CH2CHOHCH3 = butan-2-ol
CH3CH2CH2COCH2CH3 = hexan-3-one (not hexan-4-one)
In many textbooks in the U.S., these compounds would be named 2-butanol and 3-hexanone,
respectively.
(a) Esters are structurally related to carboxylic acids, with an alkyl group replacing
the acidic hydrogen. They are named by placing the alkyl group first, then
naming the acidic portion by replacing the –e of the alkane with the same
number of carbon atoms by –oate.
(b) Amines are named as substituted amines, regardless of the number of alkyl
groups attached to nitrogen.
CH3CH2NH2 = ethylamine
CH3CH2NHCH3 = ethylmethylamine (alphabetic order of alkyl
groups)
(c) Halogenoalkanes are named as substituted alkanes
CH3CHBrCH3 = 2-bromopropane
Rule 2: Branching in the main carbon chain is indicated by numbering the position of the
alkyl substituent(s). Numbering starts from the carbon of a functional group if it is at the end
of a chain. If the functional group is not at the end of the main chain, number from the end
that will give the functional group the lowest possible number.
(CH3)2CHCH2COOH = 3-methylbutanoic acid
CH3CHOHCH(CH3)2 = 3-methylbutan-2-ol
Isomerism due to functional groups
We know that isomers refer to compounds with the same molecular formula, but different
structures. We have encountered these already in hydrocarbons, with different arrangements
of the carbon atoms, and in compounds containing functional groups, when a substituent such
as a halogen atom or hydroxyl group can be attached to different positions in the chain.
In some cases, we have compounds with different functional groups but the same formula. For
example, the following two compounds, an acid and an ester, are isomers, both with the
molecular formula
MODULE 5
ELECTROCHEMISTRY
_____________________________________________
H2O → H2 + 1/2 O2
Faraday’s first law of electrolysis states that the masses m of deposited or dissolved
substances are proportional to the quantity of electricity q passed through the electrolyte. The
second law states that the masses of different substances deposited or dissolved as a result of
the passage of the same quantity of electricity through the electrolyte are proportional to the
chemical equivalents A of the substances. It follows from the second law that the same
quantity of electricity, called the Faraday constant F, is required for the deposition of the same
gram-equivalent weight of different substances. Mathematically, Faraday’s laws may be
written as one equation m = (A/F)q = kq, where the coefficient k = A/F is called the
electrochemical equivalent.
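Faraday's combined law m = (A/F)q can be turned into a short calculation. Here the chemical equivalent A is taken as the molar mass divided by the ion's charge number z; the function and variable names are illustrative:

```python
F = 96485.0  # Faraday constant, coulombs per mole of electrons

def deposited_mass(molar_mass, z, charge):
    """Faraday's laws as m = (A/F) * q, where the chemical
    equivalent is A = molar_mass / z for an ion of charge number z,
    and q is the charge passed in coulombs."""
    A = molar_mass / z
    return (A / F) * charge

# Copper deposited from Cu2+ (M = 63.55 g/mol) by one mole of
# electrons (96485 C): half a mole of Cu atoms.
print(round(deposited_mass(63.55, 2, 96485.0), 3))  # 31.775
```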
Both of Faraday’s laws are exact if the ions of the electrolyte carry all the electricity passed
through the electrolyte. Deviations from the laws are observed in certain cases; such
deviations may be associated with electrochemical side reactions that are not taken into
account—for example, the liberation of gaseous hydrogen during the electro-deposition of
some metals—or with partial electron conduction—for example, during the electrolysis of
certain alloys.
Second Law: For a given quantity of electricity, the quantity of substance produced is
proportional to its equivalent weight.
The quantity of electricity or charge contained in a current running for a specified time can
be calculated:
Q = I x t
where I = current (amperes) and t = time (seconds).
The Faraday constant, F, is the quantity of electricity carried by one mole of electrons.
F = 96,484 C mol-1
Q = n(e) x F
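The two relations Q = I x t and Q = n(e) x F chain together to give the moles of electrons delivered by a steady current; a quick sketch with illustrative values:

```python
F = 96485.0   # Faraday constant, coulombs per mole of electrons

current = 2.0    # amperes (illustrative value)
time = 3600.0    # seconds, i.e. one hour (illustrative value)

Q = current * time     # Q = I x t, charge in coulombs
n_electrons = Q / F    # Q = n(e) x F, rearranged for n(e)

print(Q)                       # 7200.0 coulombs
print(round(n_electrons, 4))   # 0.0746 mol of electrons
```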
F = Faraday constant = 96,500 C mol-1
E = Q x V
A redox reaction is a reaction which involves a change in oxidation state of one or more
elements. When a substance loses its electron, its oxidation state increases, thus it is oxidized.
When a substance gains an electron, its oxidation state decreases, thus it is reduced. Zn(s) is
being oxidized into Zn2+, and Cu2+ is being reduced into Cu(s). Zn(s) is known as a reducing
agent and Cu2+ is known as an oxidizing agent. In a redox reaction, oxidation-reduction must
occur simultaneously. This means that when one substance is being oxidized, the other
substance must be reduced.
Redox reactions, or oxidation-reduction reactions, have a number of similarities to acid-base
reactions. Fundamentally, redox reactions are a family of reactions that are concerned with the
transfer of electrons between species. Like acid-base reactions, redox reactions are a matched
set -- you don't have an oxidation reaction without a reduction reaction happening at the same
time. Oxidation refers to the loss of electrons, while reduction refers to the gain of electrons.
Each reaction by itself is called a "half-reaction", simply because we need two (2) half-
reactions to form a whole reaction. In notating redox reactions, chemists typically write out
the electrons explicitly:
This half-reaction says that we have solid copper (with no charge) being oxidized (losing
electrons) to form a copper ion with a plus 2 charge. Notice that, like the stoichiometry
notation, we have a "balance" between both sides of the reaction. We have one (1) copper
atom on both sides, and the charges balance as well. The symbol "e-" represents a free
electron with a negative charge that can now go out and reduce some other species, such as in
the half-reaction:
Here, two silver ions (silver with a positive charge) are being reduced through the addition of
two (2) electrons to form solid silver. The abbreviations "aq" and "s" mean aqueous and solid,
respectively. We can now combine the two (2) half-reactions to form a redox equation:
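Combining the two half-reactions requires scaling them so that the electrons released by the oxidation exactly equal the electrons consumed by the reduction. This is a least-common-multiple calculation; a sketch (function name illustrative):

```python
from math import lcm

def electron_multipliers(e_oxidation, e_reduction):
    """Scale factors that balance the electrons released by the
    oxidation half-reaction against those consumed by the
    reduction half-reaction."""
    n = lcm(e_oxidation, e_reduction)
    return n // e_oxidation, n // e_reduction

# Cu(s) -> Cu2+(aq) + 2e-   releases 2 electrons
# Ag+(aq) + e- -> Ag(s)     consumes 1 electron
ox, red = electron_multipliers(2, 1)
print(ox, red)  # 1 2, giving Cu + 2Ag+ -> Cu2+ + 2Ag
```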
Galvanic Cell
Galvanic cells have a spontaneous chemical reaction that generates electricity. One solution
must be oxidized while the other is reduced. The net reaction is composed of two
half-reactions, an oxidation reaction and a reduction reaction. The free energy change for the
reaction is -150 kJ per mol Cd.
Generally oxidation-reduction or redox reactions take place in electrochemical cells. There are
two types of electrochemical cells. Spontaneous reactions occur in galvanic (voltaic) cells;
nonspontaneous reactions occur in electrolytic cells. Both types of cells contain
electrodes where the oxidation and reduction reactions occur. Oxidation occurs at the
electrode termed the anode and reduction occurs at the electrode called the cathode.
The anode of an electrolytic cell is positive (cathode is negative), since the anode attracts
anions from the solution. However, the anode of a galvanic cell is negatively charged, since
the spontaneous oxidation at the anode is the source of the cell's electrons or negative charge.
The cathode of a galvanic cell is its positive terminal. In both galvanic and electrolytic cells,
oxidation takes place at the anode and electrons flow from the anode to the cathode. Galvanic
or Voltaic Cells. The redox reaction in a galvanic cell is a spontaneous reaction. For this
reason, galvanic cells are commonly used as batteries. Galvanic cell reactions supply energy
which is used to perform work. The energy is harnessed by situating the oxidation and
reduction reactions in separate containers, joined by an apparatus that allows electrons to
flow. A common galvanic cell is the Daniell cell, shown below.
Electrolytic Cells
Many oxidation-reduction reactions occur spontaneously, giving off energy. An example
involves the spontaneous reaction that occurs when zinc metal is placed in a solution of
copper ions as described by the net ionic equation shown below.
The zinc metal slowly "dissolves" as its oxidation produces zinc ions which enter into
solution. At the same time, the copper ions gain electrons and are converted into copper atoms
which coat the zinc metal or settle to the bottom of the container. The energy produced
in this reaction is quickly dissipated as heat, but it can be made to do useful work by a device
called an electrochemical cell. This is done in the following way. An electrochemical cell is
composed of two compartments or half-cells, each consisting of an electrode dipped in a
solution of electrolyte. These half-cells are designed to contain the oxidation half-reaction and
reduction half-reaction separately as shown below (Figure 1).
The half-cell, called the anode, is the site at which the oxidation of zinc occurs as shown
below.
During the oxidation of zinc, the zinc electrode will slowly dissolve to produce zinc ions
(Zn+2), which enter into the solution containing Zn+2 (aq) and SO4-2 (aq) ions. The half-cell,
called the cathode, is the site at which reduction of copper occurs as shown below.
When the reduction of copper ions (Cu+2) occurs, copper atoms accumulate on the surface of
the solid copper electrode. The reaction in each half-cell does not occur unless the two
half-cells are connected to each other.
Recall that in order for oxidation to occur, there must be a corresponding reduction reaction
that is linked or "coupled" with it. Moreover, in an isolated oxidation or reduction half-cell, an
imbalance of electrical charge would occur, the anode would become more positive as zinc
cations are produced, and the cathode would become more negative as copper cations are
removed from solution. This problem can be solved by using a "salt bridge" connecting the
two cells as shown in the diagram below. A "salt bridge" is a porous barrier which prevents
the spontaneous mixing of the aqueous solutions in each compartment, but allows the
migration of ions in both directions to maintain electrical neutrality. As the oxidation-
reduction reaction occurs, cations ( Zn+2) from the anode migrate via the salt bridge to the
cathode, while the anion, (SO4)-2, migrates in the opposite direction to maintain electrical
neutrality.
The two half-cells are also connected externally. In this arrangement, electrons provided by
the oxidation reaction are forced to travel via an external circuit to the site of the reduction
reaction. The fact that the reaction occurs spontaneously once these half cells are connected
indicates that there is a difference in potential energy. This difference in potential energy is
called an electromotive force (emf) and is measured in terms of volts. The zinc/copper cell has
an emf of about 1.1 volts under standard conditions.
Electrochemical reactors are called electrolysis cells. The cells consist of a container, the cell
body; two electrodes, the anode and cathode, where the electrochemical reactions occur;
and an electrolyte. Some cells have a diaphragm or membrane between the anode and
cathode compartments to separate the anodic and cathodic products. While general purpose
electrolysis cells are available, cells are usually custom designed for a particular process. The
electrolysis cells used to produce the various chemicals and metals cited above differ
significantly from one another. It is the responsibility of the electrochemical engineer in
industry to simultaneously manage electrical consumption and chemical production. He or
she must apply relevant scientific and engineering principles to design, construct, and operate
a process in an economical, safe, and environmentally conscious manner. Improved
understanding of scientific principles and the application of new materials can lead to more
efficient cell designs and processes. The constant evolution of technology provides
challenging and rewarding careers to engineers and scientists in a range of disciplines.
separations include desalting of brackish water and of seawater, demineralization of food
products, separation of amino acids, and recovery of resources from wastewater streams. In
these processes, the key component is a membrane that permits some chemicals to go through
but not others, thus separating them.
Electrochemical Engineering: While each of the processes above use cells designed for a
specific purpose, they are all bound together by fundamental principles that govern the
operation. Known collectively as the principles of electrochemical engineering, these
concepts include transport processes, current and potential distribution phenomena,
thermodynamics, kinetics, scale-up, sensing, control, and optimization. With use of
quantitative methods, many salient features of cell operations can be modeled in concise
mathematical form. Thus, it has been increasingly possible to predict cell behavior without the
cost of an extensive empirical (trial and error) program of preliminary study. These
principles cut across all electrochemical industries and can be applied with equal success
whether they are used to design a plant to produce chemicals or to design a battery or a fuel
cell for use in an electric car. While these concepts are well established in the electrolytic
industry, their use in other areas has blossomed in the last few years. Concepts of current and
potential distribution within cells, originally conceived by electroplaters, are now being
applied throughout the field.
*******************************************************************
MODULE 6
GASEOUS STATE
The search for an understanding of the nature of the gaseous state of matter began early in the
history of science and continues even today. The early work is a classic example of the
deductive method of reasoning that led from the laws or habits of gases to a theory of the
gaseous state – the kinetic molecular theory. In its original form, the kinetic molecular theory
explained the behavior of gases in terms of the kinetic energy of motion of minute spheres
without internal structure. As knowledge of the internal structure of molecules increased, the
theory was modified accordingly. These modifications enable us to understand how molecules
hold energy.
Extensive observations of gas behavior reveal that all gases have certain properties in
common. These include the ability to expand or contract readily, and the ability to diffuse or
to pass through other gases. These properties are of considerable theoretical and practical
interest and have been studied extensively.
Robert Boyle, in 1662, measured the change in volume with the change in pressure while
keeping the amount of air and the temperature constant. The data are plotted in a graph.
Comparison of the data showed that the volume decreases as the pressure increases. Similar
data can be obtained for any kind of gas; they show that the qualitative effect of a decrease in
volume when the pressure is increased is common to all gases. A further study of the data in
column 3 of Table 1 reveals that the product of the pressure times volume is a constant value,
i.e. PV = k --------(1) (amount and temperature constant), or, for a given quantity
of gas at two different pressures, P1 and P2:
Table 1: Change in volume with change in pressure for a given amount of gas at
constant temperature.
Pressure        Volume (litres)        PV (= k)
0.200           3.00                   0.600
0.500           1.20                   0.600
1.000           0.60                   0.600
1.500           0.40                   0.600

P1V1 = k = P2V2, or V1 / V2 = P2 / P1 --------(2)

[Graph: volume (litres) plotted against pressure for the data in Table 1.]
“The volume that a gaseous substance occupies is inversely proportional to the pressure
under which it is measured, provided that the temperature and the amount of gas are
held constant.”
The significance of the phrase “inversely proportional” may be better understood by dividing
both sides of the equation (1) by P to obtain the equation in the following form:
V = k (1/P)
This shows in symbols that the volume is proportional to the reciprocal of the pressure, that is,
inversely proportional to the pressure. It means that as P increases, V decreases and vice
versa. ‘k’ is the proportionality constant. The value of k however depends upon the amount
and kind of gas, upon the temperature, and of course, upon the units used for P and V.
Solution: Since the temperature and amount of gas remain constant, the increase in volume
must be accompanied by a corresponding decrease in pressure. We can find the final pressure
by multiplying the initial pressure by a ratio of the two volumes. To obtain the correct answer,
this ratio must be such that the final pressure is less than the initial pressure. This means that
the smaller volume must be in the numerator, the larger volume in the denominator.
Pressure is a property that determines the direction of gas (or mass) flow, just as temperature
determines the direction of heat flow. Unless otherwise restrained, a gas (and liquids or solids)
tends to move from a region of higher pressure to one of low pressure. In fluids, (gases and
liquids), the pressure at a given point is the same in all directions.
Experiments show that all gases expand when the temperature is raised (if the pressure is kept
constant). To make this qualitative statement quantitative, experiments must be performed to
show how much the volume of a given quantity of gas changes for a measured change in
temperature at constant pressure. The results of four such experiments are given in Table.2.
Table 2: Change in volume of gases with change in temperature, for constant mass and
constant pressure.
Temperature   A: Volume of 1 g    B: Volume of 1 g    C: Volume of 0.5 g   D: Volume of 0.5 g
(°C)          O2 at 1500 torr     O2 at 2500 torr     O2 at 1500 torr      SO2 at 1500 torr
              (litres)            (litres)            (litres)             (litres)
-33.0         0.312               0.187               0.156
+7.0          0.364               0.218               0.182
+47.0         0.416               0.250               0.208                0.104
+87.0         0.468               0.281               0.234                0.117
+127.0        0.520               0.312               0.260                0.130
By trying various mathematical operations with these data, we find that a graph of volume
against temperature gives, for each case, a straight line as shown below.
[Graph: volume (litres) plotted against temperature (°C) for A = 1 g O2 at 1500 torr,
B = 1 g O2 at 2500 torr, C = 0.5 g O2 at 1500 torr, and D = 0.5 g SO2 at 1500 torr.]
The slopes and intercepts of these lines differ according to the amount of gas used (A
compared to C), the (constant) pressure under which the measurements are carried out
(compare A with B), and the kind of gas used (C and D). The curious fact emerges, however,
that if the curves are extended (extrapolated) to lower and lower temperatures, all four meet at
the same point on the temperature axis, at -273°C (more precisely, -273.15°C). This means that
if we could lower the temperature of
a gas to -273°C, the gas should occupy zero volume, regardless of the kind of gas, the weight
of it present, or the pressure under which it was measured. Of course, such a contraction to
zero volume could never occur. Before such a low temperature was reached, the gas would
have changed to a liquid or a solid.
We can take advantage of this uniformity of gas behavior by changing the zero of the
temperature scale to -273°C. On this new absolute temperature scale, zero of temperature
corresponds to the apparent zero of gas volume. The volume of a gas is now said to be
directly proportional to the absolute temperature, provided the quantity of gas and its
pressure remain constant. This is a statement of Charles’ Law. In symbols, V = kT --------(4).
Expression (4) is an equation for a straight line of zero intercept as shown in the graph. The more
common absolute temperature scale, known as the Kelvin scale, maintains the size of the degree
the same as the size of the Celsius degree, but changes the zero point by 273, so that
0 K = -273°C, 273 K = 0°C, 373 K = 100°C, etc.
A useful form of equation (4) for the same quantity of gas at two different temperatures, but at
the same pressure, is V1 / V2 = T1 / T2--------(5)
Problem: 10 litres of gas in a large balloon at 0°C is heated, and the balloon is allowed to
expand at constant pressure to a temperature of 200°C. What will be the volume of the
balloon at this temperature?
Solution: Because the amount of gas and its pressure remain constant, the increase in absolute
temperature must be accompanied by a corresponding increase in volume. We can find the
final volume by multiplying the initial volume by a ratio of the absolute temperatures,
arranged such that the final volume will be larger than the initial volume. This means that the
higher absolute temperature must be in the numerator; the lower absolute temperature must be
in the denominator of this ratio. i.e. V Final =V Initial xT abs.Final /T abs.Initial = 10 x 473 / 273 =
17.3 litres.
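The balloon calculation above can be sketched directly; temperatures are converted to the absolute scale by adding 273, as in the worked solution (function name illustrative):

```python
def charles_final_volume(v1, t1_celsius, t2_celsius):
    """Charles' law, V1/V2 = T1/T2, with temperatures converted
    to the absolute (Kelvin) scale by adding 273."""
    return v1 * (t2_celsius + 273) / (t1_celsius + 273)

# The balloon problem: 10 litres at 0 deg C, heated to 200 deg C
# at constant pressure.
print(round(charles_final_volume(10, 0, 200), 1))  # 17.3 litres
```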
We can combine the two laws for the behavior of ideal gases into a single relationship using
the three variables – pressure, volume and temperature – for a given amount of an ideal gas:
P1V1 / T1 = P2V2 / T2
This relationship can be called the combined gas laws. It is valid only when the temperature is
given in Kelvins. We can use this relationship for any calculation in which a given amount of
gas undergoes a change in conditions.
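The combined gas law can be rearranged to solve for any one of the six variables. A sketch solving for the final volume, under illustrative conditions (doubling the pressure while doubling the absolute temperature should leave the volume unchanged):

```python
def combined_gas_law_v2(p1, v1, t1, p2, t2):
    """Solve P1V1/T1 = P2V2/T2 for V2; temperatures in kelvins."""
    return p1 * v1 * t2 / (t1 * p2)

# 5.0 L at 1.0 atm and 273 K, then 2.0 atm at 546 K:
# the two changes cancel, so the volume stays 5.0 L.
print(combined_gas_law_v2(1.0, 5.0, 273.0, 2.0, 546.0))  # 5.0
```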
Another way to interpret the combined gas laws is to say that for a given amount of gas, PV /
T is a constant. The value of this constant depends on the amount of gas in a sample. We can
put this interpretation in the form of an equation. It is called the ideal gas equation and
generally is written as PV = nRT where n is the usual symbol for amount in moles and R is a
constant called the Universal Gas Constant. While T must be given in kelvins and n is almost
always given in moles, the units of P and V may vary, so the value of R depends on the units
used for P and V. A common choice is litres for volume and atmospheres for pressure, in which
case R = 0.0821 L atm mol⁻¹ K⁻¹. The ideal gas equation provides a complete description of an
ideal gas.
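Using the quoted value R = 0.0821 L atm mol⁻¹ K⁻¹, a short sketch recovers the familiar molar volume of a gas at STP (the function name is my own):

```python
R = 0.0821  # universal gas constant in L atm mol^-1 K^-1, as quoted in the text

def ideal_gas_volume(n_mol, t_k, p_atm):
    """V = nRT / P from the ideal gas equation PV = nRT."""
    return n_mol * R * t_k / p_atm

# One mole at STP (273 K, 1 atm) gives the molar volume, about 22.4 litres.
print(round(ideal_gas_volume(1, 273, 1), 1))  # 22.4
```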
An ideal gas is defined as a gas for which both the volume of molecules and the forces
between the molecules are so small that they have no effect on the behavior of the gas.
Although no such ideal gas exists, the ideal gas equation can be used to describe the behavior
of a real gas. All gases are real gases, and real gases behave like ideal gases under many ordinary
conditions. Only at low temperatures and high pressures do real gases show significant non-ideal
behavior. For now, let us assume that gases are close to ideal and the ideal gas equation
applies.
Early in the 19th century, the Italian Chemist Amedeo Avogadro proposed a very simple but
profound hypothesis relating the volume of a gas to the number of gas particles. Although
this hypothesis resolved many of the problems that puzzled scientists of the time, it gained
acceptance only decades later. Avogadro's law states that equal volumes of gases at the same
temperature and pressure contain an equal number of particles.
Avogadro’s law contains 2 important points worth noting. First, it states that all gases show
the same physical behavior. Second, the law tells us that a gas with a larger volume must
consist of a greater number of particles. This is quite reasonable. As long as the pressure and
temperature of a gas do not change, the only way to change the volume is by changing the
number of gas particles.
Because n, the number of moles, is directly proportional to the number of particles, the
relationship expressed by Avogadro’s law can be written as V = k n where n is the number of
moles and k is the proportionality constant – Avogadro’s law constant. The volume of one
mole of gas is called the molar volume, whose value is 22.4 litres at STP (273 K and 1 atm).
The English Chemist John Dalton was among the first few scientists to consider mixtures of
gases even though he did not have the kinetic-molecular theory of gases available to help him
formulate and develop his ideas. After experimenting with gases and gas mixtures, he
proposed that the particles of different gases in a mixture act independently in exerting
pressures upon the walls of the container. In other words, he concluded that each gas in a
mixture exerts the same pressure that it would if it were present alone at the same temperature.
The pressure exerted by each component of a mixture of gases is called the partial pressure of
that component. Dalton’s law of partial pressure states that the sum of the partial
pressures of all the components in a gas mixture is equal to the total pressure of the gas
mixture.
PT = pa + pb + pc + …
In this equation, PT is the total pressure of a gas mixture, pa is the partial pressure of
component a, pb is the partial pressure of component b and so on. The law is true only when
the component gases do not react chemically with each other.
A common laboratory situation that requires the use of Dalton’s law is encountered when a
gas is collected by bubbling it through water. As each gas bubble travels upward through the
water, some water evaporates into it, so by the time it escapes from the liquid, the bubble
contains a mixture of the gas and water vapour. The total pressure of the mixture in the
collection bottle is PT = PA + PH2O. If we wish to know the partial pressure of gas A, we will
have to subtract the partial pressure of water vapour from the total pressure: PA = PT - PH2O.
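The water-vapour subtraction can be sketched as follows (the sample readings are hypothetical, not from the text):

```python
def dry_gas_pressure(p_total, p_water_vapour):
    """Dalton's law for a gas collected over water: PA = PT - PH2O."""
    return p_total - p_water_vapour

# Hypothetical readings: 760 torr total pressure, 24 torr water vapour pressure.
print(dry_gas_pressure(760, 24))  # 736 torr of the dry gas
```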
The partial pressure of water vapour depends on the temperature. The general formula to
calculate the partial pressure of a component from a mixture of gases is given below:
Partial pressure of a component = mole fraction of the component × total pressure, where the
mole fraction of a component = number of moles of the component / total number of moles of all
the components in the mixture. Therefore, the partial pressure of a component = total pressure ×
number of moles of the component / total number of moles of all the components in the
mixture.
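The mole-fraction formula above can be sketched in Python (the gas names and amounts are illustrative only):

```python
def partial_pressures(moles_by_gas, total_pressure):
    """Partial pressure of each component = mole fraction x total pressure."""
    n_total = sum(moles_by_gas.values())
    return {gas: n * total_pressure / n_total for gas, n in moles_by_gas.items()}

# Hypothetical mixture: 2 mol N2 and 1 mol O2 at a total pressure of 3.0 atm.
pp = partial_pressures({"N2": 2.0, "O2": 1.0}, 3.0)
print(pp)  # {'N2': 2.0, 'O2': 1.0}
```

The partial pressures sum back to the total pressure, as Dalton's law requires.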
All gases have the same average kinetic energy at the same temperature. Therefore, from the
kinetic energy equation, KE = ½mv², we can see that, if we compare the velocities of the
molecules of two gases, the lighter molecules will have a greater velocity than the heavier
ones. For example, calculations show that the velocity of a hydrogen molecule is 4 times the
velocity of an oxygen molecule.
Due to their molecular motion, gases have the property of diffusion, the ability of two or more
gases to mix spontaneously until they form a uniform mixture. Two large flasks, one
containing reddish brown bromine vapours and the other dry air are connected by a side tube.
When the stopcock between the flasks is opened, the bromine and air will diffuse into each
other. After standing a while, both flasks will contain bromine and air.
A gas expands to fill its container even if the container already has gas in it. This process is
called diffusion. A similar process takes place when we make a small hole in the wall of a
container that holds a gas. The gas passes through the hole by a process called effusion. The
rate of both effusion and diffusion depend on the velocity of the gas molecules. For both
processes, the rate increases as velocity increases.
Thomas Graham, a Scottish chemist, observed that the rate of effusion or diffusion was
dependent on the density of a gas. This observation led to Graham's law of effusion or
diffusion: the rates of effusion or diffusion of two gases at the same temperature and
pressure are inversely proportional to the square roots of their densities or molar
masses.
Rate of diffusion of gas A / rate of diffusion of gas B = √(DB / DA) = √(molar mass of B / molar
mass of A).
Problem: Find the relative rates of diffusion of hydrogen and oxygen gas at the same
temperature and pressure.
Solution: A simple way to approach this problem is to write the inverse proportionality
between the rate and the square root of the molar mass, using a form that is convenient for two
sets of conditions.
Rate of H2 / rate of O2 = √(M O2 / M H2) = √(32 / 2) = 4
The rate of diffusion of hydrogen is therefore 4 times faster than the rate of diffusion of oxygen
at the same temperature.
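The Graham's-law ratio can be checked numerically, using 2.016 and 32.00 g/mol for H2 and O2 (the function name is my own):

```python
import math

def relative_diffusion_rate(molar_mass_a, molar_mass_b):
    """Graham's law: rate_A / rate_B = sqrt(M_B / M_A)."""
    return math.sqrt(molar_mass_b / molar_mass_a)

# Hydrogen (2.016 g/mol) versus oxygen (32.00 g/mol): roughly 4 times faster.
print(round(relative_diffusion_rate(2.016, 32.00), 2))  # 3.98
```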
MODULE 7
ENVIRONMENTAL CHEMISTRY
Introduction
All the external factors that affect an organism can collectively be defined as its environment. These factors
may be other living organisms or nonliving variables, such as water, soil, climate, light, and
oxygen.
The environment is never static. Physical forces continuously change the surface of the earth
through weather, the action of waves and natural phenomena such as volcanoes. At the same
time they introduce gases, vapor and dust into the atmosphere. Living organisms also play a
dynamic role through respiration, excretion and ultimately death and decay, recycling their
constituent elements through the environment. Just as the familiar substances of our physical
universe are divided into solids, liquids and gases, for convenience our physical environment
can be divided into the atmosphere, the geosphere, the hydrosphere, the biosphere, the
anthroposphere, and all the fauna and flora.
Module 7A The Atmosphere
The atmosphere is the gaseous envelope that surrounds the solid body of the planet. Although
it has a thickness of more than 1100 km about half its mass is concentrated in the lower 5.6
km. The atmosphere is a protective blanket which nurtures life on the Earth and protects it
from the hostile environment of outer space. It is the source of carbon dioxide for plant
photosynthesis and of oxygen for respiration. It transports water from the oceans to land, thus
acting as the condenser in a vast solar-powered still, and it serves a vital protective function,
absorbing harmful ultraviolet radiation from the sun and stabilizing Earth's temperature.
Module 7B The Hydrosphere
The hydrosphere is the layer of water that, in the form of the oceans, covers approximately
70.8 percent of the surface of the earth; over 97 percent of this water is in the oceans. It occurs
in all spheres of the environment: in the oceans
as a vast reservoir of saltwater, on land as surface water in lakes and rivers, underground as
groundwater, in the atmosphere as water vapor, in the polar icecaps as solid ice, and in many
segments of the anthrosphere such as in boilers or municipal water distribution systems. It is
an essential part of all living systems and is the medium from which life evolved and in which
life exists. It carries energy and matter through the various spheres of the environment. It
leaches soluble constituents from mineral matter and carries them to the ocean, or leaves them
as mineral deposits some distance from their sources. It carries plant nutrients from soil into the
bodies of plants by way of plant roots. It absorbs solar energy in the oceans; this energy is
carried as latent heat when water evaporates from the oceans and is released inland. This
release of latent heat provides a large fraction of the energy transported from equatorial
regions toward Earth's poles and powers massive storms.
Module 7C The Geosphere
The geosphere is that part of the Earth upon which humans live and from which they extract
most of their food, minerals, and fuels. It is divided into layers, which include the solid, iron-rich
inner core, the molten outer core, the mantle, and the lithosphere, which consists of the upper
mantle and the crust. Environmental science is most concerned with the lithosphere,
which extends to depths of 100 km and comprises two shells: the crust and the upper mantle.
The crust (the earth’s outer skin) is the layer that is accessible to humans and is extremely thin
compared to the diameter of the earth, ranging from 5 to 40 km thick.
Module 7D The Biosphere
The Biosphere is the earth’s relatively thin zone of air, soil, and water that is capable of
supporting life, ranging from about 10 km into the atmosphere to the deepest ocean floor. Life
in this zone depends on the sun’s energy and on the circulation of heat and essential nutrients.
The biosphere is largely contained by the geosphere and hydrosphere in the very thin layer
where these environmental spheres interface with the atmosphere. It strongly influences bodies
of water, producing biomass required for life in the water and mediating oxidation-reduction
reactions in the water. It is involved in the weathering processes that break down rocks in the
geosphere and convert rock matter to soil. The biosphere is mainly responsible for plant
photosynthesis, which fixes solar energy and carbon from atmospheric CO2 in the form of
high-energy biomass, represented as carbohydrates.
The terms fauna and flora are collective names given to animals and plants respectively.
There is a continuous interaction between the various sections of the environment and the
flora and fauna. An assembly of mutually interacting organisms and their environment in
which materials are interchanged in a largely cyclical manner is known as an ecosystem. The
environment in which a particular organism lives is called its habitat. All parts of the
environment are subjected to drastic change due to human overuse of natural resources.
Over the last two and a half centuries, the industrial revolution has changed the face of the planet by
consuming natural resources at an alarming rate, especially fossil fuels. Every year natural resource
consumption rises as the human population grows and standards of living rise. The
following sections consider possible environmental consequences accompanying the
overconsumption of natural resources (fossil fuels, forest wood, water, land, and energy) by humans.
Fossil Fuel: Fossil Fuels, which include petroleum, coal, and natural gas, are energy-rich
substances that have formed from long-buried plants and microorganisms. They provide most
of the energy that powers modern industrial society. The gasoline that fuels our cars, the coal
that powers many electrical plants, and the natural gas that heats our homes are all fossil fuels.
Fossil fuels are largely composed of hydrocarbons which are formed from ancient living
organisms that were buried under layers of sediment millions of years ago. These fuels are
extracted from the earth’s crust, and refined into suitable fuel products, such as gasoline,
heating oil, and kerosene. Some of these hydrocarbons may also be processed into plastics,
chemicals, lubricants, and other non-fuel products. The most commonly used fossil fuels are
petroleum, coal, and natural gas. Once extracted and processed, fossil fuel can be burned for
direct uses, such as to power cars or heat homes, or it can be combusted for the generation of
electrical power. Within the last century, the amount of carbon dioxide in the atmosphere has
increased dramatically, largely because of the practice of burning fossil fuels. This has
resulted in an increase in global temperature. The consequences of such an increase in
temperature may well be dangerous. Sea levels will rise, completely inundating a number of
low-lying island nations and flooding many coastal cities. Many plant and animal species may
be driven to extinction, agricultural regions will be disrupted, and the
frequency of droughts is likely to increase.
Forest Wood: Forests are very important for maintaining ecological balance and provide
many environmental benefits. In addition to timber and paper products, forests provide
wildlife habitat, prevent flooding and soil erosion, help provide clean air and water, and
contain tremendous biodiversity. Forests are also an important defense against global climate
change. Through photosynthesis, forests produce life-giving oxygen and consume carbon
dioxide, the compound most responsible for global warming, thereby reducing its effects.
Forests provide habitat for a wide variety of plants and animals and perform many
other important functions that affect humans. The forest canopy (the treetops) and root
systems provide natural filters for the water we use from lakes and rivers. When it rains, the
forest canopy intercepts and redistributes precipitation that could otherwise cause flooding and
erosion, the wearing away of topsoil. Some of the precipitation flows down the trunks as stemflow,
the rest percolates through the branches and foliage as throughfall. The canopy is also able to
capture fog, which it distributes into the vegetation and soil. Forests also increase the ability
of the land to store water.
Soil: Soil, a mixture of mineral, plant, and animal materials, is essential for most plant growth
and is the basic resource for agricultural production. In the process of developing the land and
clearing away the vegetation that holds water and soil in place, erosion has devastated soils
worldwide. The rapid deforestation taking place in the tropics is especially damaging because
the thin layer of soil that remains is fragile and quickly washes away when exposed to the
heavy tropical rains. Globally, agriculture accounts for 28 percent of the nearly 2 billion
hectares of soil that have been degraded by human activities; overgrazing is responsible for 34
percent; and deforestation is responsible for 29 percent.
Water: Clean freshwater resources are essential for drinking, bathing, cooking, irrigation,
industry, and for plant and animal survival. Due to overuse, pollution, and ecosystem
degradation the sources of most freshwater supplies—groundwater (water located below the
soil surface), reservoirs, and rivers—are under severe and increasing environmental stress.
Over 95 percent of urban sewage in developing countries is discharged untreated into surface
waters such as rivers and harbors. About 65 percent of the global freshwater supply is used in
agriculture and 25 percent is used in industry. Freshwater conservation therefore requires a
reduction in wasteful practices like inefficient irrigation, reforms in agriculture and industry,
and strict pollution controls worldwide.
Much environmental damage also results from pollution. Polluting industries include mining, power generation, and chemical
production. Other major sources of pollution include automobiles and agricultural fertilizers.
In developing countries, deforestation has had particularly devastating environmental effects.
Many rural people, particularly in tropical regions, depend on forests as a source of food and
other resources, and deforestation damages or eliminates these supplies. Forests also absorb
many pollutants and water from extended rains; without forests, pollution increases and
massive flooding further decreases the usability of the deforested areas. Poor land
management and increasing population are factors that promote increased irrigation, improper
cultivation or over cultivation, and increased numbers of livestock. These events alter the land
and the soil, diminish the resources, and increase the chances of desertification.
Over the last few years, urbanization of rural areas has increased. As agriculture, traditional
local services, and small-scale industry give way to modern industry and commerce, the city
draws on the resources of an ever-widening area for its own sustenance and for goods to be
traded or processed into manufactures. Urbanization is among the most significant factors
aggravating environmental degradation. In the following subunit we briefly discuss the
impacts of urbanization on the environment.
Most air pollution comes from one human activity: burning fossil fuels—natural gas, coal, and
oil—to power industrial processes and motor vehicles. This results in the emission of harmful
chemical compounds such as carbon dioxide, carbon monoxide, nitrogen oxides, sulfur
dioxide, and tiny solid particles—including lead from gasoline additives—called particulates.
Various volatile organic chemicals (VOCs), generated from incompletely burned fuels, also
enter the air. Carbon dioxide is one of the greenhouse gases which contribute significantly to
global warming. Sulfur dioxide and nitrogen oxide emissions are the principal causes of acid
rain in many parts of the world. Sulfur dioxide and nitrogen oxides emitted into the
atmosphere, are absorbed by rain to form sulphuric acid and nitric acid. These acids are bad
for the lungs and attack anything made of limestone, marble, or metal. Smog is a type of air
pollution produced when sunlight acts upon motor vehicle exhaust gases to form harmful
substances such as ozone (O3), aldehydes and peroxyacetylnitrate (PAN). Before the
automobile age, most smog came from burning coal. Burning gasoline in motor vehicles is the
main source of smog in most regions today.
Powered by sunlight, oxides of nitrogen and volatile organic compounds react in the
atmosphere to produce photochemical smog. Ozone in the lower atmosphere is a poison—it
damages vegetation, kills trees, irritates lung tissues, and attacks rubber. Smog spoils views
and makes outdoor activity unpleasant. The effects of smog are even worse for the very young,
the very old, and people who suffer from asthma or heart disease. Smog can cause breathing
difficulties, headaches and dizziness. In extreme cases, smog can lead to mass illness and
death, mainly from carbon monoxide poisoning.
Pollution is changing Earth’s atmosphere so that it lets in more harmful radiation from the
Sun. The temperature increase, known as global warming, is predicted to affect world food
supply, alter sea level, make weather more extreme, and increase the spread of tropical
disease. As a chemistry teacher, you are expected to contribute your part to the solution of the
current globally burning issue of atmospheric pollution. You should be able to describe the
composition of the atmosphere, the major contributors to atmospheric pollution and the way
these pollutants are accumulated in the atmosphere. In this unit, we shall discuss the chemistry
of the atmosphere and the major pollutants of the atmosphere, describe the way these
pollutants are formed and accumulated in the atmosphere and the threat posed by the
pollutants.
Earth’s atmosphere: is a layer of gases surrounding the planet Earth and retained by the
Earth’s gravity. It contains roughly 78% nitrogen, 21% oxygen, 0.93% argon, 0.04% carbon
dioxide, and trace amounts of other gases, in addition to water vapor. This mixture of gases is
commonly known as air. The atmosphere protects life on earth by absorbing solar UV
radiation and reducing temperature extremes between day and night. The gases ozone, water
vapor, and carbon dioxide are only minor components of the atmosphere, but they exert a
huge effect on the Earth by absorbing radiation. Ozone in the upper atmosphere filters out the
ultraviolet light below about 360 nm that is dangerous for living things. In the troposphere
ozone is an undesirable pollutant. It is toxic to animals and plants, and it also damages
materials. The atmosphere slowly becomes thinner and fades away into space. Therefore,
there is no definite boundary between the atmosphere and outer space. Seventy five percent of
the atmosphere’s mass is within 11 km of the planetary surface.
The four main layers of the atmosphere, the troposphere, stratosphere, mesosphere and
thermosphere, have been defined based on temperature.
The Troposphere is the region nearest the earth’s surface. This is the region between the
surface of the earth and 7 km altitude at the poles and 17 km at the equator with some
variation due to weather factors. Air temperature drops uniformly with altitude at a rate of
approximately 6.5° Celsius per 1000 meters. The top of the troposphere is reached at an average
temperature of -56.5°C. In the stratosphere, temperature remains constant with height in the first
9 kilometers (called an isothermal layer) and then increases with height. Above the stratosphere, in the region
from 50 to about 80 km, the temperature again decreases in the mesosphere. The atmosphere
reaches its coldest temperatures (about -90°C) at the top layer of the mesosphere (a height of
about 80 km). The upper atmosphere is characterized by the presence of significant levels of
electrons and positive ions. Because of the rarefied conditions, these ions may exist in the
upper atmosphere for long periods before recombining to form neutral species. At altitudes of
approximately 50 km and up, ions are so prevalent that the region is called the ionosphere.
Ultraviolet light is the primary producer of ions in the ionosphere. In darkness, the positive
ions slowly recombine with free electrons. The process is more rapid in the lower regions of
the ionosphere where the concentration of species is relatively high. Atmospheric temperature
increases again in the thermosphere; temperature in this layer can be as high as 1200°C. The
boundaries between these regions are named the tropopause, stratopause, mesopause, and
thermopause. The average temperature of the atmosphere at the surface of Earth is 14 °C.
Passenger jets normally fly near the top of the troposphere at altitudes of 10 to 12 km, and the
world altitude record for aircraft is 37.65 km roughly in the middle of the stratosphere.
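The lapse-rate figure above (about 6.5°C per 1000 m) can be sketched numerically (the function name is my own; it applies only within the troposphere):

```python
def troposphere_temperature(surface_temp_c, altitude_m, lapse_rate_c_per_km=6.5):
    """Tropospheric air temperature drops roughly linearly with altitude."""
    return surface_temp_c - lapse_rate_c_per_km * altitude_m / 1000

# Starting from the 14 degree C surface average, near an 11 km tropopause:
print(round(troposphere_temperature(14, 11000), 1))  # -57.5
```

This lands close to the average tropopause temperature of -56.5°C quoted above.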
Excited species are formed when atoms or molecules absorb energetic electromagnetic
radiation in the UV or visible regions of the spectrum. Electronically excited molecules, free
radicals, and ions consisting of electrically charged atoms or molecular fragments are the three
relatively reactive and unstable species encountered in the atmosphere. They are strongly
involved in atmospheric chemical processes.
Free radicals are atoms or groups of atoms with unpaired electrons. Such species may be
produced by the action of energetic electromagnetic radiation on neutral atoms or molecules.
The strong pairing tendencies of the unpaired electrons make free radicals highly reactive, and
they are involved in most significant atmospheric chemical phenomena. The hydroxyl radical,
HO•, is the single most important reactive intermediate species in atmospheric chemical
processes. It is formed by several mechanisms. At higher altitudes it is produced by the photolysis
of water:
H2O + hν → HO• + H
In the relatively unpolluted troposphere, hydroxyl radical is produced as the result of the
photolysis of ozone:
O3 + hν → O* + O2
O* + H2O → 2HO•
Hydroxyl radical is removed from the troposphere by reaction with methane or carbon
monoxide:
CH4 + HO• → H3C• + H2O
CO + HO• → CO2 + H
The hydrogen atom produced in the second reaction reacts with O2 to produce hydroperoxyl
radical,
H + O2 → HOO•
which in turn may react with another hydroperoxyl or hydroxyl radical,
HOO• + HOO• → H2O2 + O2
HOO• + HO• → H2O + O2
or undergo reactions that regenerate hydroxyl radical:
HOO• + NO → NO2 + HO•
The atmosphere is slightly acidic because of the presence of a low level of carbon dioxide,
which dissolves in atmospheric water droplets and dissociates slightly:
CO2(aq) + H2O ⇌ H+ + HCO3-
In terms of pollution, however, strongly acidic HNO3 and H2SO4 formed by the atmospheric
oxidation of N oxides, SO2, and H2S are much more important because they lead to the
formation of damaging acid rain.
Basic species are relatively uncommon in the atmosphere. Particulate calcium oxide,
hydroxide, and carbonate can get into the atmosphere from ash and ground rock, and can react
with acids, as in the following reaction:
CaO(s) + H2SO4(aq) → CaSO4(s) + H2O
The most important basic species in the atmosphere is gas-phase ammonia, NH3. The major
source of atmospheric ammonia is the biodegradation of nitrogen-containing biological matter
and the bacterial reduction of nitrate.
Ammonia is the only water-soluble base present at significant levels in the atmosphere, which
makes it particularly important as a base in the air. Dissolved in atmospheric water droplets, it
plays a strong role in neutralizing atmospheric acids:
NH3 + HNO3 → NH4NO3
NH3 + H2SO4 → NH4HSO4
In addition to O2, the upper atmosphere contains oxygen atoms, O; excited oxygen molecules,
O2*; and ozone, O3. Atomic oxygen, O, is stable primarily in the thermosphere, where the
atmosphere is so rarefied that the three-body collisions necessary for the chemical reaction of
atomic oxygen seldom occur. Atomic oxygen is produced by a photochemical reaction:
O2 + hν → O + O
At altitudes exceeding about 80 km, the average molecular weight of air is lower than the
28.97 g/mole observed at sea level because of the high concentration of atomic oxygen. This
condition has divided the atmosphere into a lower section with a uniform molecular weight
(homosphere) and a higher region with a nonuniform molecular weight (heterosphere).
Molecular oxygen and excited oxygen atoms (O*) are produced by the photolysis of
atmospheric ozone:
O3 + hν → O2 + O*
Oxygen ion, O+, which may be produced by ultraviolet radiation acting upon oxygen atoms, is
the predominant positive ion in some regions of the ionosphere. It may react with molecular
oxygen or nitrogen to form other positive ions:
O+ + O2 → O2+ + O
O+ + N2 → NO+ + N
Stratospheric ozone is formed by the three-body reaction
O + O2 + M → O3 + M
in which M is another species, such as a molecule of N2 or O2, which absorbs the excess
energy given off by the reaction and enables the ozone molecule to stay together. In addition
to undergoing decomposition by the action of ultraviolet radiation, stratospheric ozone reacts
with atomic oxygen, hydroxyl radical, and NO:
O3 + O → 2O2
HO• + O3 → HOO• + O2
NO + O3 → NO2 + O2
The HO• radical is regenerated from HOO• by the reaction
HOO• + O → HO• + O2
D) Reactions of Atmospheric Nitrogen
In the region above 105 km in the ionosphere, a plausible sequence of reactions by which NO+
is formed is the following:
N2 + hν → N2+ + e-
N2+ + O → NO+ + N
In the lowest region of the ionosphere, which extends from approximately 50 km in altitude to
approximately 85 km, NO+ is produced directly by ionizing radiation:
NO + hν → NO+ + e-
E) Atmospheric Carbon Dioxide
Although only about 0.035% (350 ppm) of air consists of carbon dioxide, it is the atmospheric
“nonpollutant” species of most concern. Chemically and photochemically, however, it is a
comparatively insignificant species because of its relatively low concentrations and low
photochemical reactivity. The one significant photochemical reaction that it undergoes, and a
major source of CO at higher altitudes, is the photo-dissociation of CO2 by energetic solar UV
radiation in the stratosphere:
CO2 + hν → CO + O
F) Atmospheric Water
The water vapor content of the troposphere is normally within a range of 1–3% by volume
with a global average of about 1%. However, air can contain as little as 0.1% or as much as
5% water. The percentage of water in the atmosphere decreases rapidly with increasing
altitude. The cold tropopause serves as a barrier to the movement of water into the
stratosphere. Thus, little water is transferred from the troposphere to the stratosphere, and the
main source of water in the stratosphere is the photochemical oxidation of methane:
CH4 + 2O2 → CO2 + 2H2O
The water thus produced serves as a source of stratospheric hydroxyl radical as shown by the
following reaction:
H2O + hν → HO• + H
Air Pollution can be defined as the addition of harmful substances to the atmosphere resulting
in damage to the environment, human health, and quality of life. Air pollution causes
breathing problems and promotes cancer. It harms plants, animals, and the ecosystems in
which they live. Some air pollutants return to Earth in the form of acid rain and snow, which
corrode statues and buildings, damage crops and forests, and make lakes and streams
unsuitable for fish and other plant and animal life. Especially damaging are the pollutants that
result from the use of combustion as a source of energy: oxides of sulfur, oxides of nitrogen,
and carbon monoxide.
There are a number of ways of classifying air pollutants. Most commonly they are classified
on the basis of (1) differences in their physical or chemical characteristics, (2) their origin, (3)
the nature of the response they elicit, or (4) their legal status.
Based on differences in their physical or chemical characteristics
Aerosols:- are tiny particles dispersed in gases and include both liquid and solid particles. Air
pollution terminology relating to atmospheric aerosols includes dusts, fog, fumes, hazes,
mists, particulate matter, smog, smoke, and soot.
Gases and vapors:- are composed of widely separated freely moving molecules which will
expand to fill a larger container and exert a pressure in all directions. A substance is a true gas
if it is far removed from the liquid state (i.e. the temperature of the substance is above its
critical point). A vapor is a substance in the gaseous state which is not far from being a liquid
(i.e. it can be condensed to a liquid relatively easily).
Human activities such as burning coal and petroleum products (gasoline, kerosene, fuel oil,
etc.), driving a car, and industrial activities, such as manufacturing products or generating
electricity, are among the major sources of air pollutants. The generation of energy through the
combustion of fossil fuels produces plenty of water and carbon dioxide, which contributes to
global warming. When coal, gasoline, and similar fuels are burned, the hydrocarbons and
other impurities in them are oxidized. The sulfur of the pyrite that remains in coal oxidizes to
sulfur dioxide, an irritating gas with a harsh, acrid odor. The oxides formed under this
condition combine with water vapor in the air to form acids, which return to the ground as
acid rain. Powered by sunlight, oxides of nitrogen and volatile organic compounds react in
the atmosphere to produce photochemical smog. Smog contains ozone, a form of oxygen gas
made up of molecules with three oxygen atoms rather than the normal two. In the lower
atmosphere, ozone is a poison: it damages vegetation, kills trees, irritates lung tissues, and
attacks rubber. The severity of smog is determined by measuring the ozone level in the smog.
When the ozone level is high, other pollutants, including carbon monoxide, are usually
present at high levels as well. The very young, the very old, and people who suffer from
asthma or heart disease, are more seriously affected by smog. Smog may cause headaches or
dizziness and can cause breathing difficulties. In extreme cases, it can lead to mass illness and
death, mainly from carbon monoxide poisoning. Still another pollutant is carbon monoxide,
CO, a product of the incomplete combustion of carbon or organic compounds such as the
hydrocarbons of gasoline; it cannot be seen and produces no sense of irritation. This gas is
known as the silent killer because it is odorless, tasteless, and invisible. Its
major symptom is a drowsiness sometimes accompanied by headache, dizziness, and nausea.
CO is primarily a pollutant of cities and usually fluctuates with flow of traffic.
Rainwater which was considered to be the purest form of water available in the past, is now
known to be often contaminated by pollutants in the air. In the presence of atmospheric
moisture, gases such as sulfur dioxide and oxides of nitrogen, resulting from industrial
emissions, turn into droplets of acid suspended in the air, known as acid rain. These airborne
acids are bad for the lungs and attack anything made of limestone, marble, or metal. Forests
and lakes that are far away from industrial activities may be damaged by acid rain resulting
from pollutants that may be carried by winds in the troposphere and descend in acid form,
usually as rain or snow. Leaves of plants are burned and lakes will be too acidic to support
fish and other living things due to acid precipitation.
Ozone Depletion: Certain man-made chemicals released into the air, notably the
chlorofluorocarbons (CFCs), attack atmospheric ozone. Scientists are finding that under this assault the protective ozone
layer in the stratosphere is thinning.
Global Warming: The energy that lights and warms Earth comes from the Sun. Most of this
energy comes as short-wave radiation. Earth’s surface, in turn, releases some of this heat as
long-wave infrared radiation. Much of the emitted infrared radiation goes back out to space,
but a portion remains trapped in Earth’s atmosphere. Certain gases in the atmosphere,
including water vapor, carbon dioxide, and methane, provide the trap. Absorbing and
reflecting infrared waves radiated by Earth, these gases conserve heat as the glass in a
greenhouse does and are thus known as greenhouse gases. As the atmosphere becomes richer
in these gases, it becomes a better insulator, retaining more of the heat provided to the planet
by the Sun. The net result: more heat is received from the Sun than is lost back to space, a
phenomenon known as the "greenhouse effect". Because of this greenhouse effect, the average
surface temperature of the Earth is maintained at a relatively comfortable 15°C. Were this not
the case, the surface temperature would average around -18°C. The problem with global
warming is that man is adding to and changing the levels of the greenhouse gases and is
therefore enhancing this warming.
Carbon dioxide (CO2) is the gas most significantly enhancing the greenhouse effect. Plant
respiration and the decomposition of organic material release more than 10 times as much CO2
as human activities do, but these fluxes were generally in balance before the industrial
revolution. Since then, atmospheric amounts have increased drastically due to
combustion of fossil fuel (oil, natural gas and coal) by heavy industry and other human
activities, such as transport and deforestation. Other factors slow the warming, but not to the
same degree.
Possible consequences of global warming include:
i) The world is expected to have more extreme weather, with more rain during wet periods, longer droughts, and more powerful storms.
ii) Melting of the polar ice caps, leading to a rise in sea level. Such a rise would flood coastal cities, force people to abandon low-lying islands, and completely inundate coastal wetlands. Diseases like malaria may become more common in the regions of the globe between the tropics and the polar regions, called the temperate zones.
iii) Climate change may bring extinction for many of the world's plant species, and for animal species that are not easily able to shift their territories as their habitat grows warmer.
Properties of Water
Water is a vitally important substance in all parts of the environment. It covers about 70% of
Earth’s surface and occurs in all spheres of the environment. It is an essential part of all living
systems. Water carries energy and matter through various spheres of the environment. It
carries plant nutrients from soil into the bodies of plants by way of plant roots. The properties
of water would best be understood by considering the structure and bonding of the water
molecule. In a single water molecule, two hydrogen atoms are bonded covalently to an
oxygen atom. The three atoms are arranged in a V-shaped structure with a bond angle of about 105°.
Because of its bent structure and the fact that the oxygen atom attracts electrons more strongly
than hydrogen atoms, a water molecule behaves like a dipole having opposite electrical
charges at either end. The water dipole may be attracted to either positively or negatively
charged ions, as with Na+ and Cl- during the dissolution of NaCl. Water also has the ability to
form hydrogen bonds. A hydrogen bond is a special type of bond that can form between the
partially positively charged hydrogen atoms in one water molecule and the partially
negatively charged oxygen atom in another water molecule. Hydrogen bonds hold water
molecules together strongly and also help to hold some solute molecules or ions in
solution.
Dissolved Gases in Water: Natural water systems contain a number of gases dissolved in
them. Among these gases O2 and CO2 are vital for aquatic animals and plants. For example O2
is essential for fish and CO2 for photosynthetic algae. Some gases in water can also cause
problems, such as the death of fish from bubbles of nitrogen formed in the blood.
Oxygen in Water: Most of the elemental oxygen that we find dissolved in water comes from
the atmosphere and also from the photosynthetic action of algae. As many kinds of aquatic
organisms require oxygen for their existence, water bodies should contain appreciable level of
dissolved oxygen. Dissolved oxygen can decrease due to different reasons. Part of the oxygen
coming from algal photosynthesis during the day for example is used up by the algae itself as
part of their metabolic processes. Because of this, dissolved oxygen contribution through algal
photosynthesis is not that efficient. The degradation of biomass from dead algae and
other organic matter also consumes dissolved oxygen. Therefore, the ability of a body of water
to reoxygenate itself by contact with the atmosphere is an important characteristic.
Carbon Dioxide in Water: Carbon dioxide is present in virtually all natural waters and
wastewaters and is the most important weak acid in water. The CO2 in water comes from the
dissolution of atmospheric CO2 and from the microbial decay of organic matter. Carbon dioxide
and its ionization products have an extremely important influence upon the chemistry of water.
Many minerals are deposited as salts of the carbonate ion. Although dissolved CO2 is often
represented as H2CO3, just a small fraction of the dissolved CO2 is actually present as H2CO3.
Therefore, to make a distinction, non-ionized carbon dioxide in water is designated simply
as CO2.
Suspended solids - Sources and impacts: Solids can be dispersed in water in both suspended
and dissolved forms. Although some dissolved solids may be perceived by the physical senses,
they fall more appropriately under the category of chemical parameters. Solids suspended in
water may consist of inorganic substances such as clay and silt or organic particles such as
plant fibers, algal and bacterial remains. Because of the filtering capacity of the soil,
suspended material is seldom a constituent of ground water. Domestic wastewater usually
contains large quantities of suspended solids that are mostly organic in nature. Suspended
materials in water make the water aesthetically unpleasant and provide adsorption sites for
chemical and biological agents. Biological degradation of suspended organic solids results in
objectionable by-products. Biologically active (live) suspended solids may include disease
causing organisms as well as organisms such as toxin-producing strains of algae. Suspended
solids could be divided into filterable residues and non-filterable residues. Suspended solids,
where such material is likely to be organic and/or biological in nature, are an important
parameter of wastewater. The suspended-solids parameter is used to measure the quality of
the wastewater influent, to monitor several treatment processes, and to measure the quality of
the effluent. Regulators have set a maximum suspended-solids standard of 30 mg/L for most
treated wastewater discharges.
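The 30 mg/L figure above lends itself to a trivial compliance check. A minimal sketch; the function name and the configurable limit are illustrative, not taken from any regulation text:

```python
def meets_tss_standard(tss_mg_per_l, limit=30.0):
    """Check a treated-wastewater sample against the 30 mg/L
    suspended-solids limit quoted in the text. The limit is a
    parameter because actual standards vary by jurisdiction."""
    return tss_mg_per_l <= limit

print(meets_tss_standard(25.0))  # True
print(meets_tss_standard(42.5))  # False
```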
Turbidity: Sources and impacts: Turbidity is the measure of the extent to which light is
either absorbed or scattered by suspended particles in water. Absorption and scattering are
known to be influenced by both the size and surface characteristics of the suspended material.
Therefore, turbidity cannot be a direct quantitative measure of suspended solids. For example,
a small pebble in a glass of water would not produce any turbidity. However, if this pebble
were crushed into small particles of colloidal size, it would produce a measurable amount of
turbidity, although the mass of the pebble would not change. Turbidity in surface waters results
from soil erosion. Household and industrial wastewaters such as soaps, detergents and
emulsifying agents produce stable colloids that result in turbidity. Turbidity makes drinking
water aesthetically displeasing and the colloidal materials associated with turbidity provide
adsorption sites for chemicals that may be harmful or cause undesirable tastes and odors and
for biological organisms that may be harmful to health.
Color and its sources: Pure water has no color. However, due to foreign materials, natural
water is often colored. The tannins, humic acid, and humates taken up by water from
filterable suspended organic debris such as leaves, weeds, or wood impart a yellowish-brown
color to water. Non-filterable substances such as iron oxides cause reddish water. Industrial
wastes from textile and dyeing operations, pulp and paper production, food processing,
chemical production, and mining, refining, and slaughterhouse operations may add substantial
coloration to water in receiving streams. In general part of the water coloration coming from
filterable suspended matter is said to be apparent color and that contributed by dissolved
solids, which remain after removal of filterable suspended matter, is known as true color.
Similar to turbid water, colored water is aesthetically unacceptable to the general public.
Highly colored water is unsuitable for laundering, dyeing, papermaking, beverage
manufacturing, and dairy production. Therefore, colored water is less marketable both for
domestic and industrial use.
Taste and odor
The taste of water is the flavor that water gives us when we put it into our mouth, and the
quality of water that we perceive by our sense of smell is its odor. Consumers may attribute a
wide variety of tastes and odors to water. Substances that produce odor in water will almost
invariably impart a taste as well. However, the converse is not true, as there are many mineral
substances that produce taste but not odor. Organic materials, or their biological
decomposition products, are known to produce both taste and odor problems in water. Principal among
these are the reduced products of sulfur, which impart a "rotten egg" odor and taste. Also,
certain species of algae secrete an oily substance that may result in both taste and odor. The
combination of two or more substances neither of which would produce taste or odor by itself
may sometimes result in taste or odor problems. Alkaline minerals impart a bitter taste to
water, while metallic salts may give a salty or bitter taste. Taste and odor make water
aesthetically displeasing to consumers. Some of the substances that impart bad taste and odor
may be carcinogenic.
Temperature: Lower temperatures favor slower rates of biological activity. Provided that essential nutrients
are available, biological activity roughly doubles with a temperature increase of approximately 10 °C. With an
increase in metabolic rates, organisms that are efficient at food utilization and reproduction
flourish, while other species decline and are perhaps eliminated altogether. Algal growth is
often accelerated in warm water and can become a problem when cells cluster into algae mats.
Natural secretion of oils by the algae in the mats and the decay products of dead algae cells
can result in taste and odor problems. Higher-order species, such as fish, are affected
dramatically by temperature and by dissolved oxygen levels, which are a function of
temperature. Game fish generally require cooler temperatures and higher dissolved-oxygen
levels. Temperature changes affect the reaction rates and solubility levels of chemicals. Most
chemical reactions involving dissolution of solids are accelerated by an increase in
temperature. The solubility of gases, on the other hand, decreases at elevated temperatures.
Thus, with an increase in temperature the level of dissolved oxygen will decrease. This is
an undesirable situation, since the biological oxidation of organics in streams depends on
an adequate supply of dissolved oxygen.
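The "doubles per approximately 10 °C" relationship described above is the classical Q10 temperature coefficient with Q10 = 2. A minimal sketch; the function name and default value are illustrative:

```python
def rate_multiplier(delta_t_c, q10=2.0):
    """Factor by which biological activity changes for a temperature
    change of delta_t_c degrees Celsius, using the Q10 rule.
    q10=2.0 reflects the text's 'doubles per ~10 degC' approximation."""
    return q10 ** (delta_t_c / 10.0)

# A 10 degC rise roughly doubles activity; a 10 degC drop roughly halves it.
print(rate_multiplier(10))   # 2.0
print(rate_multiplier(-10))  # 0.5
print(rate_multiplier(5))    # ~1.41
```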
Soil is a mixture of mineral, plant, and animal materials that form during a long process that
may take thousands of years. In general, soil is an unconsolidated, or loose, combination of
inorganic and organic materials. It is necessary for most plant growth and is essential for all
agricultural production. The inorganic components of soil are principally produced by the
weathering of rocks and minerals. The organic materials are composed of debris from plants
and from the decomposition of the many tiny life forms that inhabit the soil. The chemical
composition and physical structure of soils are determined by a number of factors, such as the
kinds of rocks, minerals, and other geologic materials from which the soil is originally
formed. The vegetation that grows in the soil is also important. Food sources grown on soils
are predominately composed of carbon, hydrogen, oxygen, phosphorous, nitrogen, potassium,
sodium and calcium. Plants take up these elements from the soil and configure them into the
plants we recognize as food. Each plant has unique nutritional requirements that are obtained
through the roots from the soil. Nutrients are stored in soil on “exchange sites” of the organic
and clay components. Calcium, magnesium, ammonium, potassium and the vast majority of
the micronutrients are present as cations under most soil pHs.
The solid particles of a soil are separated by pore areas of various shapes and sizes. Networks of pores hold water within the soil and also
provide a means of water transport. Oxygen and other gases move through pore spaces in soil.
Pores also serve as passageways for small animals and provide room for the growth of plant
roots. The mineral component of soil is made up of an arrangement of particles that are less
than 2.0 mm in diameter. Soil scientists divide soil particles into three main size groups: sand,
silt, and clay. According to the standard classification scheme the size designations are: sand,
0.05 to 2.00 mm; silt 0.002 to 0.05 mm; and clay, less than 0.002 mm. Depending upon the
parent rock materials from which they were derived, these assorted mineral particles
ultimately release the chemicals on which plants depend for survival, such as potassium,
calcium, magnesium, phosphorus, sulfur, iron, and manganese. Soils also have key chemical
characteristics. The surfaces of certain soil particles, particularly the clays, hold groupings of
atoms known as ions. These ions carry a negative charge. Like magnets, these negative ions
(called anions) attract positive ions (called cations). Cations, including those from calcium,
magnesium, and potassium, then become attached to the soil particles, in a process known as
cation exchange. The chemical reactions in cation exchange make it possible for calcium and
the other elements to be changed into water-soluble forms that plants can use for food.
Therefore, a soil’s cation exchange capacity is an important measure of its fertility. Organic
materials constitute another essential component of soils. Some of this material comes from
the residue of plants—for example, the remains of plant roots deep within the soil, or
materials that fall on the ground, such as leaves on a forest floor. These materials become part
of a cycle of decomposition and decay, a cycle that provides important nutrients to the soil. In
general, soil fertility depends on a high content of organic materials.
Water: Soils are also characterized according to how effectively they retain and transport
water. Once water enters the soil from rain or irrigation, gravity comes into play, causing
water to trickle downward. Soils differ in their capacity to retain moisture against the pull
exerted by gravity and by plant roots. Coarse soils, such as those consisting of mostly of sand,
tend to hold less water than do soils with finer textures, such as those with a greater
proportion of clays. Water also moves through soil pores by capillary action. This is the kind
of movement in which water molecules move because they are more attracted to the pore
walls than to one another. Such movement tends to occur from wetter to drier areas of the soil.
The attraction of water molecules to each other is an example of cohesion. The attraction of
water molecules to other materials, such as soil or plant roots, is a type of adhesion.
Soil Characteristics: Scientists can learn a lot about a soil’s composition and origin by
examining various features of the soil. Color, texture, aggregation, porosity, ion content, and
pH are all important soil characteristics.
Color: Soils possess a wide range of colors. In the surface soil horizons, a dark color usually
indicates the presence of organic matter. Soils with significant organic material content appear
dark brown or black. The most common soil hues are in the red-to-yellow range, getting their
color from iron oxide minerals coating soil particles. Red iron oxides dominate highly
weathered soils. Soils frequently saturated by water appear gray, blue, or green because the
minerals that give them the red and yellow colors have been leached away.
Texture: A soil’s texture depends on its content of the three main mineral components of the
soil: sand, silt, and clay. Texture is the relative percentage of each particle size in a soil. Soils
with predominantly large particles tend to drain quickly and have lower fertility. Very fine-
textured soils may be poorly drained, tend to become waterlogged, and are therefore not well-
suited for agriculture. Soils with a medium texture and a relatively even proportion of all
particle sizes are most versatile. A combination of 10 to 20 percent clay, along with sand and
silt in roughly equal amounts, and a good quantity of organic materials, is considered an ideal
mixture for productive soil.
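The "ideal mixture" described above (10 to 20 percent clay, with sand and silt in roughly equal amounts) can be sketched as a rough check. The tolerance for "roughly equal" is an assumption, not a figure from the text, and the check ignores the organic-matter component:

```python
def is_productive_mix(pct_clay, pct_sand, pct_silt, tolerance=10.0):
    """Rough check of mineral fractions against the text's 'ideal
    mixture': 10-20% clay, with sand and silt in roughly equal
    amounts (within an assumed tolerance, in percentage points)."""
    if abs(pct_clay + pct_sand + pct_silt - 100.0) > 0.5:
        raise ValueError("percentages must sum to ~100")
    return 10.0 <= pct_clay <= 20.0 and abs(pct_sand - pct_silt) <= tolerance

print(is_productive_mix(15, 45, 40))  # True
print(is_productive_mix(35, 35, 30))  # False: too much clay
```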
Aggregation: Individual soil particles tend to be bound together into larger units referred to
as aggregates or soil peds. Aggregation occurs as a result of complex chemical forces acting
on small soil components or when organisms and organic matter in soil act as glue binding
particles together. Soil peds range in size from very fine—less than 1 mm —to very coarse—
greater than 10 mm.
Porosity: The part of the soil that is not solid is made up of pores of various sizes and
shapes—sometimes small and separate, sometimes consisting of continuous tubes. The size,
number, and arrangement of soil pores is known as the soil’s porosity. Porosity greatly affects
water movement and gas exchange. Well-aggregated soils have numerous pores, which are
important for organisms that live in the soil and require water and oxygen to survive. The
transport of nutrients and contaminants will also be affected by soil structure and porosity.
pH: Another important chemical measure is soil pH, which refers to the soil’s acidity or
alkalinity. The pH of a soil will often determine whether certain plants can be grown
successfully.
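pH is the negative base-10 logarithm of the hydrogen-ion concentration in moles per liter. A quick sketch:

```python
import math

def ph(h_ion_molar):
    """pH = -log10 of the hydrogen-ion concentration (mol/L)."""
    return -math.log10(h_ion_molar)

print(ph(1e-7))  # 7.0: neutral
print(ph(1e-5))  # 5.0: acidic, as in some strongly weathered soils
```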
Soil Erosion: For most of human history, soil has not been treated as the valuable and
essentially nonrenewable resource that it is. Erosion has devastated soils worldwide as a result
of overuse and misuse. In recent years, however, farmers and agricultural experts have
become increasingly concerned with soil management. Erosion is the wearing away of
material on the surface of the land by wind, water, or gravity. In nature, erosion occurs very
slowly, as natural weathering and geologic processes remove rock, parent material, or soil
from the land surface. Human activity, on the other hand, greatly increases the rate of erosion.
In a cultivated field from which crops have been harvested, the soil is often left bare, without
protection from the elements, particularly water. Raindrops smash into the soil, dislodging soil
particles. Water then carries these particles away. This movement may take the form of broad
overland flows known as sheet erosion. More often, the eroding soil is concentrated into small
channels, or rills, producing so-called rill erosion. Gravity intensifies water erosion. Wind
erosion occurs where soils are dry, bare, and exposed to winds. Very small soil particles can
be suspended in the air and carried away with the wind. Larger particles bounce along the
ground in a process called saltation.
Soil Pollution: Unhealthy soil management methods have seriously degraded soil quality,
caused soil pollution, and enhanced erosion. In addition to other human practices, the use of
chemical fertilizers, pesticides, and fungicides has disrupted the natural processes occurring
within the soil resulting in soil pollution. Soil pollution is a buildup of toxic chemical
compounds, salts, pathogens, or radioactive materials that can affect plant and animal life. The
concern over soil contamination stems primarily from health risks, both of direct contact and
from secondary contamination of water supplies. All kinds of soil pollutants originate from a
source. The source is particularly important because it is generally the logical place to
eliminate pollution. After a pollutant is released from a source, it may act upon a receptor. The
receptor is anything that is affected by the pollutant. The following sub-unit describes some of
the most common sources of soil pollution.
The most common toxic soil pollutants include metals and their compounds, organic
chemicals, oils and tars, pesticides, explosive and toxic gases, radioactive materials,
biologically active materials, combustible materials, asbestos and other hazardous materials.
These substances commonly arise from the rupture of underground storage tanks; application
of chemical fertilizers, pesticides, and fungicides; percolation of contaminated surface water
to subsurface strata; leaching of wastes from landfills or direct discharge of industrial wastes
to the soil etc. Heavy metal soil contaminants such as cadmium, lead, chromium, copper, zinc,
mercury and arsenic, are a matter of great concern. All soils naturally contain heavy metals;
however, their levels are increased by human activities such as those described above.
Pesticides that are used in agricultural practices pollute the soil directly by affecting the
organisms that reside in it. Organic pollutants enter the soil via atmospheric deposition, direct
spreading onto land, contamination by wastewater and waste disposal. Organic contaminants
include pesticides and many other components, such as oils, tars, chlorinated hydrocarbons,
PCBs and dioxins. The use of pesticides may lead to the destruction of the soil's micro-flora and
fauna (causing both physical and chemical deterioration), severe yield reduction in crops,
and leaching of toxic chemicals into groundwater, potentially threatening drinking water
resources. Soil pollution prevention requires proper land use planning and the provision of
environmental infrastructures. For example, industries that can cause accidental discharge of
pollutants and toxic chemicals will not be allowed to be sited within water catchments. Once
preventive measures are established, controls are stringently enforced to ensure that pollution
control equipment is properly maintained and operated, and that discharged effluents meet
emission standards.
************************************************************************
ENG-101
English
INDEX
MODULE 1 Communication
MODULE 3 Listening
MODULE 4 Interviewing
CONTENTS
MODULE 1 Communication
MODULE 3 Listening
MODULE 4 Interviewing
MODULE 6A Introduction
MODULE 6B Preparing to Give an Oral Presentation
MODULE 6C Making the Presentation
MODULE 6D Summary
Chapter 1
Communication
Communication is the process of transferring meanings. In a business setting, this process
sometimes accounts for the difference between success and failure, and between profit and loss. This
fact is now being recognized by both the corporate community and business schools. In a 1984
Harvard Business Review poll of practitioners and academicians, both groups felt that the oral and
written skills of MBAs required a great deal of improvement.
Today business communication has become so important that many colleges and universities,
nationally and internationally, require the course for graduation. It is becoming clear to all the
concerned parties that communication is critical to the effective functioning of modern business
enterprises.
MODULE 1A: COMMUNICATION
Effective business communication is important both to the individual and to modern organizations.
Helping You
Good communication skills often make the difference between being hired and fired. A well-written
resume and cover letter, and a convincing interview, can get you the job you want
even when more qualified people have applied for it.
And once you start working, you'll find that good verbal and nonverbal communication enables
you to interact effectively with others and get work done efficiently. Good writing skills
also draw attention to you and increase your chances of promotion.
Good communication skills can help you in a variety of ways in your career: they can advance you
socially (i.e., help you make useful contacts), build your self-confidence, and enable you to help and lead
others. Most successful people recognize the role communication skills have played in their
careers. In a survey of college graduates in a wide variety of fields, most respondents said that
communication was vital to their job success. Most, in fact, said that communication skills
were more important than the major subject they had studied in college.
MODULE 1B: THE BASIC FORMS OF COMMUNICATION
Communication is essential for the functioning of an organization. Every day a vast amount of
information flows from managers to employees, from employees to managers, and from employee to
employee. Apart from this internal communication, a considerable amount of information is also
carried in and out of the organization.
This communication, internal and external, takes place in nonverbal and verbal manners through
gestures, expressions, meetings, listening, speaking, and writing.
Nonverbal Communication
The most basic form of communication is nonverbal communication: all the cues, gestures, vocal
qualities, spatial relationships, and attitudes toward time that allow us to communicate without
words. These nonverbal cues are used to express superiority, dependence, dislike, respect, love, and a
host of other feelings and attitudes.
Nonverbal communication differs from verbal communication in fundamental ways. For one
thing, it is less structured, which makes it more difficult to study. You cannot simply pick up a book
on nonverbal language and master the vocabulary of gestures that are common in a particular
culture. Nonverbal communication also differs from verbal communication in terms of intent and
spontaneity. We generally plan our words, so when we say something we have a conscious
purpose. However, when we communicate nonverbally, we sometimes do so unconsciously. We
don't mean to raise an eyebrow or blush - those actions come naturally. Without our consent, our
faces express our emotions.
Nonverbal cues are especially important for conveying feelings. One reason for the power of
nonverbal communication is its reliability. Most people can deceive us much more easily with words
than they can with their bodies. Words are relatively easy to control; but body language, facial
expressions and vocal characteristics are not. By paying attention to these nonverbal cues, we can
detect deception or affirm a speaker's honesty. Not surprisingly, we have more faith in nonverbal cues
than we do in verbal messages. If a person says one thing but transmits a conflicting message
nonverbally, we almost invariably believe the nonverbal signal. To a great degree, then, an
individual's credibility as a communicator depends on nonverbal messages.
Nonverbal communication is important for another reason: it can be efficient from both
the sender's and the receiver’s standpoint. You can transmit a nonverbal message
without even thinking about it, and your audience can register the meaning
unconsciously. At the same time, when you have a conscious purpose, you can often
achieve it more economically with a gesture than you can with words: a wave of the hand, a pat
on the back, a wink, and so on. Although nonverbal communication can stand alone, it usually
blends with speech, carrying part of the message. Together, the two modes of expression are a
powerful combination, augmenting, reinforcing, and clarifying each other.
Verbal Communication
Although you can express many things nonverbally, there are limits to what you can communicate
without the help of language. If you want to discuss past events, ideas, or abstractions, you need
symbols that stand for your thoughts. Verbal communication consists of words arranged in
meaningful patterns. To express a thought, words are arranged according to the rules of grammar,
with the various parts of speech arranged in the proper sequence. The message composed is then
transmitted in spoken or written form to the appropriate audience.
Business people tend to prefer oral communication channels to written ones. Basically, this
preference reflects the relative ease and efficiency of oral communication. It is generally quicker
and more convenient to talk to somebody than to write a memo or letter. Besides, when you are
speaking or listening, you can pick up added meaning from nonverbal cues and benefit from
immediate feedback.
On the other hand, relying too heavily on oral communication can cause problems in organizations.
As organizations grow and the number of employees increases, keeping everyone adequately
informed by word of mouth becomes difficult.
For maximum impact, business people should use both written and spoken channels. One
company, for example, used multiple channels when it chose a multimedia approach to explain
its new employee-benefits plan. Employees received a printout of their benefits, a computer disk,
brochures, and a take-home video. They also attended special face-to-face training meetings and
were even given access to a hotline. With so many choices available, employees were able to
use the media they felt most comfortable with. As workforces grow and become more culturally
diverse, such multimedia approaches are expected to become even more popular.
Listening
Business people spend a considerable amount of time receiving information. Effective business
communication thus depends not only on skill in sending information but also on skill in receiving
information.
Unfortunately, most of us aren't very good listeners. Immediately after hearing a ten-minute
speech, we typically remember only half of what was said. A few days later, we have forgotten
most of the message. And, to make matters worse, we probably didn't even grasp the subtle,
underlying meaning when we heard the speech. To some extent, our listening problems stem from
our education, or lack of it. We spend years learning to express our ideas, but few of us ever take
a course in listening. Developing good listening habits is crucial if we want to foster the
understanding and cooperation so necessary for an increasingly diverse, international workforce.
MODULE 1C : THE PROCESS OF COMMUNICATION
Whether you are speaking, writing, or listening, communication is more than a single act. Instead,
it is a chain of events that can be broken into five phases.
The Communication Process (figure): the sender and the receiver each bring their own
perception, attitudes, beliefs, experiences, and communication ability to the exchange. The
message travels through a network that may be downward, upward, lateral, or informal, and the
receiver returns feedback (verbal or nonverbal) to the sender.
Fundamentals of Communication
1. Sender
2. Message
3. Channel
4. Receiver
5. Feedback
Sender
The sender is the individual who initiates the communication. This person is sometimes known
as the "encoder." Encoding is the process of selecting and formulating the information to be
conveyed. The sender should mentally visualize the communication from the receiver's point of
view. For example, if the sender must convey both good news and bad news, it is often better to
relate the good news first. If he must convey both a simple and a complex message, it is best to start
with the simple one. This order of priority improves the chances of effective communication. If the
message is being delivered verbally, the sender should also look and listen for clues that can help
provide additional information.
Message
The message is the information being transmitted. This includes both verbal and nonverbal data.
Verbal information is the part of the message that is heard.
Nonverbal information entails such things as gestures, facial expressions, and the surrounding
environment. For example, if a manager calls in a subordinate and says, "You did an excellent job
in closing that sale to the Tata group," this comment would generally be interpreted as one of praise.
However, if the manager continues to look at some reports on the table while he speaks, it may be
interpreted as simply an obligatory statement. If the manager stands up, walks around the desk, and
shakes the subordinate's hand while delivering the message, it is more likely to be interpreted as
high praise. The nonverbal part of the message often expands what is being said by providing
additional meaning. Bear in mind, however, that interpretation can alter the intended meaning of a
message. For example, you might remind a co-worker about a deadline with the intention of being
helpful, but your colleague could (for a number of different reasons) interpret the message as an
indication that you were annoyed or mistrustful.
Channel
The channel is the means used to convey the message. To physically transmit your message,
you select a communication channel and a medium. A communication channel could be
nonverbal, spoken, or written. The medium could be telephone, computer, fax, letter, memo,
report, face-to-face, etc.
1. When immediate feedback is necessary, oral channels are more effective.
2. If there is a reasonable chance that the other party will not understand the message, oral
channels are the preferred choice.
3. If there is likely to be reluctance on the part of the receiver to comply with the message, oral
channels are usually more effective.
4. If there is a need to document the communication, written channels are the best choice.
5. If the message requires detailed accuracy, written channels are best.
6. If the message must be delivered to more than a handful of people, written channels are often
more efficient.
In many cases, both oral and written channels should be used, for one supplements the other. For
example, it is common to find managers giving their subordinates assignments over the phone
and then saying, "I'll follow this up with a written memo to confirm our conversation." This gives
the receiver an opportunity to review the assignment and, if the written message is not in accord
with the oral one, to contact the superior and seek further clarification.
When choosing a specific channel or medium, it is important to be aware of the internal
communication network within which the message is conveyed. As mentioned earlier, information
flows in four ways through the internal communication network: downward (superior to
subordinate); upward (subordinate to superior); lateral or horizontal (between employees at the
same hierarchical level); and informal, also known as the "grapevine."
Downward communication. Information flows from superior to subordinate. There are five basic
purposes of this type of communication: (1) to give job instructions; (2) to bring about
understanding of the work and its relationship to other organizational tasks; (3) to provide
information about procedures and practices; (4) to provide subordinates feedback on their
performance; and (5) to instill a sense of mission in the workers. One of the major problems with
both downward and upward communication is that there is often loss and distortion of
information. Every link in the communication chain opens up a chance for error. So by the time a
message makes its way all the way up or down a chain, it may bear little resemblance to the original
idea.
Attempts to quash the grapevine generally have the opposite effect. Informal communication
increases when official channels are closed or when the organization faces periods of change,
excitement, or anxiety. Instead of trying to eliminate the grapevine, sophisticated companies
minimize its importance by making certain that the official word gets out. Many successful managers
tap into the grapevine themselves to avoid being isolated from what is really happening. They
practice MBWA (Management By Walking Around) to make contact with their employees.
Receiver
The receiver is the individual to whom the message is directed. The extent to which this person
comprehends the message will depend on a number of factors, including (1) how much the individual
knows about the topic, (2) his receptivity to the message, (3) the relationship and trust that exists
between sender and receiver, and (4) the receiver's understanding and perception of the information
being conveyed.
Perception refers to an individual's view of reality. It is the result of many factors, including past
experience, attitude toward the message and the sender, mental abilities such as intelligence, and
communication skills such as speaking and listening. It is important to realize that a person's
perception is not always accurate. For example, the top staff may think that they always inform
their subordinates in advance about change, but only about half their subordinates may agree that
their bosses communicate in advance about change. One group's perception of its communication
style can thus be radically different from another's, and this perception will influence the way the
group both sends and receives messages. For example, if personnel at one level were to begin
asking questions about future changes, their superiors might shrug them off with the explanation,
"Oh, you don't need any more information. You've already been fully informed about the change."
Feedback
Feedback is the receiver's response to a message; it can take a number of verbal and nonverbal
forms. In verbal form, some of the most common responses are designed to obtain more
information or to provide closure by letting the sender know that the message has been
received and will be acted upon accordingly. Such feedback also reveals attitude, perception, and
comprehension, or the lack of it. In nonverbal form, some of the most common examples of feedback
are nodding one's head, shrugging, grimacing, smiling, winking, rolling one's eyes, looking the
other person directly in the eye, and looking away from the other person.
MODULE 1D : BARRIERS TO COMMUNICATION
There are many types of communication barriers. For the purpose of analysis,
they can be placed into four categories: problems caused by the sender, problems in message
transmission, problems in reception, and problems in receiver comprehension and perception.
Problems Caused by the Sender
Too much knowledge about the subject can also be a problem. The sender may overexplain the
message or make it so detailed and complex that it is confusing. A third barrier is indecision
regarding the selection of information. What should be included? What should be left out? Until the
sender can answer these questions, he will be unable to complete the message.
A fourth barrier is the order of presentation. What should be presented first? What should come
next? It is often a good idea to present some general material first to set the stage for the
material to follow. For example, if a series of productivity improvement recommendations
are being presented to the manager, they should be preceded by a discussion of the current
productivity situation and an analysis of the causes of low productivity.
A fifth barrier is a lack of familiarity with the audience. A sender who does not know the
audience very well may use an inappropriate approach. For example, if the audience does not
know much about the subject, a brief discussion is in order. The message should also be conveyed
in a carefully structured format that presents the information in an easy-to-understand manner. On
the other hand, if the audience is knowledgeable about the subject, the sender should move
immediately to the heart of the matter and present the important information up front. A sender
who is unfamiliar with his audience is likely to confuse the first group and bore the second.
A sixth barrier is a lack of experience in speaking or writing. When senders have limited
education or training in communication, they often have difficulty expressing their ideas. Their
vocabulary is limited, their word choice is poor, there are punctuation and spelling errors, and the
overall presentation style is ineffective.
Problems in Message Transmission
Communication can also break down because of problems in transmission. One major problem is the
number of transmission links. When a verbal message is transmitted through three or four
different people before reaching its final destination, it will most likely be altered along the way.
This distortion of the message occurs often in upward and downward communication.
Another problem arises from the involvement of personal interests. For example, a
message from top management announcing that there will be a wage freeze this year
may conclude with the statement, "While this decision was a difficult one to make, it will be good
for the entire company." Management may believe this, but many of the lower-level employees
may not. Their interpretation is that management is making them suffer, because a top executive
making Rs 5 lakh annually will be able to forego an annual raise easily, while a worker earning
Rs 50,000 a year will experience difficulties coping with economic inflation.
MODULE 1E : DEALING WITH COMMUNICATION BARRIERS
The following chapters analyze different communication situations, but they all have one
common focus: how to deal with communication barriers. Although each communication
situation is unique, there are some basic methods for dealing with communication barriers. In this
section we discuss these methods, and in subsequent chapters we will refer to or repeat them.
Know your Subject
Identify the topic you are going to discuss and find out (maybe even research) all you need to
know about it. In particular, find out all the important, specific details, because without details it is
not possible to guide or instruct anyone with clarity.
At the same time, the writer or speaker should be clear about what he wishes to convey and select
the material accordingly. Many people make the mistake of trying to convey everything they know
about a subject. Unfortunately, when a message contains too much information, it is difficult to
absorb. So if you, as the communicator, want to get your point across, decide what to include,
what to leave out, and how much detail to provide.
Focus on the Purpose
To determine the type and amount of information you should include in your document (or
presentation), you should know the purpose of your message. If you are writing a report on the
consumer market for sports equipment but you don't know the purpose of the report, it would
be hard to determine what to include or exclude from the report. What sort of sports equipment
should you cover? Should you include team sports as well as individual sports? Should you
subdivide the market geographically or according to price ranges? Should the report provide
conclusions and recommendations or simply facts and figures? Unless you know the purpose of
the report (in this case, why the report is needed), you can't really answer these questions
intelligently. As a result, you will end up writing a vague, general report that contains a bit of
every type of information.
Know your Audience
Audience-oriented writing can be achieved by answering the question, "How will the receiver react to
this message?" For example, if the manager must announce a 10 percent reduction in the workforce,
the union representative will interpret this message negatively. The union member will see it as an
attempt by management to improve profitability at the cost of the workers' welfare.
By examining the message from the other person's point of view, the sender can
identify some of the problems to be encountered and can work to sidestep or minimize
them. For example, in the above message the sender could explain the reason for the
cutback in terms of the well-being of the workers: "A small cutback now will ensure that no
reductions will be needed in the next few years."
Be Organized
Knowledge of the subject, purpose, and audience will also help you to organize your material in such
a way that it conveys your message effectively. For example, if the receiver is not familiar with the
subject, you might have to first give some background information. If you have to persuade the
receiver to accept a decision or take a certain course of action, you may first have to explain the
rationale for your recommendation to convince the receiver and prevent an emotional response
from him.
Communication should be organized in such a way that it conveys the sender's message
effectively. This means the communication should be structured so that it is easily
readable by and understandable to the audience. If a message is short, it can be organized within a
single paragraph. If it is more than one page long, an introduction-body-close format should be
followed. If the message is detailed or difficult to follow, it should be broken into segments so that
it is easier to understand. In this context, the use of headings, subheadings, underlining and
numbering can be useful. If material can be presented in a checklist format, this too can be useful
in ensuring that the message is logical, complete, and understandable.
MODULE – 2
NONVERBAL COMMUNICATION
A
Your boss has been telling the staff that he would welcome suggestions about how to improve the
organization. You take him at his word and meet him to discuss some of your ideas with him. As
you begin to outline the changes you propose, he fixes you with an icy stare and folds his arms
across his chest; as you go on, the frown on his face gathers intensity. When you finish, he gets up
abruptly and says, with barely suppressed menace in his voice, "Thank you very much. Your ideas
are priceless."
B
Your friend has started a new organization. He is encountering some labor problems. You are a
personnel management specialist and have several years of experience in the personnel
department of a large conglomerate. Your friend calls you over to his office one day to discuss his
problem. In the course of the conversation, you give him some ideas about how he could solve his
problems. He finds your ideas innovative and thinks they are just right for his kind of setup. As
you get up to leave, your friend clasps your hand warmly, beams at you and says, "Thank you very
much. Your ideas are priceless."
You've seen two identical expressions being used in two different situations. Verbally, the two
expressions mean exactly the same. But do you think the two speakers meant to communicate the
same message? What is it that differentiates one message from the other? You will notice that it is the
nonverbal cues embedded in each message that make these two messages so radically different
from each other. The boss uses nonverbal cues to let you know he thinks it is preposterous that
you should try to suggest changes to him. Thus, though his words say something different, what he
actually says is, "You have a nerve coming in here and trying to tell me how I should run this
place." So then, what exactly is nonverbal communication? Very broadly, nonverbal
communication is all those messages expressed by other than linguistic means.
Communication researchers have found that nonverbal signals have more impact in conveying
meaning than verbal content. In fact, nonverbal behavior is so important to effective business
communication that many companies are now trying to train their employees to understand it.
MODULE 2A : CHARACTERISTICS OF NONVERBAL COMMUNICATION
We've seen how nonverbal signals can completely alter the message that you communicate.
Let us now look at some of the characteristics of nonverbal communication.
MODULE 2B : COMPONENTS OF NONVERBAL COMMUNICATION
Now that we have some idea of the broad characteristics of nonverbal communication, let us
examine some of the ways in which nonverbal messages are transmitted. The study of nonverbal
signals is divided into three main areas: paralanguage, the way we say what we say; kinesics, the
study of body language and facial expression; and proxemics, the study of how physical
space is used. Other forms of nonverbal communication include the use of time and the mode of
dress.
Paralanguage
The study of paralanguage focuses on how you say what you say. As we saw in the example at the
beginning of the chapter, two identical verbal messages may communicate entirely different
meanings when the tone of voice is different. The tone of our voice, its loudness or softness, the rate of
speech, and the words we choose to accent communicate a great deal. In fact, by changing the
emphasis in a sentence, we can change the total meaning of the sentence. Look at these sentences.
1) I never said that. (stress on "I") : I didn't say it. Somebody else might have.
2) I never said that. (stress on "never") : At no time did I ever say that. What makes you think I did?
3) I never said that. (stress on "said") : I didn't say it in so many words. I may have implied it, but I didn't say it.
Of course, other listeners may just see other meanings in each of these sentences. We did point out
that nonverbal cues were ambiguous, didn't we?
Paralanguage has several component parts: voice qualities, voice qualifiers, vocal characteristics,
and vocal segregates.
Voice qualities
Voice qualities include such things as volume, rate of speech, pitch, rhythm, pronunciation and
enunciation.
Volume
A person may sometimes speak louder to attract others' attention, but overly loud speech can
be annoying or disturbing. On the other hand, though a soft voice conveys a sense of calm, in a
business setting it may give an impression of weakness or indecisiveness. Thus, a volume
that is right in one setting may convey a negative message in a different situation.
Head shakes are particularly difficult to interpret. People in the United States shake their heads up
and down to signify "yes." Many British, however, make the same motion just to indicate that
they hear, not necessarily that they agree. To say "no," people shake their heads from side to
side in the United States, jerk their heads back in a haughty manner in the Middle East, wave
a hand in front of the face in the Orient, and shake a finger from side to side in Ethiopia.
The pointing of a finger is a dangerous action. In North America it is a very normal
gesture, but it is considered very rude in many other parts of the world, especially
in areas of Asia and Africa. It is therefore much safer to merely close the hand and
point with the thumb.
Other forms of communication have also caused problems. The tone of the voice, for example,
can be important. Some cultures permit people to raise their voices when they are not close to
others, but loudness in other cultures is often associated with anger or loss of self-control.
A lack of knowledge of such differences in verbal and nonverbal forms of communication
has resulted in many a social and corporate blunder. Local people tend to be willing to overlook
most of the mistakes of tourists; after all, they are just temporary visitors. Locals are much less
tolerant of the errors of business people, especially those who represent firms trying to project
an impression of permanent interest in the local economy. The consequences of erring, therefore,
are much greater for the corporation.
Rate of speech
On average, it has been found that people speak at about 150 words a minute. When a person
speaks at a much higher or lower rate, he may have a negative impact. Fast speech often makes
people nervous, while slow speech causes boredom or leads people to believe that the speaker is not
quite sure about what to say next.
Voice Pitch.
Voice pitch is often equated with emotion. High-pitched shrieking generally indicates excitement or
nervousness. A low voice pitch usually commands attention and respect because it indicates that the
speaker is in control of the situation.
Rhythm
Rhythm refers to the pattern of the voice: whether it is regular or irregular, whether it flows
smoothly or moves in fits and starts. A smooth rhythm, like a moderately low pitch, indicates a
confident, authoritative attitude, while an uneven rhythm may convey a lack of prior preparation and
a lack of clarity.
Pronunciation
People who mispronounce words may not be perceived as being as well educated as those who
pronounce words correctly. Thus, the way you pronounce words may play an important role in
building your image.
Enunciation also relates to the correctness of how a word is pronounced, but is more a matter of clear
articulation. People with poor enunciation drop word endings, slur their speech, or do not speak clearly.
Poor enunciation may indicate carelessness, but overly precise enunciation may sometimes seem
phony or pretentious.
Voice qualifiers
Temporary variations in pitch, volume and rate of speech are known as voice qualifiers. If one is
aware of the normal voice qualities of a person, it is easy to detect the voice qualifiers in his speech.
For instance, if your secretary, who normally talks in a low, even tone, suddenly starts talking
faster and louder, you should be able to tell that something is not quite right. He may be
conveying impatience, anger or excitement.
Vocal characteristics
All of you are familiar with certain audible sounds like sighing, laughing, crying, clearing the
throat, whistling and groaning. These sounds, which serve to communicate some meaning, are called
vocal characteristics. As a communication expert put it, "Awareness of the more subtle voice
characteristics, such as pleasantness, especially in combination with voice qualifiers, can do
much to help individuals and organizations improve communication." Think, for instance, how
much a company's image can be helped by a receptionist who sounds, both in person and over the
telephone, pleasant, confident and competent.
Vocal segregates
Er... umm... will you lend me some money, please? Now, in this sentence, what do
the words "er" and "um" mean? They don't mean a thing. Such meaningless words or sounds that
are used to punctuate or pace sentences are called vocal segregates. Sometimes people use filler
expressions like "right?", "you know what I mean" or "OK" to fill in their silences. Vocal
segregates are usually awkward components of speech and should be avoided as far as possible.
These sounds indicate a lack of confidence and exhibit a feeling of stress on the part of the speaker.
Kinesics
Human beings communicate a lot through body movements and facial expressions. Kinesics is the
study of this kind of communication. Let us look at how different body movements and facial
expressions communicate different messages.
Posture
The way people sit or stand can reveal a lot about their attitudes and emotions. Posture portrays
confidence, anxiety, fear, aggressiveness and a host of other emotions. A boss who wants to reprimand
his subordinate may do so by standing, leaning over the table and peering down at the hapless
employee. Here, he is using posture to establish his superiority. Insecure or nervous people often
betray their weakness by slouching, biting their nails or looking down. A person who wants to tell
everyone else that he is quite confident may sit back expansively, wrap his arm over the back of the
chair and stretch out his legs in front.
Gestures
Gestures are of various types. Four common ones are emblems, adaptors, regulators, and
illustrators.
Emblems
Emblems are gestures that have a meaning understood by the public at large. Of course, most
of them are culture-specific. Sometimes the same emblem may have different meanings in
different cultures. For instance, forming an "O" with the index finger and thumb means "OK" in the
US, while in Japan it means "money" and in parts of France it means "worthless" or "zero".
Adaptors
These are learned behavior patterns that we usually pick up in childhood. The way we use our
spoons or our hands while eating is a good example.
Illustrators
These are gestures that go with what we are saying verbally and tend to depict what is being said. A
good example is when you tell someone, "Come, sit in this chair," and accompany it by a nod of the
head or a wave of the hand.
Exhibit 2.2 : Six Universal Facial Expressions
1. Surprise : The eyebrows are raised, the eyes opened wide, and the jaw drops open, parting the lips.
2. Fear : The eyebrows are raised and drawn together; the eyes are open and the lower lid is tensed.
3. Disgust : The nose is wrinkled and the upper lip is raised.
4. Anger : The eyebrows are lowered and drawn together, the eyelids are tensed and the eyes appear
to stare in a hard fashion. The lips are either tightly pressed together or parted in a square shape.
5. Happiness : The corners of the lips are drawn back and up. The mouth may or may not be parted.
Crow's-feet wrinkles go outward from the outer corners of the eyes.
6. Sadness : The inner corners of the eyebrows are raised and may be drawn together. The corners of
the lips are drawn down or the lips appear to tremble.
Regulators
These are gestures that control the communication exchange. Patting an employee on the back
may encourage him to keep talking. Shuffling through your papers while he's talking will certainly
encourage him to stop.
Facial expressions
The face plays a vital role in communicating various messages. The brow, the eyes, the root of the
nose and the lower face are all capable of conveying attitudes and emotions. Exhibit 2.2 describes six
universal facial expressions, but minor variations do occur from culture to culture.
Eyes
Of all facial expressions, those of the eyes are considered the most revealing. Studies have provided
numerous insights about eye contact:
1. Eye contact is perceived as an indication of honesty, confidence, openness and interest.
2. People who avoid eye contact are usually embarrassed or nervous.
3. Eye contact varies by culture. For instance, some Latin American cultures teach children
not to look directly at the face of an adult.
Proxemics
Proxemics is the study of how people use the physical space around them and what this use says
about them.
People often put an invisible boundary between themselves and others. This is called the
personal feature space. The figure below shows the four feature-space categories or zones. The
intimate distance zone, within a radius of up to 18 inches around a person, is reserved for close
relations and friends. The personal distance zone, which may extend from 1 ½ to 4 feet, is also
reserved for friends and family. Of course, there are cultural variations. Certain cultures are more tolerant of
intrusions into a person’s personal space than others. The social distance zone extends from 4 to
12 feet. It is in this zone that most business is transacted. The public distance zone usually
extends from 12 to 25 feet. It is the farthest distance at which one can communicate effectively
on a face-to-face basis. Thus, by observing the physical distance between two individuals,
one can judge the relationship between them.
Personal Feature Space Categories
In organizations the control of space generally constitutes an extension of one’s personal power.
Status can often be determined by how much space a person occupies. The more status a person
has, the easier it is for him or her to invade someone else’s space. For instance, it is all right for a
manager to walk into a subordinate’s office, but the subordinate must seek permission to enter
the manager’s room.
Use of Time
How you use time also gives others clues about what kind of a person you are and what they can
expect from you in terms of dependability. In the office, the junior staff are expected to conform
strictly to time guidelines. However, those in senior management are allowed to flout these
guidelines as they are seen to have greater control over both their own time and the time of
others. The amount of time we spend on a task also indicates how much importance we give it.
Mode of Dress:
Our first impression of people is often based on what they are wearing. In organizations it is
generally found that promotions and other benefits go to people who dress the way those in
power feel they should dress. John Molloy, author of Dress for Success, says:
"The overriding essential of all corporate business clothing is that it establishes power and
authority. If you can accomplish nothing else, presenting yourself as a person who is capable of
the job he has been given is an acceptable goal."
Think About it
Oral communication is thus a mixture of verbal and nonverbal messages. A good
communicator is one whose nonverbal cues authenticate and reinforce his words. As the old folk
saying goes, “Actions speak louder than words.” So, if your actions belie your words there’s
every possibility the listener may choose to believe your actions rather than your words.
MODULE-3
Listening
Amit Suri was asked to represent his department at the Benefits Committee meeting. The purpose
of the meeting was to announce changes in the company's home loan schemes. Amit and the
others in his department were unanimous in their support of the OTP (one-time-payment)
scheme. But he knew that there were some in the other departments who favored the MIP
(monthly-installment payment) scheme. He walked into the room, hoping the OTP scheme would
be adopted.
The head of the committee entered the room, walked up to the podium, put his papers in front, and
began to speak.
"1 know you are all here to learn about our new home loan scheme, and I won't keep you in
suspense. We have decided that the MIP scheme will be our primary scheme,"
As he continued to talk, Amit fumed. As soon as the meeting ended, he rushed back to his
department with the news. Everyone was upset.
Twenty minutes later, one of Amit's colleagues walked up to his desk. "Are you sure the MIP is the only
scheme the company is adopting? I just spoke to a friend in the Accounts Department. He says the
MIP will be the primary scheme, but those who wish to do so can opt for the OTP."
Amit couldn't believe his ears. He called up another colleague, who confirmed what he had just
heard.
What do you think happened in this case? Amit was present throughout the meeting. Yet, he didn't
really hear what the speaker was saying. Why? Because he wasn't listening.
If research studies are anything to go by, Amit is certainly not an oddity in the business world.
According to these studies, the average listening efficiency rate is only 25 percent. Immediately
after a ten-minute presentation, a normal listener can recall only 50 percent of the information
conveyed. After 24 hours the recall level is only 25 percent. Does this bode well for organizations?
No. Why? Let us see why listening is so very important in a modern organization.
Listening on the job is not only frequent, it is very important as well. In fact, most managers agree
that "active listening" is the most crucial skill for becoming a successful manager. Stephen Covey
identifies listening as one of the "seven habits of highly effective people." Listening can improve
work quality, and boost productivity. Poor listening leads to innumerable mistakes because of
which letters have to be retyped, meetings rescheduled, shipments rerouted. All this affects
productivity and profits. Apart from the obvious benefits, good listening helps employees to update
and revise their collection of facts, skills and attitudes. Good listening also helps them to improve
their speaking.
Despite all these benefits, as pointed out earlier, good listening skills are quite rare in the business
world today. A number of studies have revealed why people listen poorly, despite the advantages of
doing just the opposite. Let us look at some of the common barriers to effective listening.
Environmental Barriers
Physical distractions
Distracting sounds, poor acoustics, and uncomfortable seating arrangements can all hamper effective listening.
But it is not impossible to counter these distractions through concentration.
When all your attention is focused on what is being said, the other noises take a back seat in your
consciousness, unless, of course, the noises are too powerful.
Message overload
When you are forced to listen to a quick succession of messages, then after a point your receptivity dulls.
You find it impossible to listen attentively. Coping with a deluge of information is like juggling: you can
keep only a few things going at a time.
Attitudinal Barriers
Prejudices
Sometimes our prejudices and deep-seated beliefs make it impossible for us to be receptive to the speaker.
For instance, when two politicians who belong to, say, the BJP and the CPI(M) argue over a political issue,
they are not likely to give each other's views a fair hearing, because of their preconceived attitudes.
To break down this barrier, we must achieve some control over our instinctive responses and learn to
postpone judgment until we have listened to exactly what is being said.
Preoccupation
Sometimes we are preoccupied with other concerns. As students, all of you must have had days when you
registered nothing of what was said in class, because your thoughts were on the freshers' party you had to
arrange the next evening.
A casual attitude
Because hearing is relatively easy, we assume that we can do it without much concentration and
effort. This attitude is often a major barrier to listening.
Egocentrism
Many people are poor listeners because they are overly concerned with themselves. Three personal
concerns dominate their listening behavior. These can be summed up in three sentences:
1) I must defend my position.
2) I already know what you have to say.
3) How am I coming through?
These concerns set up effective barriers that destroy the critical link between speaker and listener.
SPEAKER CHARACTERISTICS
• Unclear, nonspecific message
• Lack of sympathy for listener
• Distracting appearance, mannerisms, voice, expressions, etc.
• Suspect motive (coercive)
• Inappropriate timing

MENTAL DISTRACTIONS
• Differences in sending and receiving messages
• Preoccupation with other matters
• Developing a response rather than listening

LISTENER CHARACTERISTICS
• Poor listening habits
• Unreceptive to new and different ideas
• Lack of empathy for sender
• Negative feelings about the speaker
• Low interest level
• Unwilling to concentrate
• Intimidation or fear caused by position/status of speaker

SPEAKER/LISTENER HINDRANCES
• Various interpretations of verbal/nonverbal messages
• Lack of feedback (verbal/nonverbal)
• Lack of trust
Poor Listening Habits
Listening, like much of human behavior, tends to follow consistent patterns. Most of us develop
certain bad listening habits that eventually create a pattern. Four of the most common bad
habits are:
1. Faking attention. Many of us fake attention so as not to appear discourteous. However,
this can become habitual and turn out to be a barrier to effective listening.
2. Listening only for facts. In looking only for the facts, we often forget to locate the main idea.
3. Avoiding difficult and uninteresting material. Sometimes we switch off our attention
when what is being said is difficult, unfamiliar, or simply uninteresting. If we do this often, this
turning off becomes a consistent pattern.
4. Focusing on delivery. Sometimes we are so concerned with how someone says something
that we pay scant attention to what he or she is actually saying.
Discriminative Listening
Discriminative listening involves distinguishing among sounds; listening to determine whether the phone is ringing is an example. We learn how to discriminate among sounds at an early
age. Eventually, we come to recognize not only the sounds that make up our language, but also
vocal cues such as tone of voice, volume, pitch and rate, all of which contribute to the total meaning of a message.
Comprehensive Listening
A person trying to understand a speaker's message in totality, to interpret the meaning as precisely as possible, is
engaged in comprehensive listening. This kind of listening is generally practiced in the classroom, when we must
remember what we have heard in a lecture and rely upon it for future use.
Critical Listening
When a person wants to sift through what he has heard and come to a decision, he must listen
critically. This involves judging the clarity, accuracy and reliability of the evidence that is
presented, and being alert to the effects of emotional appeals.
Active Listening
Active listening is also called empathic listening. This kind of listening goes beyond just
paying attention or listening critically. It entails supportive behavior that tells the speaker, “I
understand. Please go on.”
MODULE 3D : How to be a Better Listener
Regardless of whether the situation calls for appreciative, critical, discriminative or active
listening, listening skills can be improved with conscious effort. Let us now look at some of the
specific steps you can take to become a better listener.
Distinguishing Good Listeners from Bad Listeners
Be Motivated to Listen
When you resolve that you will listen, an improvement in your listening skills will become
immediately noticeable. Researchers have concluded that the more motivated a listener is, the more
active and alert he becomes as a receiver. Though motivation alone cannot solve all problems in
listening, it is the first prerequisite to becoming a good listener.
Be Prepared to Listen
Sometimes you need to make some preparation beforehand in order to listen effectively to a
particular piece of communication. It is helpful to gather as much relevant information as you
can about the subject, the speaker, and the situation. This will help you to better understand and
appraise what the speaker is saying. Preparations could also include attempts to minimize physical
barriers between yourself and the speaker, and to eliminate all distractions in the environment.
Men, in fact, listen with the left side of the brain, while women use both the right and left
sides, according to a study. But the authors of the study say this is not the time to resurrect
gender stereotypes, and that the jury is still out on which sex really has the better listening
skills.
"Language processing between men and women is different, but it doesn't necessarily mean
performance is different," says Michael Philips, assistant professor of radiology at
Indiana University School of Medicine.
After playing a portion of the John Grisham novel "The Partner" to 20 men and 20 women
for a period of less than six minutes, the researchers found that both had a similar grasp of the
plot. But what the magnetic resonance imaging (MRI) demonstrated was the markedly different
ways they arrived at roughly the same point.
The MRI revealed that while brain activity in men was exclusively in the temporal lobe of
the brain's left hemisphere, which is associated with listening and speech, the women also
employed the right temporal lobe, which is thought to help with performing music and
understanding spatial relationships. "It was a surprise result," says Philips.
Be Objective
From your own experiences, you would have noticed that you are more receptive to a message
when you approach it with an open mind.
To be objective, one must avoid jumping to conclusions. Keep your critical faculties on the alert,
but do not make a judgment until all points are fully developed. If you make a judgment too fast,
there is always the danger that you may fail to register things the speaker says that may not exactly
tie in with your judgment. Objective listening entails a conscious effort to keep our emotions and
prejudices at bay. The example given at the beginning of the chapter shows you how you can miss
important facts when your emotional responses pull down the shutters over your receptivity.
Be Alert to all Cues
Look for the speaker's main ideas. The speaker's voice quality, inflection, emphasis and body
movement can all offer vital clues to what the speaker feels is most important. Besides, these cues
also give you insights into the emotional content of the speaker's message, which must be taken into
consideration if the message is to be fully understood.
Use Feedback
Using feedback is one way we can get more from our communication encounters. Sometimes
this feedback may be as simple as telling the speaker that you don't understand; this lets you
hear the message again. While using feedback, make sure that the speaker receives your
message, that there is no ambiguity about your feedback, and that your feedback is related to what
is going on.
Practice Listening
Proficiency in listening, like any other skill, is the result of conscious effort. Many of the
barriers to effective listening can be successfully overcome through practice. Force yourself to
listen to speeches and lectures that seem to hold no obvious interest value. Doing this will help
you overcome the temptation to "switch off" when the messages seem dull or difficult.
MODULE 3E : WHAT SPEAKERS CAN DO TO ENSURE BETTER LISTENING
So far, we have studied listening entirely from the listener's perspective. But the speaker too, to a
certain extent, influences the way in which others listen to the message. Of course, this is not to
suggest that the entire onus of communicating a message is on the speaker - a notion that is
alarmingly popular among most poor listeners. But the speaker can use certain techniques to
encourage more effective listening.
Try to Empathize
Speak to your listeners. To do this you must understand them - understand how they will
respond to your ideas. The best way to do this is to imagine yourself in their position. This will
help you to weed out uninteresting and difficult parts that may be irrelevant or could be made
more easily understandable by being put in a different way.
Adjust your Delivery
Make sure the listeners have no difficulty hearing you. You can retain listener interest by
modulating your voice and making your speech as lively as you can without sounding ridiculous. A
dull monotone often induces mental lethargy and turns listeners off.
Utilize Feedback
As the listener can use feedback to improve the communication, so can the speaker. Be sensitive to
listener responses. Ask yourself: Are they paying attention? Do they look interested? Do they look
confused? Are they bored? Answering these questions will help you to make the necessary
adjustments and tailor your message to the needs of the audience.
Be Clear
Know your purpose. What is the main point that you are trying to make? If you are not clear
about what you want to say and why you want to say it, you're likely to ramble aimlessly, and it's
very difficult to pay attention to disconnected and disjointed wanderings.
Be Interesting
To be interesting, you must first of all be interested in what you have to say. Lack of interest
on the speaker's part communicates itself immediately to the listeners and dulls their own response.
Lively, stimulating and relevant speech always has a better chance of capturing the audience's
attention.
THINK ABOUT IT
Most people spend at least half their communication time listening. This most used
communication skill is not only crucial in interpersonal communication, but also affects organizational
communication and helps determine success in education and in careers. Business writer Kevin
Murphy says, "The better you listen, the luckier you will get." So take time to listen.
MODULE 4
INTERVIEWING
Interviewing
Interviews can:
• Explore the breadth and depth of potential information
• Require the interviewer to be skilled
• Provide the opportunity for the respondent to reveal feelings and information
• Yield an economical use of time
• Allow the interviewer control over questions and responses
• Elicit precise, reproducible, reliable data
Persuasive Interviews
These interviews, which primarily seek to induce somebody to adopt a new idea, product or
service, are generally associated with selling. To conduct a successful persuasive interview, the
interviewer has to use all his communication skills, both to draw out the opinions of the respondent
and to impart information.
toward any one end of the spectrum, depending on the purpose you want to achieve.
certain situations. Leading questions force the respondent to answer in a particular way, by
suggesting the answer the interviewer expects: "You wouldn't mind working extra hours,
whenever necessary, would you?"
Plan the Physical Setting
The physical setting in which the interview takes place can have a great deal of influence
on the results. A setting with the minimum distractions is generally the best. Frequent
interruptions mar the flow of conversation and prevent both the interviewer and the
respondent from being alert to each other's verbal and nonverbal cues. The seating
arrangements also have an impact on the interview. An interviewer who addresses the
respondent from behind a desk assumes greater control. When the two parties face each other with
no barriers between them, there is a greater degree of informality.
Anticipate Problems
This is an important aspect of preparation. Once you've clarified your purpose, decided the
format, put together your questions and decided the setting, go over the entire plan once again, looking
for loopholes. Is the format you've selected really suitable for the achievement of your specific
purpose? Perhaps you should include more questions that are open-ended, to get more details.
What if the respondent is a poor communicator and is unable to handle too many open-ended
questions? What if the respondent uses open-ended questions to digress from the main point? As
you plan an interview you may come up with many such questions. Answering these questions
will help you to employ effective strategies to counter problems as and when they arise during
the course of the interview.
MODULE 4B : CONDUCTING THE INTERVIEW
Interviews have three basic stages: an opening, a body and a close. Let us examine each of these
in turn.
The Opening
The opening is usually used to put the respondent at ease and to establish the purpose of the
interview. To put the respondent at ease, the interviewer usually follows up the initial greeting
with a brief informal conversation, helping the respondent to relax and building up a rapport with
him. Whatever the nature of the interview, the outcome is likely to be better when the interviewer
and the respondent are comfortable with each other.
Once the respondent is made comfortable, the interviewer gives him a brief overview of what is
to follow. He explains the purpose of the interview, what information will be needed, how it will be
used, and the general format of the interview. This sets the stage for the actual question-answer
session that constitutes the body of the interview.
Body
How the interviewer and the respondent handle their respective roles in this session is one of the
most important deciding factors in shaping the final outcome.
The interviewer's role
It is the interviewer's responsibility to control and focus the conversation so that the discussion
doesn't drift away from the agenda. At the same time, he should be willing to explore relevant
sub-topics that might provide valuable insights into the situation or about the respondent. The
interviewer also has to ensure that he allots enough time to each item on the agenda. If he lingers
too long on one topic, he may not have enough time to do justice to other areas that are equally
important.
Interviewers must listen actively in order to pick up verbal and nonverbal cues.
Sometimes they are so caught up in framing their own questions and responses that they fail to hear
what the respondent is actually saying. An alert interviewer must not only pay attention to the
respondent's verbal messages, but pick up nonverbal signals too, which may provide insights into the
respondent's behavior and attitudes.
Taking notes is sometimes advisable. However, if you intend to take notes, you must inform the
respondent beforehand, and be brief and unobtrusive while taking notes. When a respondent is not
totally responsive to a question, an interviewer may have to use probing questions to elicit a
satisfactory response. If a response is inadequate, you might say, "That's an interesting scheme.
Can you give me more details?" Skillful use of probing questions will improve your results.
Sometimes you can effectively use silence to induce the respondent to volunteer more details or
a more satisfactory explanation. Summarizing the main points from time to time also helps to
ensure that the interviewer and the respondent have the same understanding.
Check List for Interviews on the Job
A. Preparation
1. Decide the purpose and goals of the interview.
2. Set a structure and format based on your goals.
3. Determine the needs of your respondent, and gather background information.
4. Formulate questions as clearly and concisely as possible, and plot their order.
5. Project the outcome of the interview, and develop a plan for accomplishing the goal.
6. Select a time and a site.
7. Inform the respondent of the nature of the interview and the agenda to be covered.
B. Conduct
1. Be on time for the interview.
2. Remind the respondent of the purpose and format.
3. Clear the taking of notes or the use of a tape recorder with the respondent.
4. Use ears and eyes to pick up verbal and nonverbal cues.
5. Follow the stated agenda, but be willing to explore relevant sub-topics.
6. At the end of the interview, review the action items, goals, and tasks that each of you has agreed to.
7. Close the interview on an appreciative note, thanking the respondent for his time, interest, and
cooperation.
C. Follow-Up
1. Write a thank-you memo or letter that provides the respondent with a record of the meeting.
2. Provide the assistance that you agreed to during your meeting.
3. Monitor progress by keeping in touch through discussions with your respondent.
Closing
Once the last question has been asked and answered, the interview can be rounded off by a
restatement of conclusions. This signals that you have finished and gives the respondent an
opportunity to ask relevant questions. The interviewer then gives the respondent some idea of
what future action he can expect before concluding with pleasantries. The checklist above
summarizes all that goes into making an interview a success.
MODULE 4C : THE ETHICS OF INTERVIEWING
The communication between the interviewer and the respondent should be guided by certain
ethical guidelines. This paves the way for better interactions in the future.
Guidelines for the Interviewer
Don't be dishonest
Misrepresenting facts can land you in trouble. Whatever the nature of the interview, be frank and
honest in your answers.
MODULE 5
Letter Writing
Letter writing doesn't involve magic; you just have to think about it. As you read this chapter, think about
the following response to an order letter:
Thank you for your order, which we really appreciate. We sincerely welcome you to our ever-growing
list of satisfied customers.
We were delighted to send you 30 Porta-phone telephones. They were shipped express today.
We are sure you will find our company a good one to deal with and that our telephones are of the
finest quality.
Please find our latest price list enclosed. Thank you for your patronage.
What is wrong with this letter? (We'll discuss it later in the chapter.) Hint: Put yourself in the shoes of
the recipient of the letter.
Business correspondence is one of the most common forms of communication, so common that
people often neglect to write letters carefully, and, as a result, inadvertently antagonize customers,
business partners and potential clients.
How can one write an "effective" business letter? There is no foolproof method, but there are some
useful guidelines. An "effective" letter is one that conveys the subject to the intended audience in such
a way that the writer's purpose is achieved. Knowing your subject and purpose and understanding
your audience are essential for "effective" business correspondence. These principles of letter writing
are fundamental to all forms of communication, oral and written.
It is relatively easy to compose routine letters or pleasant letters because there is little chance of
antagonizing the audience - unless one uses bad words. The difficulty arises when writing letters about
the unpleasant or when writing to persuade the audience. The audience's sensibilities are then of
paramount importance and affect the sentence structure, word choice, content, and organization of
the letter.
Whatever type of letter one is writing, it is always advisable to plan the letter with the fundamentals of
letter writing in mind: knowledge of the subject, audience, and purpose. These basics not only
provide guidance but also keep one's mind calm when one is asked, at the last minute, to inform a major
account holder that the company will not be able to meet the deadline as scheduled.
MODULE 5A : UNDERSTANDING THE AUDIENCE
Before composing any message, all writers should be well informed about the subject and should
have a clear picture of what they want to convey and what they wish to achieve through the
communication (see Chapter 8 for a discussion of "purpose"). If these prerequisites are not met, one
cannot even begin to write.
Understanding the audience is often a more challenging task. It requires the cultivation of a
"you" or "reader-oriented" attitude. Then, as a writer, you have to transfer your understanding or
mental picture of the audience into written form through the careful selection of content and the
effective organization of the different parts of the message.
Cultivating a "You" Attitude
Readers find ideas more interesting and appealing if they are expressed from the reader's point of
view. A letter reflecting a "you" attitude indicates sincere concern for the reader's needs and
interests. To think from the reader's point of view - that is, to cultivate a "you" attitude - concentrate
on the following questions:
Does the message address the reader's major needs and concerns?
Is the information stated as truthfully and ethically as possible?
Will the reader perceive the ideas to be fair and logical?
Are ideas expressed clearly and concisely (to avoid misunderstandings)?
Would the reader feel this message is reader-centered?
Does the message serve as a vehicle for developing positive business relationships - even when the
message is negative? Are ideas stated tactfully and positively and in a manner that preserves the reader's
self-worth and cultivates future business?
Does the message reflect the high standards of a business professional: quality paper, correct
formatting, good printing quality, and absence of spelling mistakes and grammatical errors?
Concentration on these points will boost the reader's confidence in the writer's competence and will
communicate nonverbally that the reader is valued enough to merit the writer's best effort.
1. Age. A letter answering an elementary-school student's request for information from
your company would not be worded like a letter answering a similar request from an
adult.
2. Economic level. A banker's collection letter to a prompt-paying customer is not likely
to be the same form letter sent to clients who have fallen behind on their payments for
small loans.
3. Educational/Occupational background. The technical jargon and acronyms used in a
financial proposal sent to bank loan officers may not be suitable in a proposal sent to a
group of private investors. In the same way, a message to the CEO of a major
corporation may differ in style and content from a message to one of the stockholders.
4. Culture. The vast cultural differences between people increase the complexity of the
communication process. A letter containing expressions like "they were clean bowled" and
"clear the pitch" would confuse a manager from a different culture.
5. Rapport. A sensitive letter written to a long-time client may differ significantly from a
letter written to a newly acquired client.
6. Expectations. Letters containing errors in spelling and grammar will cause the reader to
doubt the credibility of the source, particularly when the letter is sent by a
professional like a doctor, a lawyer, or an accountant.
7. Needs of the reader. Just as successful sales personnel begin by identifying the needs of
the prospective buyer, an effective writer attempts to understand the reader's frame of
reference as the basis for developing the message's organization and content.
Understanding the audience helps the writer see the issue or subject from the reader's
point of view.
This message clearly shows no concern for the reader's problems and no regard for his status as a
client. The tone is aggressive and accusatory, not only conveying a bad image of the Tiger Tuff
company, but also alienating the client and destroying the rapport that had been built up
over many years.
Although knowledge about the recipients assists writers in developing empathy, writers can learn
to predict readers' reactions with reasonable accuracy by placing themselves in the reader's
shoes. To do so, they should ask themselves the following questions:
Would I react favorably to a message saying my request is being granted?
Would I experience a feeling of disappointment upon learning that a
request has been refused?
Would I be pleased when an apparently sincere message praises me for a
job well done?
Would I experience some disappointment if I am informed that my
promised pay increase is being postponed?
To understand your audience and develop a sense of empathy toward it, ask yourself how you
would react if you were in the other person's position. Asking that question before you write a
message greatly simplifies the task of organizing the message.
All business communicators face the problem of compressing complicated, closely related
ideas into a linear message that proceeds sequentially from point to point. People simply
don't remember disassociated facts and figures, so successful communicators rely on
organization to make their messages meaningful. Before discussing how to achieve good
organization, let us explore why it is so important.
Why Organization is Essential
When a topic is divided into parts, one part will be recognized as a central idea and the others as
minor ideas (details). The process of identifying these ideas and arranging them in the right
sequence is known as outlining or organizing. Outlining before writing provides numerous
benefits:
• Encourages brevity and accuracy. (Reduces the chance of leaving out an
essential idea or including an unessential one.)
• Permits concentration on one phase at a time. (Having focused separately on
(a) the ideas that need to be included and (b) the distinction between major and
minor ideas, the writer is now free to concentrate totally on the problem of
writing.)
• Saves time in writing or dictating. (With questions about which ideas to
include and their proper sequence already answered, little time is lost in moving
from one point to another.)
• Facilitates emphasis and de-emphasis. (An effective outline ensures that
important points will appear in emphatic positions.)
The preceding benefits derived from outlining are writer-oriented. Readers also benefit from a
well-outlined message:
• The message is more concise and accurate.
• The relationships among ideas are easier to distinguish and remember.
• Reaction to the message and its writer is more likely to be positive.
Reader reaction to a message is strongly influenced by the sequence in which ideas are presented.
Throughout this chapter, and other chapters on letter writing, outlining and reader response will be
considered frequently.
How to Organize Letters
Writers need organization to ensure that their ideas are presented clearly and logically, increasing the
likelihood that the readers will react positively to their message. Before listing the first point of an
outline, you should ask yourself:
What will be the central idea of the message?
To answer this question, think about the reason for writing. Is the purpose to get
information, to answer a question, to accept an offer, to deny a request? If the letter
were condensed into a one-sentence message, that sentence would be the central idea.
In addition, ask yourself two more questions:
• What will be the most likely reader reaction to the message?
• In view of the predicted reader reaction, should the central idea be listed first in the
outline; or should it be listed as one of the last items?
To answer the above questions, examine how you would react if you were the one receiving the
message you are preparing to send. Like all readers you would react with pleasure to good news and
displeasure to bad news; and you can reasonably assume that your reader's reaction would be
similar. Almost every letter will fit into one of four categories of anticipated reader reaction: (1)
pleasure, (2) displeasure, (3) interest but neither pleasure nor displeasure, or (4) no interest. By
considering anticipated reader reaction, the writer increases the effectiveness of the letter and
maintains good human relations.
After a letter has been classified into one of the four categories of reader reaction, the next issue to
consider is whether the letter should be organized deductively or inductively. In the deductive
approach, the central idea is placed first and is then followed by the evidence. In the inductive
approach, the evidence is placed first so as to lead up to the main idea. In general, the deductive (or
direct) approach is used when the audience is expected to be receptive, eager, interested, pleased or
even neutral toward the message. If the audience is likely to be resistant - displeased, uninterested,
or unwilling - the writer would have better results with the inductive (or indirect) approach. See
figure 10.1 for a summary of different organizational patterns for different types of audience
reactions.
Bear in mind, however, that each message is unique. Not all communication problems can be
solved by simple formulas. If you are sending bad news to outsiders, for example, an indirect
approach is probably the best. On the other hand, you might want to get directly to the point with an
associate or old friend even if your message is unpleasant. The direct approach might also be the
best choice for long messages, regardless of the audience attitude, because delaying the main point
could cause confusion and frustration. Just remember that the first priority is to make the message
clear.
Because deductive messages are easier to write, and because pleasant (good-news and goodwill)
and routine messages follow similar outlines, they are discussed together in this chapter. Inductive
messages are discussed in more detail in subsequent chapters.
AUDIENCE REACTION | ORGANIZATIONAL PLAN | OPENING | BODY | CLOSE

Eager or interested | Direct requests | Begin with the request or main idea. | Provide necessary details. | Close cordially and state the specific action desired.

Pleased or neutral | Routine, good-news, and goodwill messages | Begin with the main idea or the good news. | Provide necessary details. | Close with a cordial comment, a reference to the good news, or a look toward the future.

Displeased | Bad-news messages | Begin with a neutral statement that acts as a transition to the reasons for the bad news. | Give reasons to justify a negative answer; state or imply the bad news and make a positive suggestion. | Close cordially.
The deductive arrangement offers several advantages:
1. The first sentence is easy to write.
2. The first sentence is likely to attract attention. Coming first, the major idea gets the attention it
deserves.
3. When good news appears in the beginning, the message immediately puts readers in a pleasant
state of mind, rendering them receptive to the details that follow.
4. The arrangement reduces the reading time. Once readers have grasped the important idea, they can
move rapidly through the supporting details.
This basic plan is applicable in several business-writing situations: (1) routine claim letters and "yes"
replies, (2) routine requests related to credit matters and "yes" replies, (3) routine order letters and
"yes" replies, and (4) routine requests and "yes" replies.
Poor Example of a Routine Claim Letter

Dear Mr Kumar:
[Body of the annotated letter, keyed to the numbered comments below]
Sincerely,
K Srinivas
Contractor
1. Subject line is too general to aid the reader in understanding the subject.
2. Begins with detail that will lead to the main idea.
3. Makes an observation that could be omitted (although it does show that the writer is willing to
give credit where credit is due).
4. States directly information that is already known.
5. Moves toward the main idea.
6. Continues with detail.
7. Presents a reason for making the upcoming request.
8. States the main point of the letter that should have been presented in the first paragraph.
Stating the claim in such a long sentence keeps the request from receiving the emphasis it
needs.
9. Uses a cliche.
10. Does close on a positive note but uses "enthused"; "enthusiastic" should be used instead.
Ineffective and effective applications of the deductive outline are illustrated in the sample letters
in this chapter. Detailed comments have been made to help you see how principles are applied or violated.
Typically, a poorly written and poorly organized example is followed by a well-written and well-
organized example. The commentary on poor examples explains why certain techniques should be
avoided, and the commentary on well-written examples demonstrates ways to avoid certain
mistakes. The well-written examples are designed to illustrate the application of principles of
good writing; they are not intended as models of the exact words, phrases, or sentences that should
appear in the letters you write. The aim of this case study technique is to enable you to apply the
principles you have learned and create your own well-written letters.
Routine Claims
A claim letter is a request for an adjustment. When writers ask for something to which they think
they are entitled (such as a refund, replacement, exchange, or payment of damages), the letter is called a
claim letter. These requests can be divided into two groups: routine claims and persuasive claims.
Persuasive claims, which will be discussed in a later chapter, assume that the request will be
granted only after explanations and persuasive arguments have been presented. Routine claims -
possibly because of guarantees, warranties, or other contractual conditions - assume that the
request will be granted quickly and willingly, without persuasion. Because it is assumed that
routine claims will be granted willingly, a forceful tone is inappropriate.
When the claim is routine (not likely to meet resistance), the following outline is recommended:
1. Request action in the first sentence.
2. Explain the details supporting the request for action.
3. Close with an expression of appreciation for taking the action requested.
Surely the builder intended to install 50-litre heaters; otherwise, the building contract would not
have been signed. Because the mistake is obvious, the builder would not need to be persuaded. And
because compliance can be expected, the claim can be stated without prior explanation.
Without showing anger, suspicion, or disappointment, the claim letter in the figure asks simply and
directly for an adjustment. As a result, the major point receives deserved emphasis, and given the
nature of the audience, the response to it should be favorable.
Favorable Response to a Claim Letter: A response to a claim letter is termed an 'adjustment
letter'. By responding favorably to legitimate requests, businesses gain a reputation for standing
behind their goods and services. A loyal customer becomes even more loyal after a business has
demonstrated such integrity.
Since the subject of an adjustment letter is related to the goods or services provided, the letter can
serve easily and efficiently as a low-pressure sales letter. For example, a letter about a company's
wallpaper might also mention its paint. This type of sales message in an adjustment letter has a
better chance of being read than a direct sales letter.
When the response to a claim letter is favorable, present ideas in the following sequence:
Good Example of a Routine claim Letter
Dec 3, 2011
Mr. G. Bharat Kumar, Owner
Kumar Plumbing & Heating Company
78 Namboodri Street,
Chennai 370001
Dear Mr Kumar:
1 Water Heater Specifications for Mayur Street Apartments
2 Please replace the two 30-litre water heaters (installed last week) with 50 litre units
3 Large units are essential for families with children. 4 For that reason, the contract specifies a
50-litre heater for each of the 12 apartments
5 The project appears to be well ahead of schedule; thanks for your efforts.
Sincerely,
K Srinivas
Contractor
MODULE 6
ORAL PRESENTATION
MODULE 6A : INTRODUCTION
Oral presentations have one big advantage over written presentations: they permit a dialogue
between the speaker and the audience. The listeners can offer alternative explanations and
viewpoints, or simply ask questions that help the speaker clarify the information.
Oral presentations are therefore an increasingly popular means of technical communication. You
can expect to give oral presentations to three different types of listeners:
The situations that demand oral presentations are numerous, and they increase as you assume
greater responsibility within an organization. For this reason, many managers and executives
seek instruction from professionals in different kinds of oral presentations. But oral
presentations on technical subjects are essentially the spoken form of technical writing. Just as
there are a few writers who can produce effective reports without outlines or rough drafts, you
might know a natural speaker who can talk to groups "off the cuff" effortlessly. For most
persons, however, an oral presentation requires deliberate and careful preparation.
Objectives
MODULE 6B : PREPARING TO GIVE AN ORAL PRESENTATION
Talking to a group is part of the job of any professional. One to two speeches, technical briefs,
or seminars a year are typical for scientists and engineers beginning their careers, and the number
of talks increases as the years advance. But typical beginners are not well prepared for this job;
instead, they stand before the group, nervous, uncertain of how to establish an appropriate tone,
aware of the gap between the information they are presenting and the amount their audience is
absorbing.
The guidelines in this Unit are directed at making technical presentations not only correct but also
comprehensible for the group you are addressing and the reaction you seek from that group.
Before you write the first word, whether your topic is a new product or research results, think
about two things: your audience and your purpose. The two are fundamental to your speech.
Audience
Before you start, you need to think about who is in the audience, why they are there, and what
they know already. Regardless of your topic, the way to develop it is dictated almost entirely by
audience background. This is particularly true for technical subjects, which by definition draw on
highly specialized experiences.
Consider Audience
Answering these questions will help you decide how technical you can be, when you need to define
terms, and what tone you should take. You won't communicate effectively if your audience fails
to understand your terminology or if your tone is offensive. Your only reason for speaking is to
communicate to your audience. Tailor your content and style accordingly.
Purpose
At the beginning, many speakers almost always encompass too wide a field in their speeches. Instead,
pinpoint what you have to say in one idea and then develop this idea slowly and methodically in the time
allotted.
In other words, before you write a word, decide what idea you want your listeners to take out the door
with them. Then arrange your arguments and visual accompaniments to support this idea.
There is a great difference between reading information and absorbing spoken information. In printed
text, the reader can return to tables, to graphs, to difficult ideas. There is time to go back and ponder. This
possibility does not exist in speaking, and that makes all the difference.
Determining Objective
Determining your reason for speaking is as important as recognizing your audience. Is the purpose of
your speech to inform? Or do you hope to persuade, explain procedures, or motivate?
Inform
If your speech is to inform, all you want to do is update your listeners. Such a speech could be
about new taxation laws affecting listeners' pay, new management hirings, or budget restraints
affecting capital equipment purchases. Speeches which inform don't require any action on the part of
your audience. Your listeners won't change the tax laws, alter hiring practices, or increase the budget.
The informative speech merely keeps the audience up to date on business changes.
In such a speech, you’ll want to clarify when changes will occur, who will be affected, and how the
changes will affect your audience. Leave time for questions and answers to clarify confusing points.
However, in an informative speech, little audience involvement is necessary.
In contrast, speeches which persuade, explain procedures, or motivate demand audience
involvement. You're not just informing; you're asking your listeners to act. For instance,
You might be speaking about the need to hold more regular and constructive quality circle
meetings (persuasion).
You might be telling your audience how to perform a series of steps for computer operation
(explaining procedures).
You might be telling your audience that they can achieve a higher level of productivity
through mutual trust and cooperation (motivation).
In each instance, you want your audience to leave the speech ready to implement your suggestions.
Writing the Speech
Writing out a speech will prevent a disorganized presentation. It's a basic step in preparing a technical
talk. Bear in mind, however, that writing out a manuscript does not mean delivering it by reading the
text word for word. The manuscript is the first step in a successful oral presentation rather than the
final one. Here are steps for writing out a manuscript:
1) Begin with an outline. Shape it to the time allotted. Let's say you have 30 minutes.
Divide the presentation into an introduction, body of the text, and conclusion. If you take
3 minutes for the introduction and 3 minutes for the conclusion, you have 24 minutes left.
Decide now how many ways you want to elaborate on or explain your point. List them under
the main discussion topic. If you have six examples plus two restatements, that's 3 minutes for
each segment. For example:
I. Introduction: Statement of argument.
II. Body : Technical support for argument (maximum of six examples or elaborations)
1.
2.
(Restatement of points 1 and 2 and relationship to argument)
3.
4.
5.
(Restatement of points 3, 4, 5 and relationship to argument)
6.
III. Conclusion: Restatement of argument.
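The time budget sketched above is just an even division of the remaining minutes. As a quick illustration (the helper function below is a hypothetical aid, not part of the original text):

```python
def time_per_segment(total_min, intro_min, concl_min, n_segments):
    """Split the time left after the introduction and conclusion
    evenly across the body segments (examples plus restatements)."""
    body = total_min - intro_min - concl_min
    return body / n_segments

# A 30-minute talk with a 3-minute introduction and a 3-minute
# conclusion leaves 24 minutes; six examples plus two restatement
# pauses give 8 body segments of 3 minutes each.
print(time_per_segment(30, 3, 3, 8))  # -> 3.0
```

If a point turns out to need more time, reduce the number of segments rather than squeezing the explanation.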
2) Putting aside the introduction and conclusion, write out each of the points in the body of the text.
You may need more than three minutes to expand an item if the point turns out to be more
complicated than your initial estimate suggested. That means you'll have to reduce the total
number of points you're using to back up the argument. You may be able to make only three
or four elaborations.
3) As you write, begin thinking about the visual accompaniments you may use. These will take
part of the time allotted for explanations. If you decide to use slides, overhead transparencies,
tagboard, or other visuals, indicate references to them as you write.
4) Return to the introduction. Write a clear statement of the argument. Follow it with a "road
map" - either all of the major divisions or the first subtopic you will discuss.
5) Cast a critical eye over the speech. Arrange and rearrange arguments for persuasiveness and
logic. Remember that the internal logic of the document is probably much clearer to you, the
writer, than to your uninitiated listener. As you revise, bear in mind the difference between a
written and an oral presentation. You'll need to restate points from time to time, and you'll also need to
recap your arguments periodically. Finally, go over this checklist.
The introduction to the oral presentation is very important, for your opening sentences must gain and
keep the audience's attention. An effective introduction might define the problem that led to the
project, offer an interesting fact that the audience is unlikely to know, or present a brief quotation
from an authoritative figure in the field or a famous person not generally associated with the field.
All these techniques should lead into a clear statement of the purpose, scope, and organization of
the presentation. If none of these techniques offers an appropriate introduction, a speaker can begin
directly by defining the purpose, scope, and organization. A forecast of the major points is useful
for long or complicated presentations. Don't be fancy. Use the words scope and purpose. And don't
try to enliven the presentation by adding jokes. Humor is usually inappropriate in technical
presentations.
The conclusion, too, is crucial in an oral presentation, for it summarizes the major points of the talk
and clarifies their relationships with one another. Without a summary, many otherwise effective
oral presentations would sound like a jumble of unrelated facts and theories.
Presentation Plan
Given below is an example of a presentation plan.
Topic : _____________________________________________________________
Objectives : (What do you want your audience to believe or do as a result of your presentation?)
____________________________________________________________________
____________________________________________________________________
____________________________________________________________________
____________________________________________________________________
Development: (What main points are you going to develop in your presentation?)
1) ___________________________________________________________________
__________________________________________________________________
2) ___________________________________________________________________
__________________________________________________________________
3) ___________________________________________________________________
__________________________________________________________________
4) ___________________________________________________________________
___________________________________________________________________
Organization: (Will this presentation lend itself to a particular organizational format, such as
comparison/contrast, chronology, or analysis?)
You may want to write a more detailed outline focusing on your speech's major units of discussion
and supporting documentation. The following speech outline is a template for your speech
presentation.
Title:_________________________________________________________________________
Purpose: _____________________________________________________________________
I) Introduction
_________________________________________________________________
_________________________________________________________________
_________________________________________________________________
_________________________________________________________________
II) Body
A) First main point__________________________________________________
1) Documentation / subpoint:______________________________________
a) Documentation / subpoint:______________________________________
b) Documentation / subpoint: __________________________________
2) Documentation / subpoint:______________________________________
3) Documentation / subpoint:______________________________________
B) Second Main Point : _____________________________________________
1) Documentation / subpoint:______________________________________
2) Documentation / subpoint:______________________________________
a) Documentation / subpoint:____________________________________
b) Documentation / subpoint:____________________________________
3) Documentation / subpoint:______________________________________
Note Cards
If you decide that presenting the speech from the outline will not work for you, then you can write
highlights of the speech on 3" x 5" cards. One caution: avoid writing complete sentences or filling
the cards side to side. Just write short notes (phrases or key words) which will aid your memory.
Once your speech is outlined and note cards are prepared, practise, incorporating all your visual
aids. To speak longer than requested will infringe on another speaker's time, on question-and-
answer time, or on coffee-break time. Practising will help you meet any time constraints.
Preparing the Graphic Aids
Graphic aids fulfill the same purpose in an oral presentation that they do in a written one: they
clarify or highlight important ideas or facts. Statistical data, in particular, lend themselves to
graphical presentation, as do representations of equipment or processes. The same guidelines that
apply to graphic aids in text apply here: they should be clear and self-explanatory. The audience
should know immediately what each graphic is showing. In addition, the material conveyed
should be simple. In making up a graphic aid for an oral presentation, be careful not to overload it
with more information than the audience can absorb. Each graphic aid should illustrate a single idea.
Remember that your listeners have not seen the graphic aid before, and that they, unlike readers, do not
have the opportunity to linger over it.
In choosing a medium for the graphic aid, consider the room in which you will give the presentation.
The people seated in the last row and near the sides of the room must be able to see each graphic aid
clearly and easily (a flip chart, for instance, would be ineffective in an auditorium). If you make a
transparency from a page of text, be sure to enlarge the picture or words; what is legible on a printed
page is usually too small to see on a screen.
Although there are no firm guidelines on how many graphic aids to create, a good rule of thumb is to
have a different graphic aid for every 30 seconds of the presentation. Changing from one to another
helps you keep the presentation visually interesting, and it helps you signal transitions to your audience.
It is far better to have a series of simple graphics than to have one complicated one that stays on the screen
for 10 minutes.
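The 30-second rule of thumb translates into a simple count of aids for a talk of a given length. As a quick illustration (the helper below is hypothetical, not from the original text):

```python
import math

def graphic_aid_count(presentation_minutes, seconds_per_aid=30):
    """Estimate how many graphic aids a talk needs, using the rule
    of thumb of one aid for every 30 seconds of presentation."""
    return math.ceil(presentation_minutes * 60 / seconds_per_aid)

# A 10-minute talk would call for about 20 simple graphics.
print(graphic_aid_count(10))  # -> 20
```

Treat the result as an upper bound on complexity, not a quota: many simple graphics beat one dense one.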
After you have created your graphic aids, double check them for accuracy and correctness. Spelling
errors are particularly embarrassing when the letters are six inches tall.
Following is a list of the basic media for graphic aids, with the major features cited.
1) Slide Projector: Projects previously prepared slides onto a screen.
Advantages
• It has a very professional appearance.
• It is versatile - can handle photographs or artwork, colour or black-and-white.
• With a second projector, the pause between slides can be eliminated.
• During the presentation, the speaker can easily advance and reverse the slides.
Disadvantages:
• Slides are expensive.
• The room has to be kept relatively dark during the slide presentation.
2) Overhead Projector: Projects transparencies onto a screen.
Advantages:
Disadvantages:
3) Opaque Projector: Projects a piece of paper onto a screen
Advantages:
It can project single sheets or pages in a bound volume.
It requires no expense or advance preparation.
Disadvantages:

4) Posters:
Advantages:
It is inexpensive.
It requires no equipment.
Posters can be drawn or modified "live".
Disadvantages:
5) Flip Chart: An oversized note pad sitting on an easel.
Advantages:
It is relatively inexpensive.
It requires no equipment.
The speaker can easily flip back or forward.
Posters can be drawn or modified “live”.
Disadvantages:
6) Felt Board: A hard, flat surface covered with felt, onto which paper can be attached using
tape
Advantages:
It is relatively inexpensive.
It is particularly effective if a speaker wishes to rearrange the items on the board during
the presentation.
It is versatile – can handle paper, photographs, cutouts.
Disadvantages:
7) Chalkboard
Advantages:
Disadvantages
8) Objects: Such as models or samples of material that can be held up or passed around the
audience.
Advantages
They are very interesting for the audience.
They provide a very good look at the object.
Disadvantages
Audience members might not be listening while they are looking at the object.
The object might not survive intact.
9) Handouts: Printed material distributed to the audience.
Advantages
Much material can be fit on paper.
Audience members can write on their copies and keep them.
Disadvantage
Audience members might read the handouts rather than listen to the speaker.
(A word of advice: Before you design and create any graphic aids, make sure the room in which you
will be giving the presentation has the equipment you need. Don't walk into the room carrying a stack
of transparencies only to learn that there is no overhead projector. Even if you have arranged
beforehand to have the necessary equipment delivered, check to make sure it is there; if possible, bring
it with you.)
There are two basic methods of delivery: extemporaneous speaking and manuscript reading.
Extemporaneous Speaking: This is the more useful of the two methods in most corporate,
industrial, and professional settings. Extemporaneous delivery does not entail memorizing
the speech word for word. Instead, the speaker takes each topic within the speech and
practises delivering the information until it can be spoken rather than read. The only text
actually carried by the speaker is a skeleton outline with key words and side lists, plus
statistics, quotations, or other items that need to be quoted verbatim.
This mode has many advantages over manuscript reading. First of all, the rhythm and pace of the talk will
be closer to those occurring in natural speech: the speaker who talks rather than reads tends towards
shorter sentences, more repetition, use of contractions, and other natural speech rhythms. Further, spoken
delivery permits good eye contact. You, the speaker, can add explanations if the group seems confused,
or edit information if the group turns out to be more sophisticated than anticipated. Voice, tone, and
emphasis can be varied in relation to audience response.
Manuscript Reading: It is hard for most listeners to attend to someone reading a speech. There's
little eye contact and little variation in pace or tone, since the reader is not guided by audience
response. There are some exceptions, though. If you must read the manuscript, practise so that you
can look up regularly and at length.
Practicing Extemporaneous Delivery: Read through your text two or three times until you are
comfortable with the arguments. Then practice the speech out loud, without reading it, section by
section.
Saying the speech out loud works well for most people. Do not try to get each section word for
word. Instead, try delivering all the details within one section. If you fumble, find a simpler way
to state the point. Use an index card for any details such as figures or quotations that need to be read.
Remember, the section does not have to be word-perfect. You may phrase a statement one way
during practice, another way during delivery. That is not significant; what is important is a clear way
of stating each point.
Some people use tape recorders, videotape recorders, or the mirror while they are practising. Others
are paralyzed with self-consciousness by such maneuvers. Tape and video recorders are fine - so long
as they support rather than inhibit your performance.
After you can say each point aloud, go through the entire speech. Make up a skeleton outline in which
you have two or three words or phrases to remind you of each major point. This outline will be useful
when you are speaking. It will prevent your losing the order of points or omitting major topics in the
course of the speech.
Check this skeleton outline against your list of visual aids, and coordinate the two by numbering the
visuals and inserting these numbers in the skeleton outline.
SAQ1
What are the steps you need to follow to make an effective Oral Presentation?
SAQ2
What are some of the Graphic Aids you can use to improve the impact of your
Presentation?
MODULE 6C : MAKING THE PRESENTATION
Clutching a full script in hand while speaking is usually a bad idea; you may find yourself
succumbing to the security of reading it. Instead, take only (1) the outline, with key words and slides
indicated, and (2) index cards with anything that must be read verbatim. When the time comes, reach
for the card and read the quote or the statistic. Otherwise, address the audience directly.
Your goal, of course, is to have the audience listen to you and have confidence in what you are
saying. Try to project the same image that you would in a job interview: restrained self-
confidence. Show your listeners that you are interested in your topic and that you know what you
are talking about. As you give the presentation, this sense of control is conveyed chiefly through
your voice and your body.
To avoid problems caused by mechanical failures, take the machinery out, set it up in conditions
that duplicate those for your speech, and do a trial run.
A peculiar thing happens when people use slides and transparencies. Instead of reading from a
manuscript, they read from the slides. They bury their heads in the slides in the same way that shy
speakers use manuscripts to escape looking at the audience.
Step to the side of the screen and face the audience. This is difficult to do. Your illustrations are
behind you, and there will be a strong tendency to turn toward the screen and away from the
group.
If you can, use electric pointers that project an arrow on the screen, or collapsible metal
pointers that fit in a pocket or briefcase when not in use. A pencil or index finger won't work:
you'll cast a shadow on the screen, obscuring the data just when someone
needs to look at it.
Furthermore, do not turn the lights off. It's a sure way to send a portion of the audience to sleep,
and to frustrate those who want to take notes. Instead, experiment so that you have to reduce only
the light over rows immediately in front of the screen. If you are using an overhead projector,
turn it off when not in use. The glare and machine noise are distracting.
Organisation: A rough rule is two minutes per slide. Remember that the heart of a speech is
variety, and that you need to change what is in front of your listener to create a less passive
experience. If the slide takes longer than two minutes to explain, it may be too complicated. In this
case try to divide the information so that you have two or three separate slides.
The problem with slides is that unless one is careful, they end up controlling the
content of the speech. One starts using them with the best of intentions; they are a simple, attractive
way to illustrate technical presentations. But those of you who have sat through speeches where the
slides sailed past - too fast, too complicated, too many - know that slides can be disastrous. Take
care to simplify and focus their presentation.
You should write an introduction and a conclusion that are freestanding, that can exist independent
of the visuals. Then use readable slides in the body of the text if they will aid your talk, but
remember that they should be used to illustrate points rather than as the points themselves.
The blackboard has advantages. For one thing, it slows the speaker down. It's harder to rush through
complicated illustrations, as speakers routinely do with slides, when the speaker has to draw each figure
laboriously. The board has another advantage: if you draw a simple figure, outline, or equation, you can
add to it as you talk. If, for instance, you are explaining how to read a weather map, you can add each
notation as you introduce it rather than present the entire map at once. The effect is similar to that
achieved with transparencies when you add successive details through overlays.
Two pieces of advice if you use a blackboard: (1) Stop talking while you're writing. (2) Add
illustrations with some sort of plan so that you'll have a coherent display at the end rather than an
unrelated series of notes.
Companies usually have an alternative to the blackboard - an oversized note pad called a flipchart
(often as large as 24 x 24 inches) sitting on an easel. Illustrations, outlines, equations, or key words
may be prepared in advance and the flipchart used to illustrate the talk, or the illustrations may be
done during the talk.
The paper surface is easier and quicker to write on than a blackboard, particularly if you use a felt-
tipped pen. Speakers often tear off each page as they finish and tape it to the wall. In this way, up to
20 pages can be displayed for reference during the talk.
Tagboards, prepared in advance for display during the talk, can also be useful. Like slides, they should
have large lettering that is visible beyond the first few rows. If the board is sturdy, it can be placed on an
easel during the talk.
Audience Participation
As a speaker, you need to imagine what you want your audience to be doing while you're talking. This
is a necessary consideration. For instance, if you want them to take notes, you'll need to arrange
appropriate desks and lighting.
Questions: If you are allowing time for responses to your talk, you'll need to prepare for questions and
the inevitable challenges that accompany critical discussions. Certain types of questions -
misunderstandings by the audience, requests for further information - are easily fielded if you have
prepared. Skilled speakers often imagine before a talk what questions they will be asked, and then check to
make sure they have concise answers at hand. Questions involving controversial issues are harder to
handle. Some will be irrelevant. An occasional listener uses a question as an opportunity to deliver a
speech on an entirely different issue. "Yes, that certainly seems to be a problem" is a reasonable
response to such rhetoric, rather than getting involved in discussing a side issue. For questions that
challenge the speaker and present an alternative interpretation, it's probably best to (1) rephrase the
question to be sure you've understood it, (2) acknowledge that there are differences of opinion on this
issue, and (3) reiterate your own position: "Yes, I'm familiar with that data. My interpretation, as I
argued, is quite different." Avoid becoming defensive.
Eye Contact: You can't stare at people, but you do have to look at them while you're talking. Practice
looking at your audience, shifting your gaze from one member of the group to another. There will always
be three or four persons interested in what you're saying. Return to them when you need feedback, and
otherwise try to address yourself to each person in the room. Practicing a speech until it can be
delivered extemporaneously is a good first step in learning to control your voice and keeping it within a
normal range.
Voice: Once you have learned to talk extemporaneously, you'll find you are naturally changing your rate from time to
time to give emphasis, and this change will be useful in keeping the audience attentive. Skilled
speakers learn that variations are important in all aspects of the speech - from eye contact and the
visual displays to tone and rate of speech.
If you have to use a mike, try to use one you can clip to your shirt or wear around your neck. If you use a
stationary model attached to the lectern, your voice will rise and fall as you step to the side to discuss
the slides, and if it is a hand-held mike, you may end up with the mike in one hand and the pointer in the
other, an awkward situation.
Bearing: You'll want to move, and for good reason - muscle activity releases tension, and most speakers
are tense. Go right ahead and relieve the tension, but do so in a measured way, by stepping forward to
make a point, or by using your arm in a natural gesture that accompanies what you're saying. Try to keep
nervous mannerisms to a minimum.
The Structure of a Speech as a Whole
A 30-minute speech can be divided as follows:
1) Opening (first three to five minutes): Some speakers begin with a joke - a mistake if you
don't have a good joke or a professional's knack for delivering the punch line. Others begin
with a topical anecdote - for instance, a news item on their subject. Still others start with
an allusion to the occasion, the nature of the audience, or some other personalized
reference to the group and purpose.
The opening is tricky, however, and if you are uncomfortable to begin with, a wall of
unsmiling faces just after you've told your anecdote is not likely to relax you. If you want
to begin with an anecdote, a topical allusion, a reference to the occasion, a joke, or a
quotation, be sure that you are willing to live with the results if the lead doesn't work.
Otherwise, stick to what is essential in any introduction: state the subject and define any
terms. This is more than adequate for any talk.
This sort of clear-cut, businesslike introduction will be appreciated by most of your
audience.
It is particularly important not to neglect the introduction if you are showing slides. Avoid
the tendency to dive right in and to let the slides dictate the content of the speech. Don't
sacrifice the introduction. The listener needs it to establish a foundation for the rest of the
speech.
2) Body of Speech (20 minutes): Whether you use slides and transparencies, or simply speak directly
to the audience, allow two to three minutes to make each argument. Remember, your audience does not have
the advantage of reading. They cannot stop when they come to a puzzling item, pause over it, or refer to an
earlier section as they could if the information were printed.
As you proceed, use transitional words and phrases to help the listener along: "So, as you see . . .",
"As I was saying . . .", "All it means is . . ."
At two or three points, give an internal summary. Restating the points as you develop them will be an aid to
the listener.
Look out at the audience, not at your notes or the screen. If you use an outline, coordinate it with your visuals
in case you lose your place.
If you use slides or transparencies, don't simply flash them - explain them.
There is a variety of techniques you can use to keep the audience attentive: pause and look around before
making a point, change the tempo from time to time, use a gesture or a step forward to emphasize a point.
Some speakers employ demonstrations or hold up objects relevant to the discussion.
If possible, present the argument in puzzle order. That is, instead of giving the answer, pose the situation,
and gradually work your way to the solution. If you are clever about this, you'll do it in such a way that
the audience will get there one jump ahead of your statement of the solution. This will give them the pleasure
of solving the puzzle, add a little flair to the presentation, and make the speech a less passive experience.
Watch the clock. The first time you get up to speak, you may find that, like the person at the opera who
didn't much care for it, you will look at your watch after what feels like two hours and discover that only
five minutes have passed. But as you become more accustomed to speaking, you'll find you tend to
exceed your time, particularly if you find ways to explain points in an interesting and lively way. Your
audience will follow what you're saying, and you won't notice how quickly the moments have gone.
Dividing the speech into sections, represented by key words in your outline, will help you avoid
this. You can look down, see how much you have to go, and how much time is left. In this way you
can easily judge if you are going to end too soon, or, as is more often the case, exceed your allotted
time.
3) Closing (last three to five minutes): Crush any tendency to sound apologetic. Instead, wind
up with a brisk summary of your central point. Include an upbeat anecdote or quotation if you have
one that illustrates the gist of the talk. Then call for questions. Answer them pleasantly and don't be
defensive.
SAQ3
What are the two ways you can deliver the Presentation?
Activity
Prepare a technical talk and use the following checklist for self-evaluation. Alternatively you
may ask a friend to evaluate your presentation.
Check List for Evaluation of A Technical Talk
______________________________________________________________________________
Presentation
1) Eye contact, voice, bearing, gestures, timing
(Did I look at all members of the group?
Was the pace brisk or lethargic? Were voice,
bearing and gestures used effectively?)
3) Unusual aspects (Good introduction?
Lively format? Unusual illustrations?
Upbeat ending? Effective eye contact?)
______________________________________________________________
MODULE 6D SUMMARY
An effective oral presentation begins with a careful assessment of the speaking situation.
Highlight important points so as to register properly with audiences who may hear the speech
only once. One should use visual aids to help listeners grasp the main points of the speech. Every
presentation has an introduction, body and conclusion. The introduction sets the tone and previews
the speech. The body has clear transitions and important details and is presented in a set time-
frame.
To deliver a good presentation, rehearse several times until you can speak comfortably from note
cards. Try to speak in a conversational manner as if talking extemporaneously to friends. The
secret of success is to be well-informed and well-prepared.
PHY-102
Physics
INDEX
MODULE 4 Semiconductors 56
MODULE 5 Lasers 86
CONTENTS
MODULE 1 Bonding in solids
MODULE 1A Introduction
MODULE 1B Cohesive energy
MODULE 1C Calculation of cohesive energy of ionic solids
MODULE 1d Cohesive energy of Sodium Chloride (NaCl) crystal
Solved problems
Information for quiz
MODULE 4 Semiconductors
MODULE 4A Introduction
MODULE 4B Intrinsic Semiconductors
MODULE 4C Extrinsic Semiconductors
MODULE 4D Minority carrier life time
MODULE 4E Drift and Diffusion
MODULE 4F Einstein relation
MODULE 4G Equation of continuity
MODULE 4H Hall effect
MODULE 4I P-N junction
MODULE 4J Width of depletion layer of the P-N junction
MODULE 4K Volt-ampere characteristics
MODULE 4L Zener diode
MODULE 4M Varactor (varicap) diode
MODULE 4N Light Emitting Diode (LED)
MODULE 4O Solar cells
Solved problems
Information for quiz
MODULE 5 Lasers
MODULE 5A Introduction
MODULE 5B Einstein coefficients
MODULE 5C Pumping and population inversion
MODULE 5D Ruby laser
MODULE 5E He-Ne laser
MODULE 5F Semiconductor laser
MODULE 5G Applications of laser
Solved problems
Information for quiz
MODULE 1
Bonding in Solids
Module 1A. Introduction
Matter exists in three different states, namely solid, liquid and vapour (or gas),
and is composed of atoms and molecules. The properties of matter are essentially
dependent on the way the atoms and the molecules are arranged inside. Atoms are
closely packed in solids. These atoms are arranged in a particular fashion which is
determined by the strength, character and directionality of the binding forces. The adjacent
atoms are held together by bonds which are made up of attractive and repulsive forces
that just balance each other. The process of holding the atoms together is called bonding.
The type of bonding plays a very important role in determining physical, chemical,
electrical and other properties.
It is known that inter-atomic forces or bonds hold together the individual atoms of
solids. Thus solids are considerably strong and slightly elastic. Solids cannot be
easily compressed because, in addition to the attractive forces, there are also repulsive
forces. The attractive forces are basically electrostatic in solids. The classification of
the different types of bonding is strongly dependent on the electronic structure of the atoms in a
solid.
Chemical bonds are classified as primary and secondary, according to their
strength and directionality. Primary bonds are inter-atomic, whereas secondary bonds are
intermolecular. The attractive forces in primary bonds are directly associated with the
valence electrons. The outer shell containing the valence electrons is in a high energy state and
relatively unstable. But it can become stable by sharing of electrons, i.e., either
gaining or losing them, and thus forming atomic or primary bonds.
The primary bonds are of three principal types, namely ionic, covalent and
metallic. The position assumed by the bond electrons during the formation of the bond
forms the basis for distinguishing the different types of bonds. Van der Waals and hydrogen
bonds are typical examples of secondary bonds.
The strength of a bond is best measured by the energy required to break it, or the
amount of heat energy required to vapourize the solid, whereby the atoms are separated to an
infinite distance. The melting and boiling points also depend on the strength of the bonds in the
substance: the stronger the bonds, the higher the melting and boiling points.
An attractive inter-atomic force exists between atoms to hold them together. This
force is also responsible for formation of a crystal. This means that the energy of the
crystal is lower than that of free atoms of it by an amount equal to the energy required to
pull the atoms in crystal apart, so that, the atoms are set free. This is called binding
energy or cohesive energy of the crystal.
When a solid is compressed repulsive forces come into play while attractive
forces keep the atoms together. The stored energy or potential energy of a material is sum
of the individual energies of atoms in addition to their interaction energy.
Assume that the atoms consist of moving electric charges which may attract or
repel each other as they come close. The potential energy due to the attraction is negative,
as the atoms do the work of attraction, while the repulsive energy is positive, since
external work is done in bringing the atoms together, and it is inversely proportional to
some power of the inter-atomic separation x. The net potential energy is the sum of these two
terms.
Suppose the bonding force between two atoms P and Q, separated by a distance x and exerting
attractive and repulsive forces on each other, is given by (Fig. 1.1)

F(x) = A/x^M - B/x^N, where N > M    (1.1)

Fig. 1.1. Two atoms separated by a distance x.

A, B, M and N are constants whose values depend on the nature of the molecule. The first
term is the attractive force while the second term represents the repulsive force.
At the equilibrium position, i.e., at x = x0, F(x) = 0:

0 = A/x0^M - B/x0^N

Hence A/x0^M = B/x0^N, so that

x0^(N - M) = B/A,  or  x0 = (B/A)^(1/(N - M))
Module 1B. Cohesive Energy
The bonding force F(x) between two atoms P and Q is given by equation (1.1), that is,

F(x) = A/x^M - B/x^N, where N > M

The potential energy is obtained by integrating the force:

U(x) = ∫ F(x) dx = ∫ (A/x^M - B/x^N) dx

     = ∫ (A x^(-M) - B x^(-N)) dx

     = A x^(1-M)/(1 - M) - B x^(1-N)/(1 - N) + C,  where C is the constant of integration

     = - [A/(M - 1)] (1/x^(M-1)) + [B/(N - 1)] (1/x^(N-1)) + C

U(x) = - a/x^m + b/x^n + C    (1.2)

where a = A/(M - 1); b = B/(N - 1); m = M - 1 and n = N - 1.
When x = ∞, U(x) = 0. Therefore, C = 0.
Hence,

U(x) = - a/x^m + b/x^n    (1.3)

Here x is the distance between the centres of the two atoms; a is a positive constant which
determines the strength of the attractive force, while b also is a positive constant, which
determines the strength of the repulsive force; m and n are positive numbers.
Fig. 1.2. (a) Force versus distance x between the atoms. (b) Energy versus distance x.

For a particular value of x = x0, U(x) will be minimum and the two atoms form a
stable lattice, i.e., a molecule. The spacing x0 between the atoms is then known as the
equilibrium spacing of the system. U(x) will be minimum only when n > m.

Umin = - a/x0^m + b/x0^n    (1.4)
Differentiating the above equation and setting the derivative to zero at x = x0,

dU/dx at x = x0:  ma/x0^(m+1) - nb/x0^(n+1) = 0

Hence,

ma/x0^(m+1) = nb/x0^(n+1)

(b/a)(n/m) = x0^(n+1)/x0^(m+1) = x0^(n+1-m-1) = x0^(n-m)

x0^n = x0^m (b/a)(n/m)    (1.5)

Substituting (1.5) in (1.4),

Umin = - a/x0^m + b/x0^n = - a/x0^m + (a/x0^m)(m/n)

or

Umin = - (a/x0^m)(1 - m/n)    (1.6)
Hence, it may be concluded that the repulsive forces are due to the overlap of the outer electronic
shells of atoms, molecules or ions approaching each other, while the forces of attraction
are due to the interaction between the outer electrons of the atoms. This results in the formation
of a substantially stable aggregate - an independent molecule. Strong electric fields, high
temperature or mechanical strain may cause dissociation of the atoms.
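The relations (1.5) and (1.6) are easy to check numerically. The short Python sketch below uses arbitrary illustrative constants a, b, m and n (not values from the text) and confirms that the x0 obtained from dU/dx = 0 really minimizes U(x) = -a/x^m + b/x^n, with U(x0) equal to -(a/x0^m)(1 - m/n):

```python
# Numerical check of eqs. (1.5)-(1.6); a, b, m, n are illustrative values only.
a, b = 1.0, 2.0      # strengths of attraction and repulsion (assumed)
m, n = 2, 10         # exponents, with n > m

U = lambda x: -a / x**m + b / x**n

# From dU/dx = 0:  x0^(n-m) = (b/a)(n/m)
x0 = ((b / a) * (n / m)) ** (1.0 / (n - m))
U_min = -(a / x0**m) * (1 - m / n)           # eq. (1.6)

assert abs(U(x0) - U_min) < 1e-9             # formula agrees with direct evaluation
assert U(0.99 * x0) > U_min and U(1.01 * x0) > U_min   # a genuine minimum
print(x0, U_min)
```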
Module 1D. Cohesive energy of Sodium Chloride (NaCl) Crystal
A typical example of an ionic bond is the bond between the positive sodium ion
and negative chloride ion of sodium chloride (NaCl) crystal.
The energy required to remove the outer electron from the Na atom (the ionization energy
of sodium) is 5.1 eV, and the sodium atom becomes an Na+ ion.
It can be written as

Na + 5.1 eV → Na+ + e-    (1.7)

When this electron is removed from the Na atom and then added to the Cl atom, 3.6 eV
of energy is released, as the electron affinity of chlorine is 3.6 eV. The chlorine atom
becomes a Cl- ion.

Cl + e- → Cl- + 3.6 eV    (1.8)

Thus, to create a positive sodium ion and a negative chlorine ion at infinity, a net
amount of 5.1 eV - 3.6 eV = 1.5 eV is spent. That is,

Na + Cl + 1.5 eV → Na+ + Cl-
The electrostatic attraction between Na + and Cl – ions brings them together to the
equilibrium spacing where the potential energy will be minimum. The energy released in
the formation of NaCl molecule is called bond energy or cohesive energy of the
molecule.
The force between the ions at the equilibrium separation x0 is

F = - e^2 / (4πε0 x0^2)

and the corresponding potential energy is the cohesive energy between Na+ and Cl-:

- e^2 / (4πε0 x0) = - 6 eV

Thus, the entire process of formation of the NaCl molecule starting from neutral Na and Cl
atoms results in the evolution of an energy of 6 - 1.5 = 4.5 eV.
So it can be said that an amount of 4.5 eV is required to dissociate the NaCl molecule into
neutral Na and Cl atoms.
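The energy bookkeeping above can be sketched in a few lines of Python. The equilibrium spacing x0 = 2.4 Å used below is an assumed value, chosen because it makes the Coulomb term come out near the 6 eV quoted in the text:

```python
# Sketch of the NaCl energy balance; x0 is an assumed spacing, not from the text.
import math

e = 1.6e-19         # electronic charge, C
eps0 = 8.854e-12    # permittivity of free space, F/m
x0 = 2.4e-10        # assumed Na+/Cl- equilibrium spacing, m

coulomb_eV = e / (4 * math.pi * eps0 * x0)   # |U| = e^2/(4*pi*eps0*x0), in eV
cost_eV = 5.1 - 3.6    # ionization energy of Na minus electron affinity of Cl

net_eV = coulomb_eV - cost_eV    # energy released starting from neutral atoms
print(round(coulomb_eV, 1), round(net_eV, 1))   # close to 6.0 and 4.5
```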
Solved Problems
= 28.8 Å
Example 3: The potential energy of a diatomic molecule in terms of the interatomic
separation x is given by
U(x) = - A/x^2 + B/x^10

where A = 1.44 x 10^-39 J m^2 and B = 2.19 x 10^-115 J m^10. Calculate the
equilibrium spacing xe and the dissociation energy.
Solution:
Given, A = 1.44 x 10^-39 J m^2 and B = 2.19 x 10^-115 J m^10.
Comparing with equation (1.3), the exponents are m = 2 and n = 10.

dU/dx at x = xe:  2A/xe^3 - 10B/xe^11 = 0,  and this gives us

xe = (5B/A)^(1/8) = (5 x 2.19 x 10^-115 / 1.44 x 10^-39)^(1/8) = 4.08 x 10^-10 m

Now the dissociation energy can be obtained as

D = (A/xe^2)(1 - m/n) = 4A/(5 xe^2) = 4 x 1.44 x 10^-39 / [5 x (4.08 x 10^-10)^2] joules

  = 4 x 1.44 x 10^-39 / [5 x (4.08 x 10^-10)^2 x 1.6 x 10^-19] eV = 4.33 x 10^-2 eV
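The arithmetic of Example 3 can be verified directly (a Python sketch using the constants given in the problem):

```python
# Check of Example 3: equilibrium spacing and dissociation energy.
A = 1.44e-39      # J m^2, attractive-term constant
B = 2.19e-115     # J m^10, repulsive-term constant

xe = (5 * B / A) ** (1 / 8)     # from dU/dx = 0
D = (4 * A) / (5 * xe**2)       # dissociation energy, J
D_eV = D / 1.6e-19

print(xe, D_eV)   # close to 4.08e-10 m and 4.33e-2 eV
```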
Example 4: Calculate the potential energy of the system of Na + and Cl – ions when they
are at a distance of 2 Å.
Solution:
Given xe = 2 Å = 2 x 10^-10 m.
At the equilibrium distance the potential energy is minimum, and for one NaCl molecule it is
given by

U = - e^2/(4πε0 xe) joules = - e/(4πε0 xe) eV

  = - (1.6 x 10^-19) / (4π x 8.85 x 10^-12 x 2 x 10^-10) eV ≈ - 7.2 eV
INFORMATION REQUIRED FOR QUIZ
The type of bond determines the physical, chemical and other properties of solids.
Chemical bonds are of primary and secondary types according to their strength and
directionality.
Primary bonds are of three types namely ionic, covalent and metallic
The strength of bond is measured by the amount of energy required to break it.
Melting and boiling points of substances depend on the strength of the bond:
the stronger the bond, the higher the melting and boiling points.
Cohesive energy or binding energy is the energy required to pull the atoms in a
crystal apart so that they are set free.
The potential energy due to attraction is negative, as the atoms do work in the process
of attraction.
Repulsive energy is positive, as external work is done in bringing the atoms together.
The bonding energy or cohesive energy may also be defined as the energy of
formation of one kmol of a substance from its atoms or ions.
Electron volt is the unit for energy and is defined as the amount of work done in
moving an electron through a potential difference of 1 volt
IMPORTANT FORMULAE
x0 = (B/A)^(1/(N - M))
m = M - 1, n = N - 1
Module 2
Principles of
Quantum Mechanics
Module 2A. Introduction
Light exhibits phenomena like interference, diffraction, polarization, the photoelectric
effect, the Compton effect, and discrete emission and absorption. The phenomena of
interference, diffraction and polarization can be well explained on the basis of the wave
theory of light. These properties show that light possesses a wave nature. On the other
hand, the photoelectric effect, the Compton effect and discrete emission and
absorption can be explained only on the basis of the quantum theory of light. According to
quantum theory, light is propagated in small packets or bundles, each of energy hν. These
packets are called photons or quanta and behave like corpuscles, i.e., particles. Thus, the
latter phenomena, like the photoelectric effect, Compton effect etc., indicate the particle or
corpuscular nature of light. So, it is evident that light behaves like a wave as well as like a
particle. This is the dual nature possessed by light.
Louis de Broglie, in 1923, proposed that the idea of the dual nature of matter should
be extended to all micro particles like electrons, protons, etc. The wavelength of the
wave associated with a moving particle is

λ = h/(mv) = h/p    (2.1)

where h is Planck's constant (h = 6.626 x 10^-34 J s) and p = mv is the momentum of the
particle.
Module 2B. Derivation of deBroglie equation for matter wave
Let us consider a material particle such as an electron associated with a standing
wave system in the region of space occupied by the particle. Further, let χ be the quantity
that undergoes periodic changes giving rise to the matter wave; χ is then a periodic
function of the time t' measured in the frame of the particle.
According to the theory of relativity, from the inverse Lorentz transformation, the time

t = (t' + vx'/c^2) / √(1 - v^2/c^2)    (2.3)

where v is the velocity of the particle in the +x direction and c is the velocity of light. The
phase velocity u and the frequency ν of the associated wave then follow as

u = c^2/v  and  ν = 1/(T √(1 - v^2/c^2))    (2.6)

Einstein's mass-energy relation is given by

E = m0 c^2    (2.7)

Also, E = hν    (2.8)

Equating the two in the rest frame of the particle, hν0 = m0 c^2, so that

ν0 = m0 c^2 / h    (2.9)

For the moving particle,

ν = m0 c^2 / [h (1 - v^2/c^2)^(1/2)] = m c^2 / h,  as m = m0 / (1 - v^2/c^2)^(1/2)

The wavelength of the associated wave is therefore

λ = velocity/frequency = u/ν = (c^2/v) / (m c^2/h)

λ = h/(mv)  -  the de Broglie equation    (2.10)

Thus a material particle of mass m moving with velocity v has associated with it a wave
whose wavelength is given by λ = h/(mv).
Case 1:
In the non-relativistic case (v << c), the kinetic energy is

E_K = (1/2) mv²

or 2E_K = mv² = m²v²/m

so m²v² = 2mE_K,  or  mv = √(2mE_K)    (2.11)

Therefore, the de Broglie wavelength of a particle of kinetic energy E_K is

λ = h/(mv) = h/√(2mE_K)    (2.12)

Case 2:
In the case of a charged particle carrying a charge q and accelerated through a
potential difference of V volts, the kinetic energy is E_K = qV, so that

λ = h/√(2mqV)    (2.13)

Case 3:
Similarly, in the case of a material particle (say a neutron) in thermal equilibrium at
temperature T, possessing a Maxwellian distribution of velocities (v_rms), the kinetic energy
is given as

E_K = (1/2) m v_rms² = (3/2) kT

where k is Boltzmann's constant, k = 1.38 x 10^-23 joule/K.
Hence, the de Broglie wavelength for a material particle at temperature T is given as

λ = h/√(2mE_K) = h/√(2m · (3/2)kT)

λ = h/√(3mkT)    (2.14)

It is to be noted that a material particle in motion involves two types of velocities, namely
the velocity v, which refers to the mechanical motion of the particle, and u, which refers
to the propagation of the associated wave. u and v are related as u = c²/v.
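As a numerical illustration of Cases 2 and 3 (a Python sketch; the 100 V accelerating voltage and the 300 K temperature are arbitrary illustrative choices):

```python
# de Broglie wavelengths for an accelerated electron and a thermal neutron.
import math

h = 6.626e-34     # Planck's constant, J s
k = 1.38e-23      # Boltzmann's constant, J/K
m_e = 9.11e-31    # electron mass, kg
m_n = 1.675e-27   # neutron mass, kg
q = 1.6e-19       # electronic charge, C

# Case 2: electron accelerated through V volts, E_K = qV
V = 100.0
lam_e = h / math.sqrt(2 * m_e * q * V)

# Case 3: neutron in thermal equilibrium at T, E_K = (3/2) kT  (eq. 2.14)
T = 300.0
lam_n = h / math.sqrt(3 * m_n * k * T)

print(lam_e, lam_n)   # both of the order of 1 angstrom
```

Both wavelengths come out of the order of an angstrom, which is why crystal lattices can act as diffraction gratings for electron and neutron beams.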
Module 2C. Davisson and Germer’s Experiment
Two American physicists, Davisson and Germer, were the first to show experimental
evidence for the wave-like properties of a beam of material particles. They also
succeeded in measuring the de Broglie wavelength for slow electrons accelerated through
a potential difference.
There is a selective reflection depending upon the velocity of the incident electrons. Keeping the
positions of the source and chamber fixed, if the velocity of the electrons is gradually increased,
it is observed that the number of electrons reaching the collector follows the curve shown
in figure 2.2. This selective reflection of electrons is similar to that of X-rays. The
selective reflections can well be explained if it is assumed that the electrons are
associated with waves whose wavelength varies with velocity in accordance with the de
Broglie equation λ = h/(mv). The selective reflection may be explained using Bragg's law,
i.e., 2d sinθ = nλ, where n is the order of diffraction and d is the lattice spacing of the nickel
crystal. With this experimental arrangement it is possible to determine the wavelength of the
electron wave using Bragg's law. This value was in good agreement with that given by the
de Broglie matter-wave equation. This experiment is thus definite evidence for the wave
behaviour of an electron beam.
Fig. 2.3 (a) G. P. Thomson's experimental setup (b) Diffraction pattern of electrons
The metallic foil consists of a random distribution of metallic crystals. The crystals
that happen to lie at the proper angle scatter the electrons in accordance with Bragg's law. Due to the intersection
of the cone of diffraction with the photographic plate, circular rings are produced. To verify
that the pattern is produced only by the diffraction of electrons, and not by secondary
X-rays generated by the electrons in crossing the metal foil, a magnet is brought near the beam of
electrons to deflect them. It is found that the diffraction pattern shifts as the electron beam is
deflected by the magnet. If the pattern were due to X-rays, no such shift in the diffraction
pattern would be expected.
The wavelength of the electron wave can be estimated using the diffraction rings
obtained with the electron beam. The wavelength is found to be independent of the material of the
foil and to depend only on the velocity of the electrons, in accordance with λ = h/(mv),
where mv is the momentum of the particle and h is Planck's constant. The wave is called a
matter wave; the electron beam thus exhibits dual nature.
Let χ(r, t) be the wave displacement for the de Broglie wave at any location

r = i x + j y + k z

The wave equation (of the same form as Maxwell's wave equation) is

∇²χ = (1/u²) ∂²χ/∂t²    (2.15)

where ∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z² is the Laplacian operator and u is the wave velocity.
The solution to equation (2.15) is

χ(r, t) = χ0(r) e^(-iωt)    (2.16)

where χ0 is the amplitude at the point considered.
Differentiating equation (2.16) with respect to time t,

∂χ/∂t = -iω χ0 e^(-iωt)

Again differentiating with respect to time,

∂²χ/∂t² = (-iω)(-iω) χ0 e^(-iωt) = i²ω² χ0 e^(-iωt) = -ω² χ(r, t),  as i² = -1 since i = √(-1)    (2.17)

Substituting equation (2.17) in equation (2.15),

∇²χ = (1/u²) [-ω² χ(r, t)]

∇²χ + (ω²/u²) χ = 0.  Also, ω = 2πν = 2πu/λ, as u = νλ, so that ω²/u² = 4π²/λ², and

∇²χ + (4π²/λ²) χ = 0    (2.18)

Now, introducing the wave-mechanical concept,

λ = h/(mv)
The equation (2.18) can be written as

∇²χ + (4π²m²v²/h²) χ = 0    (2.19)

The total energy of the particle is

E = P.E. + K.E. = V + (1/2)mv²

so (1/2)mv² = E - V

mv² = 2(E - V),  and hence  m²v² = 2m(E - V)

Equation (2.19) can therefore be written as

∇²χ + [4π² · 2m(E - V)/h²] χ = 0

or ∇²χ + (8π²m/h²)(E - V) χ = 0    (2.20)

If ħ = h/(2π), then Schroedinger's wave equation is

∇²χ + (2m/ħ²)(E - V) χ = 0

For a free particle V = 0, and

∇²χ + (2mE/ħ²) χ = 0
Module 2F. Physical significance of wave function
Schroedinger himself interpreted the physical significance of χ in terms of charge
density. As electrons exhibit the diffraction phenomenon, like X-rays, one can use an optical
analogy to arrive at the physical significance of χ.
If χ is the amplitude of the matter wave at any point in space, then the particle density,
i.e., the number of material particles per unit volume, must be proportional to |χ|², just as the
photon density is proportional to A², where A is the amplitude of the light wave.
If q is the electric charge on a particle, the charge density is equal to the product of the charge q
and the particle density. Thus the quantity q|χ|² is a measure of the charge density. Though this
interpretation leads to very satisfactory results in many cases, like the directional distribution
of photoelectrons, Compton scattering, the stable states of Bohr's atom, the emission of spectral
lines etc., it could not yield fruitful results in cases like the flight of a single particle. So,
in order to justify such results, Max Born proposed yet another physical interpretation,
later developed by Bohr, Dirac, Heisenberg and others. According to this, |χ|² represents the
probability density of the particle in the state χ. The probability of finding the
particle in a volume element dv = dx dy dz about any point r and time t is then given as

P dv = |χ(r, t)|² dx dy dz

The function χ is sometimes called the probability amplitude for the position of the particle.
Particle in a box: Consider a particle confined to a one-dimensional box of width L, for which

V(x) = 0 for 0 < x < L
V(x) = ∞ for x ≤ 0 or x ≥ L

Inside the box the Schroedinger equation becomes

d²χ/dx² + (8π²mE/h²) χ = 0

or d²χ/dx² + k²χ = 0,  where k² = 8π²mE/h²

The solution of this equation is

χ(x) = A sin kx + B cos kx

where A and B are constants whose values are determined by the boundary conditions.
At x = 0, χ(x) = 0, so B = 0.
At x = L, χ(x) = 0, so A sin kL = 0.
That is, kL = nπ, or k = nπ/L.
Since k² = 8π²mE/h², we have

E = k²h²/(8π²m)

and, as k = nπ/L,

E_n = (nπ/L)² h²/(8π²m)

or E_n = n²h²/(8mL²),  where n = 1, 2, 3, . . .    (Fig. 2.4. Energy level diagram)

This indicates that the energy of a particle in a box can take only the discrete values given by
n = 1, 2, 3, . . ., or, in other words, the energy is quantized. These discrete energies are
called eigenvalues and n is called the quantum number.
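The quantized levels E_n = n²h²/(8mL²) are easy to evaluate; the sketch below uses an electron in a box of width L = 1 nm (an arbitrary illustrative choice):

```python
# Energy levels of an electron in a 1-D box (L = 1 nm is illustrative).
h = 6.626e-34    # Planck's constant, J s
m = 9.11e-31     # electron mass, kg
L = 1e-9         # box width, m (assumed)
eV = 1.6e-19

def E_n(n):
    """Energy of the n-th level, in eV."""
    return n**2 * h**2 / (8 * m * L**2) / eV

# The levels grow as n^2: E_2 = 4 E_1, E_3 = 9 E_1, ...
print([round(E_n(n), 2) for n in (1, 2, 3)])
```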
Eigenvalues and Eigenfunctions
An acceptable wave function χ must satisfy the following conditions:
1. χ is single valued
2. χ and its first derivatives are continuous
3. χ is finite
The set of acceptable values of a physical property, like energy, that follow from the
eigenfunctions are called eigenvalues.
For the particle in a box,

χ_n(x) = A sin(nπx/L) for 0 < x < L
       = 0 otherwise

As the electron should lie within the box, the probability of finding it in the box must be
unity:

∫0^L |χ_n|² dx = A² ∫0^L sin²(nπx/L) dx = 1

A² ∫0^L (1/2)[1 - cos(2nπx/L)] dx = 1

(A²/2) [x - (L/2nπ) sin(2nπx/L)] evaluated from 0 to L = 1

(A²/2) L = 1,  or  A = √(2/L)

Therefore, the normalized solution is

χ_n(x) = √(2/L) sin(nπx/L)

The first three eigenfunctions χ1, χ2 and χ3 and their probability densities are as shown
in figure 2.4.
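The normalization constant A = √(2/L) can be confirmed by numerical integration (a sketch; L = 2 and n = 3 are arbitrary choices):

```python
# Midpoint-rule check that the integral of |chi_n|^2 over [0, L] equals 1.
import math

L, n, N = 2.0, 3, 100_000     # box width, quantum number, grid points
dx = L / N

chi = lambda x: math.sqrt(2 / L) * math.sin(n * math.pi * x / L)

total = sum(chi((i + 0.5) * dx) ** 2 * dx for i in range(N))
print(round(total, 6))   # -> 1.0
```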
Solved Problems
Therefore, E = hc/λ = (6.62 x 10^-34) x (3 x 10^8) / 10^-10 = 19.86 x 10^-16 joule = 1.24 x 10^4 eV.
[1 eV = 1.6 x 10^-19 joules]
Also, E = kT. T = 27°C = (273 + 27) K = 300 K and k = 1.38 x 10^-23 joule per K,
h = 6.62 x 10^-34 joule-sec, m = 1.67 x 10^-27 kg.
Therefore, λ = h / √(2mkT)
= (6.62 x 10^-34) / √(2 x 1.67 x 10^-27 x 1.38 x 10^-23 x 300)
= 1.77 x 10^-10 m = 1.77 Å
Example 3: What voltage must be applied to an electron microscope to produce electrons
of wavelength 0.50 of Å ?
Solution:
The de Broglie wavelength is given by λ = h/(mv) = h/√(2mE).
Also, E = eV, where V is the voltage in volts. Therefore, λ = h/√(2meV), and

V = h² / (2 m e λ²)

Here h = 6.62 x 10^-34 joule-sec, m = 9.1 x 10^-31 kg, e = 1.6 x 10^-19 coulomb. So
V = (6.62 x 10^-34)² / [(0.5 x 10^-10)² x 2 x 9.1 x 10^-31 x 1.6 x 10^-19] ≈ 602 volts.
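The result of Example 3 can be verified with a one-line computation (Python, constants as in the solution):

```python
# Accelerating voltage for 0.50 angstrom electrons, V = h^2/(2 m e lambda^2).
h = 6.62e-34     # J s (value used in the solution)
m = 9.1e-31      # electron mass, kg
e = 1.6e-19      # C
lam = 0.5e-10    # desired wavelength, m

V = h**2 / (2 * m * e * lam**2)
print(round(V))   # -> 602
```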
INFORMATION REQUIRED FOR QUIZ
Matter behaves like a particle and a wave – called dual nature of matter – proposed
by Louis de Broglie
In the G. P. Thomson experiment the wavelength of the electron wave is estimated using the
diffraction rings obtained with the electron beam.
If χ is the amplitude of the matter wave at any point in space, the number of material
particles per unit volume must be proportional to |χ|².
IMPORTANT FORMULAE
t = (t' + vx'/c²) / (1 - v²/c²)^(1/2)
E = mc², where E = hν
λ = h / (3mkT)^(1/2)
Module 3
Magnetic Properties of Solids
Module 3A. Introduction
The word magnetism comes from the district of Magnesia, in Asia Minor – the
place where certain stones (magnetite) were found to attract small iron pieces. The earth
is a natural magnet. The area around a magnet within which its influence is felt is called
magnetic field. The basic magnetic field vector is magnetic induction B.
The magnetic properties of substances are due to the motion of electrons and the
permanent magnetic moments of the atoms and electrons. Diamagnetism is a weak effect
and arises from changes in the atomic orbital states under the influence of the applied
field. Paramagnetism occurs when the magnetic moments are randomly oriented while
ferromagnetism occurs when the magnetic moments are aligned in the same direction.
This is a very strong magnetic effect. Ferrimagnetism arises when two or more types of
moments are present and directed oppositely and complete cancellation of moments does
not take place. Finally, antiferromagnetism occurs when adjacent magnetic moments are
oppositely directed so that complete cancellation of magnetic moment takes place. Above
a certain critical temperature, ferro, antiferro and ferrimagnetic substances are converted
in to paramagnetic substances.
Magnetic Induction B
Magnetic induction B may be defined as the total number of lines of force per unit
area due both to the magnetizing field and to the induced magnetism in the substance.
The unit of B is weber per square metre (tesla).
Intensity of magnetisation (I)
Intensity of magnetisation is the magnetic moment acquired per unit volume. It can also be
defined as pole strength per unit area. Its unit is A m^-1.
Magnetic permeability (μ) for a medium may be defined as the ratio of the number of
lines of magnetic induction per unit area (i.e., flux density) in that medium to the
number of lines per unit area present when the medium is replaced by vacuum.
Some atoms and molecules even in the absence of externally applied magnetic
field possess inherently some permanent magnetic moment. These moments are
randomly oriented with respect to one another so that net magnetic moment is zero.
When external magnetic field is applied, the atomic magnetic moments align themselves
along the direction of applied field, thereby intensifying the lines of force in the field
direction. This leads to paramagnetism. However, the aligning forces are weak and hence
the paramagnetic effect is weak. The susceptibility is small and positive in this case. It
decreases with increasing temperature, as thermal energy tends to make the alignment
random.
In some substances the magnetic moments are already aligned by nature due to
bonding forces. These substances are ferromagnetic substances. The susceptibility is
positive and very large. They strongly attract the magnetic lines of force of the applied
field.
Module 3C. Magnetic Moment due to Electron Spin & Bohr Magneton
The magnetic moment due to electron spin is taken as one unit, called the Bohr magneton μB:
μB = eh/4πm
where e and m are the charge and mass of an electron respectively and h is Planck's
constant.
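As a quick numerical check of the formula above, the sketch below evaluates μB = eh/4πm with standard values of the constants (the values themselves are not from the text):

```python
import math

# Numerical check of the Bohr magneton, mu_B = e*h/(4*pi*m),
# using standard values for the electron charge, Planck's constant
# and the electron mass.
e = 1.602e-19   # electron charge, C
h = 6.626e-34   # Planck's constant, J s
m = 9.109e-31   # electron mass, kg

mu_B = e * h / (4 * math.pi * m)
print(mu_B)  # ~9.27e-24 A m^2
```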
Also, it is to be noted that the net magnetic moment of two electrons of opposite spins is
zero. Atoms or molecules which have quantum states all of which have paired electrons
have zero net magnetic moment. Still there will be a number of atoms or molecules
having unpaired electrons. The unpaired electrons can align themselves in an applied
field, giving rise to paramagnetism. The order of filling of electron orbitals in an atom is
given by Hund's rule. Hund's rule states that the number of electrons of the same spin in
p, d and f states should be maximum so as to reduce electron-electron repulsion.
An atom with three electrons in the p-orbital will have all three spins aligned,
giving rise to a net magnetic moment of three Bohr magnetons. On the other hand, an
atom with four electrons in the p-orbital will have a magnetic moment of two Bohr magnetons,
as the fourth electron's spin is opposite to that of the first and hence cancels. Similarly, an atom
with five electrons in the d-orbital has a net magnetic moment equal to five Bohr magnetons.
An atom with nine electrons in the d-orbital will have a net magnetic moment of one
unit, and so on.
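The counting described in the paragraph above can be sketched as a small function (the per-spin capacities 3, 5 and 7 for p, d and f follow from the number of orbitals in each subshell; the function name is illustrative):

```python
# Net spin moment (in Bohr magnetons) from Hund's-rule filling: electrons
# first occupy one spin direction (capacity 3 for p, 5 for d, 7 for f);
# any further electrons pair up and cancel one moment each.
CAPACITY = {"p": 3, "d": 5, "f": 7}

def net_moment_bohr_magnetons(subshell: str, electrons: int) -> int:
    c = CAPACITY[subshell]
    return electrons if electrons <= c else 2 * c - electrons

print(net_moment_bohr_magnetons("p", 3))  # 3
print(net_moment_bohr_magnetons("p", 4))  # 2
print(net_moment_bohr_magnetons("d", 5))  # 5
print(net_moment_bohr_magnetons("d", 9))  # 1
```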
The magnetization of a solid is the sum of the magnetic moments in unit volume
of the solid.
Module 3D. Classification of Magnetic Materials
Magnetic properties of materials can be studied by grouping them under the
categories as
1. Diamagnetism
2. Paramagnetism
3. Ferromagnetism, Antiferromagnetism and Ferrimagnetism
The magnetic moment of an atom arises from three components:
1. Spin of the electron
2. Angular momentum of the electron about the nucleus
3. Change induced in the orbital moment by an applied magnetic field
The first two components give rise to paramagnetism, while the third
component gives rise to diamagnetism. Ferromagnetism is due to alignment of magnetic
moments in the same direction.
Ferrimagnetism is due to excess alignment of magnetic moments in one direction than the
other.
However, the distinction between para-, ferro-, antiferro- and ferrimagnetism
disappears at a higher temperature called the critical temperature. At this temperature all
these substances change to paramagnetic substances. The magnetic susceptibility of the
materials differentiates the substances.
Langevin’s theory
All magnetic moments within some atoms balance each other and as a result the
net magnetic moment is zero. When atoms are surrounded by a magnetic field extra
current is generated by induction in the atom. These induced currents oppose the
increasing field as per Lenz’s law. Hence, the induced magnetic moments are opposite to
the magnetic field.
The orbit of a moving electron is a current loop. In the presence of an external
field B, uniform precession of the orbit about the line of magnetic field takes place. There
is super-position of precession and orbital currents which gives rise to additional current.
Then,
As electrons are negatively charged, the moment direction is opposite to that of the field.
This gives rise to negative susceptibility.
Larmor Frequency
For an electron in a circular orbit of radius r, the equation of motion gives
mrω0² = e²/r²
ω0 = (e²/mr³)^½
If Be/m << 4e²/mr³, then
ω = (e²/mr³)^½ − eB/2m
That is, ω = ω0 − ωP
Here, ωP is the precessional frequency. That is, ωP = Be/2m
Thus the presence of B alters ω0 to ω, and the minus sign shows that moments parallel to B are
slowed while antiparallel ones are accelerated. This is Larmor's theorem.
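As a numeric illustration of the precessional frequency ωP = eB/2m, the sketch below evaluates it for an assumed field of 1 T (the field value is an example, not from the text):

```python
# Larmor precessional (angular) frequency omega_P = e*B/(2m) for an
# electron in an assumed field B = 1 T.
e = 1.602e-19   # electron charge, C
m = 9.109e-31   # electron mass, kg
B = 1.0         # magnetic induction, T (assumed example value)

omega_P = e * B / (2 * m)
print(omega_P)  # ~8.8e10 rad/s
```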
Langevin’s Theory
According to Langevin, the state of magnetisation of a paramagnetic substance depends upon:
1. the applied magnetic field, as it tends to align the magnetic axes along its
direction,
2. the thermal agitation, and
3. the number of molecules having orientation inclined at an angle θ to the
given line of reference, which is proportional to sin θ dθ.
Origin
In order to minimize the dipole energy, the spins line up and leave uncompensated
magnetic poles on the surface of the specimen, which produce a magnetic field in the
surroundings. The magnetostatic energy outside the specimen, due to the presence of
magnetic poles at the surface, is
Ed = (1/2μ0) ∫ B² dv
where the integral extends over all space outside the specimen.
Size
Magneto static energy is reduced due to the formation of domain. The sizes of
domains are determined to minimize the energy. The interface between two adjacent
domains magnetized in different direction is called Bloch wall. The change over from one
domain direction to another takes place gradually over several atomic planes and not
abruptly. Anisotropy increases if the width of Bloch wall increases.
Total Eex = Jπ²S²/N
Thus large N increases the thickness of the wall and hence the anisotropy energy.
Bi ∝ I
Bi = λI, where λ is the Weiss constant.
Atomic magnets of ferromagnetic substance are grouped into certain regions called
domains. In the absence of magnetic field these domains form closed chains. These chains
breakup with application of magnetic field and domains gradually set themselves with
their magnetic axes pointing in the field direction. Thus ferromagnetism is a crystal
phenomenon. Uncompensated electron spins (e.g., for iron, four uncancelled spins) give rise
to large magnetic moments. Just like an orbital electron, the spinning electron has
mechanical angular momentum and a magnetic moment.
The effective field strength Be may be regarded as the vector sum of external field
strength B and the internal molecular field strength B i.
Be = B + Bi = B + λI
Considering one gram molecule of the substance, if ρ is the density, M the molecular weight,
and σ and σ0 the gram molecular magnetic moment and its saturation value respectively,
then I = σρ/M and IS = σ0ρ/M, since I and IS refer to unit volume.
Therefore,
I/IS = σ/σ0
As the domains obey the general theory of paramagnetism, we have
I/IS = σ/σ0 = coth a − 1/a
where a = μBe/kT
Fig. 3.1. The Langevin curve (σ/σ0 plotted against a).
When the external field is zero,
Be = λI = λσρ/M
a = μλσρ/kTM = (σ0/N)·(λσρ/kTM)
or a = λσσ0ρ/RTM
or σ/σ0 = aRTM/λρσ0²
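The Langevin function coth a − 1/a appearing above can be evaluated directly; the sketch below checks its small-a behaviour (it approaches a/3) and its approach to saturation at large a:

```python
import math

# The Langevin function L(a) = coth(a) - 1/a. For small a it behaves
# like a/3; for large a it tends to 1, i.e. I approaches I_S.
def langevin(a: float) -> float:
    return 1.0 / math.tanh(a) - 1.0 / a

print(langevin(0.1))   # close to 0.1/3
print(langevin(10.0))  # approaching saturation
```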
Below the Curie point, in the absence of the external field, the domains are
spontaneously magnetized to a degree depending on temperature, approaching the saturation
value as the temperature approaches absolute zero.
According to R. Becker , there are two independent processes by which this may
happen:
1. By growth of size of domains which are favourably oriented with respect to
magnetic field at the expense of less favourably oriented domains and
2. by rotation of directions of the magnetization along the direction of applied
magnetic field. This is also known as Barkhausen effect.
The principle of minimum free energy in thermodynamics explains the logical origin of
domains. For stable configuration of a solid, the free energy E-TS tends to reach a
minimum. At low temperatures i.e., below curie temperature the entropy S decreases.
Therefore, the term TS is negligible. This minimizes the energy which is sufficient for the
purpose of stable configuration.
Fig.3.3. The origin of domains.
In the case of a single domain (i.e., saturation magnetization), free poles exist at the
surface and an external field is produced around it (Fig. 3.3).
If the single domain is subdivided into two equal domains, the former single pair of poles
is replaced by two pairs of opposite poles lying very close together, so the field is partially
confined close to the specimen. Consequently the magnetic field energy is reduced to about
one-half of its previous value. This configuration is more stable (Fig. 3.3).
This process of subdivision may be carried further to n domains, and the
magnetic field energy will be reduced to one-nth of its previous value. But such division
cannot be continued indefinitely, as it involves the creation of domain walls, and a situation
arises when any further decrease in magnetic field energy is smaller than the accompanying
increase of domain-wall energy.
Thus it is evident that the domain structure has its origin in the principle of
minimum energy.
other direction along BCD. Further, reduction of H to zero and then increasing H again
results in the curve along DEFA, thus completing one cycle of operation. The area of
the loop indicates the energy involved in one cycle of operation.
Applications
Linear and non-linear parts of M-H curves are sometimes chosen for audio
amplifiers, the generation of harmonics, etc.
Ferrites have wide applications as they yield higher efficiency, low cost, smaller
volume, greater uniformity and ease of manufacture.
Non-linear B-H applications include flyback transformers and deflection yokes, carrier
power transformers, choke coils, recording heads, memory and switching cores, and
magnetic amplifiers.
Substituted lithium ferrites and ferrite powders such as γ-Fe2O3 are used for
computer memory elements and tapes respectively.
Module 3L. Anti Ferromagnetism
When the distance between interacting atoms is very small the exchange forces
produce a tendency to antiparallel alignment of the neighbouring spin dipole moments.
Such systems were first investigated theoretically by Neel and Bitter.
Consider a crystal containing two types of atoms, A and B. Let the B atoms occupy the
corner points of an elementary cube, with the A atoms located at the centres of these cubes.
Let the interaction between the atoms be such that the A spins tend to line up antiparallel to
the B spins. At low temperature this interaction is very effective, and in an external magnetic
field the resulting magnetization is very small. As the temperature is raised, the interaction
becomes less efficient and the susceptibility increases, until finally a temperature TN (the Néel
temperature) is reached, above which the spins are free and the material shows paramagnetism.
When neutrons are incident on a crystal (MnO) they are scattered by atomic nuclei
and also by interaction between neutron spin and paramagnetic ions present. As a result
the ordered antiferromagnetic state gives rise to extra diffraction lines, the intensity of
which decreases with increase in temperature. This is because antiferromagnetic order
diminishes.
Atoms A and B are distributed over interlocking lattices: atoms B occupy the lattice
points and atoms A the body centres. Hence all nearest neighbours of an A atom are B atoms
and vice versa. Let there be an antiferromagnetic A-B interaction as well as A-A and B-B
interactions.
If Ha and Hb are the net molecular fields at an A site and a B site respectively, we have
Ha = H − αMa − βMb
Hb = H − αMb − βMa
where α and β are the molecular field constants.
Soft magnetic materials have a small coercive force and high permeability and can
be easily magnetized or demagnetized. These materials are used as transformer cores and
in magnetic switching circuits. They are also used in magnetic amplifiers.
Hard or permanent magnetic materials are used as strong magnets due to their
ability to retain magnetic fields. Therefore, a large coercive force and high residual
induction are desirable for providing adequate magnetic fields by magnets and also for
retaining this field against adverse conditions.
A hard magnetic material describes a hysteresis loop with a large area; the loop
is almost square.
Solved Problems
Example 1: An iron ring of 1 m mean circumference is made from an iron specimen of
cross-section 24 cm². Its relative permeability is 500. It is wound with 500 turns.
What current will be required to produce a flux of 2.4 × 10⁵ lines?
Solution:
Here, flux Φ = 2.4 × 10⁵ lines = 2.4 × 10⁻³ Wb (as 10⁸ lines = 1 Wb)
Relative permeability μr = 500
Flux density B = Φ/A = 2.4 × 10⁻³/(24 × 10⁻⁴) = 1 Wb/m²
H = B/μ0μr = 1/(4π × 10⁻⁷ × 500) = 1591 AT/m
Current I = Hl/N = 1591 × 1/500 ≈ 3.18 A
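The arithmetic of Example 1 can be verified with a short script (the final current step assumes the standard relation NI = Hl for a ring):

```python
import math

# Check of Example 1: B = flux/area, H = B/(mu0*mu_r), I = H*l/N.
flux = 2.4e-3            # Wb
area = 24e-4             # m^2
mu_r = 500
mu0 = 4 * math.pi * 1e-7
turns, length = 500, 1.0  # turns, mean circumference in m

B = flux / area               # flux density, Wb/m^2
H = B / (mu0 * mu_r)          # magnetising field, AT/m
current = H * length / turns  # required current, A
print(B, H, current)
```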
Example 2: A paramagnetic salt contains 10²⁸ ions/m³, each with a magnetic moment of 1 Bohr
magneton. Calculate (a) the paramagnetic susceptibility and (b) the
magnetization produced in a uniform magnetic field of 10⁶ A/m, at room
temperature.
Solution:
Magnetic moment, μ = μB = 9.27 × 10⁻²⁴ A·m²
χ = μ0Nμ²/3kT = (4π × 10⁻⁷ × 10²⁸ × (9.27 × 10⁻²⁴)²)/(3 × 1.38 × 10⁻²³ × 300)
χ = 0.87 × 10⁻⁴
Magnetization M = χH = 0.87 × 10⁻⁴ × 10⁶ = 87 A/m
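The susceptibility in Example 2 follows from the classical Langevin result χ = μ0Nμ²/3kT; the sketch below reproduces the number, assuming room temperature T = 300 K:

```python
import math

# Check of Example 2 using chi = mu0*N*mu^2/(3kT), T = 300 K assumed.
mu0 = 4 * math.pi * 1e-7
N = 1e28          # ions/m^3
mu = 9.27e-24     # A m^2 (one Bohr magneton)
k, T = 1.38e-23, 300.0
H = 1e6           # applied field, A/m

chi = mu0 * N * mu**2 / (3 * k * T)  # paramagnetic susceptibility
M = chi * H                          # magnetization, A/m
print(chi, M)
```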
Example 3: The magnetic field intensity in copper is 10⁶ A/m. If the magnetic susceptibility of
copper is −0.8 × 10⁻⁵, calculate the flux density and magnetization in copper.
Solution:
H = 10⁶ A/m, χ = −0.8 × 10⁻⁵, μ0 = 4π × 10⁻⁷ H/m
Magnetization I = χH = −0.8 × 10⁻⁵ × 10⁶ = −8 A/m
B = μ0(H + I) = 4π × 10⁻⁷ × (10⁶ − 8) ≈ 1.26 T
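A quick check of Example 3 using I = χH and B = μ0(H + I):

```python
import math

# Check of Example 3: magnetization I = chi*H, flux density B = mu0*(H + I).
mu0 = 4 * math.pi * 1e-7
H = 1e6            # A/m
chi = -0.8e-5

I = chi * H        # magnetization (negative: diamagnetic)
B = mu0 * (H + I)  # flux density, T
print(I, B)
```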
INFORMATION REQUIRED FOR QUIZ
The area around a magnet within which its influence is felt is called magnetic field.
The basic magnetic field vector is magnetic induction B.
The magnetic properties of substances are due to the motion of electrons and the
permanent magnetic moments of the atoms and electrons.
Diamagnetism is a weak effect and arises from changes in atomic orbital states under
the influence of the applied field.
Paramagnetism occurs when the magnetic moments are randomly oriented.
Ferromagnetism occurs when the magnetic moments are aligned in the same
direction.
Ferrimagnetism arises when two or more types of moments are present and directed
oppositely and complete cancellation of moments does not take place.
Anti ferromagnetism occurs when adjacent magnetic moments are oppositely directed
so that complete cancellation of magnetic moments takes place.
Above a certain critical temperature, ferro, antiferro and ferrimagnetic substances are
converted into paramagnetic substances.
Magnetic induction B may be defined as the total number of lines of force per unit
area due both to the magnetising field and to the induced magnetism in the substance.
Magnetic intensity (Magnetising force) is the force experienced by a unit north pole
placed at a point in the field.
Intensity of magnetisation is the magnetic moment per unit volume of a substance.
Magnetic Permeability for a medium may be defined as the ratio of number of lines
of magnetic induction per unit area (i.e., flux density) in that medium to the number
of lines per unit area present when the medium is replaced by vacuum.
For free space, μ0 = 4π × 10⁻⁷ henry/m
Magnetic susceptibility is a measure of the ease with which a material may be
magnetised by a magnetising force. It may also be defined as the ratio of the intensity
of magnetisation (I) to the magnetic intensity (H), i.e.,
χ = I/H
When an atom is subjected to a magnetic field, the motion of orbital electron gets
modified in such a way that a weak magnetic moment opposing the applied magnetic
field is induced. This results in diamagnetism.
The diamagnetic susceptibility is negative and very small in magnitude.
Some atoms and molecules, even in the absence of externally applied magnetic field,
possess inherently some permanent magnetic moment. These moments are randomly
oriented with respect to each other so that the net magnetic moment is zero.
When external magnetic field is applied, the atomic magnetic moments align
themselves along the direction of applied field, thereby intensifying the lines of force
in the field direction. This leads to paramagnetism.
The aligning forces are weak and hence the paramagnetic effect is weak.
The susceptibility is small and positive in paramagnetism. It decreases with
increasing temperature as thermal energy tends to make the alignment random.
In some substances the magnetic moments are already aligned by nature due to
bonding forces. These substances are ferromagnetic substances.
The susceptibility is positive and very large for ferromagnetism. They strongly
attract the magnetic lines of force of the applied field.
The spinning electron creates its own magnetic field.
The magnetic moment of an electron spin is taken as one unit called the Bohr
magneton μB
μB = eh/4πm
μB = 9.3 × 10⁻²⁴ A·m²
The net magnetic moment of two electrons of opposite spins is zero.
Atoms or molecules which have quantum states, all of which have paired electrons
have zero net magnetic moment.
The unpaired electrons can align themselves in an applied field giving rise to
paramagnetism.
The order of filling of electron orbitals in an atom is given by Hund’s rule.
Hund’s rule states that the number of electrons of the same spin in p, d and f states
should be maximum so as to reduce electron-electron repulsion.
The magnetisation of a solid is the sum of the magnetic moments in unit volume of
the solid.
All magnetic moments within some atoms balance each other and as a result the net
magnetic moment is zero.
When atoms are surrounded by a magnetic field extra current is generated by
induction in the atom.
The induced magnetic moments are opposite to the magnetic field.
The orbit of a moving electron is a current loop.
In the presence of an external field B, uniform precession of the orbit about the line of
magnetic field takes place.
Super-position of precession and orbital currents give rise to additional current.
Since electrons are negatively charged, the moment direction is opposite to that of the field.
This gives rise to negative susceptibility.
Larmor's theorem states that the presence of B alters ω0 to ω; the minus sign shows that
moments parallel to B are slowed while antiparallel ones are accelerated.
Paramagnetism is found to exist in atoms, molecules and lattice possessing an odd
number of electrons, when total spin does not vanish.
Paramagnetism arises from atoms, molecules and ions with a net magnetic dipole
moment.
According to Langevin, each atom or molecule of a paramagnetic substance is
considered to be a small permanent magnet due to circulating electrons.
When a magnetic field is applied, the state of magnetisation depends on the applied
magnetic field, the thermal agitation, and the number of molecules having orientation
inclined at an angle θ to a given line of reference, which is proportional to sin θ dθ.
The atomic magnetic dipole interactions, being of long range, are responsible for the
formation of magnetic domains.
The ferromagnetic material contains a large number of small regions known as
domains before magnetisation is induced by an external field.
The domains are magnetised to saturation individually but arrange themselves in such
a manner that the net magnetic moment of the specimen is zero.
Magnetostatic energy is reduced due to the formation of domains.
The sizes of domains are determined to minimize the energy.
The interface between two adjacent domains magnetised in different directions is
called a Bloch wall.
The change-over from one domain direction to another takes place gradually over
several atomic planes and not abruptly.
Anisotropy increases if the width of the Bloch wall increases.
Metals like Fe, Co and Ni (transition metals) exhibit magnetisation even when the
magnetising field is removed. This phenomenon is known as Ferromagnetism.
According to Weiss theory there exists an internal molecular field (Bi) which favours
spontaneous magnetisation of ferromagnetic materials.
Bi = λI
Atomic magnets of ferromagnetic substance are grouped into certain regions called
domains.
Ferromagnetism is a crystal phenomenon.
Uncompensated electron spins (e.g., for iron, four uncancelled spins) give rise to large
magnetic moments.
Just like an orbital electron, the spinning electron has mechanical angular momentum
and magnetic moment.
The spinning electrons make a major contribution to magnetic properties of
ferromagnetics.
Below curie point, in the absence of the external field, the domains are spontaneously
magnetised to a degree depending on temperature approaching saturation value, as the
temperature approaches absolute zero.
Above curie point, spontaneous magnetisation no longer occurs, the ferromagnetic
properties disappear and the substance becomes paramagnetic.
According to Becker, magnetisation appears even when a small magnetic field is
applied.
The magnetisation appears even when small magnetic field is applied due to growth
of size of domains which are favourably oriented with respect to magnetic field at the
expense of less favourably oriented domain.
The principle of minimum free energy in thermodynamics explains the logical origin
of domains.
For stable configuration of a solid, the free energy E-TS tends to reach a minimum.
Below curie temperature the entropy S decreases and the term TS is negligible. This
minimises the energy which is sufficient for the purpose of stable configuration.
Free poles exist in case of single domain.
It is evident that the domain structure has its origin in the principle of minimum
energy.
When a ferromagnetic specimen is placed in a magnetising field H, magnetisation
increases nonlinearly and reaches saturation.
On reducing the field strength, the path is not retraced: even at H = 0, M is not zero
but equal to Mr. This is called residual magnetisation.
In hysteresis, the lagging of M behind H is significant: as H attains zero value, a finite
amount M = Mr is left behind instead of becoming zero along with H. This lagging of
the effect behind its cause is hysteresis.
To make M = 0, the applied field H should be reversed and increased till H = Hc,
known as the coercivity.
The area of hysteresis loop indicates the energy involved in one cycle of operation.
Ferromagnetic materials are used as electrical and electronic components basing on
their hysteresis or M-H behaviour.
Transformers are used for various important functions, with ferromagnetic core.
The core material is so chosen as to have small retentivity and coercivity. This
increases their efficiency.
In case of permanent magnets, ferromagnetic substances with large values of
retentivity and coercivity are chosen.
Linear and non linear parts of M-H curves are chosen for audio amplifier and
generation of harmonics etc.
Ferrimagnetic materials are salts of some of the transition metals, particularly those
which crystallize in the spinel structure and contain one of the known ferromagnetic
elements. These are called ferrites.
The chemical formula of ferrites can be written as Me²⁺Fe₂³⁺O₄²⁻
Ferrites have wide applications as they yield higher efficiency, low cost, smaller
volume, greater uniformity and ease of manufacture.
Ferrites are in use as filter inductors, IF transformers, antenna cores, adjustable
inductors, tuners, miniature inductors and loading coils. All of these are linear B-H
applications.
Non-linear B-H applications include flyback transformers and deflection yokes, carrier
power transformers, choke coils, recording heads, memory and switching cores, and
magnetic amplifiers.
Substituted lithium ferrites and ferrite powder are used for computer memory
elements and tapes respectively.
Thermal sensing ferrites are used as switches in refrigerators, air conditioners,
electronic ovens and fire detecting systems.
Copper-Cobalt-Nickel ferrites are used in magneto-strictive oscillators that are used
in medical and industrial fields.
Experimentally antiferromagnetism was first discovered as a property of MnO by
Bizette.
Susceptibility shows a maximum as a function of temperature in antiferromagnetic
substances.
Substances which can be easily magnetised due to their small coercive force are called
soft magnetic materials.
Substances whose coercive force is large and whose resistance to the movement of
the domain walls is also large are called hard magnetic materials.
Soft magnetic materials have small coercive force and high permeability and can be
easily magnetised or demagnetised.
Hard or permanent magnetic materials are used as strong magnets due to their ability
to retain magnetic fields.
Large coercive force and high residual induction are desirable for providing adequate
magnetic fields by magnets and also for retaining this field against adverse conditions.
To attain saturation magnetisation in the material the most important requirement is
low permeability and a large magnetising force.
The hysteresis curve gives more important information with regard to the
requirements for the magnetic materials.
The soft magnetic materials are characterized by steeply ascending magnetisation
curve.
A hard magnetic material describes a hysteresis loop with a large area; the loop is
almost square.
IMPORTANT FORMULAE
χ = I/H
ω0 = (e²/mr³)^½
ωP = Be/2m
ω = ω0 − ωP
Ed = (1/2μ0) ∫ B² dv
Bi = λI, λ = Weiss constant
I/IS = σ/σ0
a = λσσ0ρ/RTM
Ha = H − αMa − βMb
Hb = H − αMb − βMa
MODULE 4
Semiconductors
Module 4A. Introduction
As isolated atoms are brought together to form a solid, various interactions occur
between neighbouring atoms. As a result of this interaction, the higher energy levels of
the outer shells are slightly altered, of course without violating Pauli's exclusion
principle. There will be a splitting of the single energy level of an isolated atom into a large
number of energy levels. Since in a solid very many atoms are brought together, the
separations between these sublevels are so small that the split energy levels are almost
continuous and form an energy band. Each energy band corresponds to an energy level in
the isolated atom. An electron can have discrete energy levels lying within these energy
bands. These bands, called allowed energy bands, are generally separated by gaps which
contain no allowed or permitted energy levels. These gaps are known as forbidden energy
gaps or bands. The energy band occupied by the valence electrons is the valence band;
the electrons that leave the valence band are called conduction electrons and are very
weakly bound to the nucleus. The band occupied by these electrons is called the
conduction band.
Module 4B. Intrinsic Semiconductor
A semiconductor in extremely pure form is known as an intrinsic semiconductor. In
an intrinsic semiconductor, hole-electron pairs are created even at room temperature.
When an external electric field is applied, current conduction due to both holes and
electrons takes place.
Carrier Concentration
ve = μeE
vh = μhE (4.1)
where μe and μh are the electron and hole mobilities.
Therefore,
g(T) = r(T) = K ni pi (4.2)
where ni and pi are intrinsic electron and hole densities respectively while the
proportionality constant K takes care of the charge densities and the nature of
semiconductor.
and since ni = pi,
ni pi = ni² (4.4)
ni2 has a fixed value at a given temperature and depends on the nature of the material,
because the densities ni and pi are inherent property of a semiconductor at a given
temperature.
In the intrinsic semiconductor germanium, the energy gap is 0.75 eV. At the usual
ambient temperature only about 5 atoms out of 10¹⁰ germanium atoms have broken valence
bonds that can contribute to intrinsic conduction, but this number increases rapidly
with temperature. Since in an intrinsic semiconductor ni or pi depends strongly on
temperature, it is necessary to limit the operating temperature of germanium to 85-100 °C
and of silicon to 190-200 °C. The larger energy gap of silicon is responsible for its higher
permissible temperature.
Generally, the impurities added to germanium and silicon are trivalent atoms (boron,
gallium, indium) or pentavalent atoms (arsenic, antimony, phosphorus).
When a semiconductor is doped with a pentavalent atom like arsenic, four of its valence
electrons form four covalent bonds with neighbouring semiconductor atoms, while the fifth
electron of the arsenic impurity atom is free to conduct current. A semiconductor doped with
pentavalent substances is thus called an n-type semiconductor. Similarly, when a
semiconductor is doped with a trivalent impurity, say indium, three of its electrons
share covalent bonds with neighbouring semiconductor atoms, while the fourth covalent
bond is missing one electron. This missing electron (a hole) behaves like a
positive charge. Hence, a semiconductor doped with a trivalent substance is a p-type
semiconductor.
Carrier Concentration
np pp = ni2 (4.5)
p + ND = n + NA (4.6)
p = (NA − ND)/2 + [(NA − ND)²/4 + ni²]^½ (4.9)
n = (ND − NA)/2 + [(ND − NA)²/4 + ni²]^½ (4.10)
Equations (4.9) and (4.10) are general expressions for the charge densities.
Case I: for an intrinsic semiconductor, NA = ND = 0, so
p = ni and n = ni
That is, p = n = ni
Case II: for an n-type semiconductor,
ND >> ni and NA = 0; then the hole density is
pn = [−ND + ND(1 + 4ni²/ND²)^½]/2
The term 4ni²/ND² is << 1; therefore, expanding as a power series and neglecting higher
powers,
pn ≈ −ND/2 + (ND/2)(1 + 2ni²/ND²)
pn ≈ ni²/ND (4.11)
Similarly, for a p-type semiconductor,
np ≈ ni²/NA (4.13)
and
pp ≈ NA (4.14)
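The approximations (4.11)-(4.14) are easy to evaluate; the sketch below uses assumed silicon-like values for ni and the donor density ND (both illustrative, not from the text):

```python
# Majority and minority carrier densities in an n-type sample,
# using the approximations p_n ~ ni^2/N_D and n_n ~ N_D.
ni = 1.5e16      # intrinsic carrier density, m^-3 (assumed)
N_D = 1e22       # donor density, m^-3 (assumed)

n_n = N_D        # majority electrons
p_n = ni**2 / N_D  # minority holes, eq. (4.11)
print(n_n, p_n)
```

Note how strongly doping suppresses the minority density: here p_n is twelve orders of magnitude below n_n.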
Module 4D. Minority Carrier Life Time
Consider a semiconductor containing a relatively high concentration of donor
levels, so that the conductivity is essentially due to electrons in the conduction band (n-type).
The electrons are then called the majority carriers. There are always some holes in the
valence band as a result of thermal excitation of electrons from the filled band, but at
not too high temperatures the number of holes is relatively small. In this case, the holes are
called minority carriers.
The net current that passes through a semiconductor material has two components
– drift current and diffusion current.
Carrier drift
In a perfect crystal the periodic electric field of the lattice enables electrons and holes to
move freely. In the absence of an externally applied electric field, the random motion of
free carriers within a crystal does not result in a net transfer of charge, since charge
movement in any direction is balanced by charge movement in the opposite direction. When
a voltage is applied across the material, each carrier experiences a force attracting it to one
end of the material: electrons are attracted by the positive potential and holes by the negative
potential. This net movement of charge is termed drift; it is superimposed on the
random thermal motion and results in current flow through the specimen.
For n electrons per unit volume, each of charge −q, the current density is
Jn = In/A = (1/A) Σᵢ(−qvᵢ) = −q n vn (4.15)
But vn ∝ E; vn = μnE
where μn is the proportionality constant, called the electron mobility. Hence
Jn = q n μn E (4.16)
Jn = q n n E (4.16)
A similar argument applies to holes. Taking the charge on the hole to be positive, its
current density is
Jp = q p μp E (4.17)
The total current flowing in the semiconductor specimen due to the applied electric field
E can be written as
J = Jn + Jp
J = (q n μn + q p μp)E
Thus,
σ = q n μn + q p μp (4.18)
The above equation reveals that contributions of electron and hole to electrical
conductivity are simply additive.
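The additivity in equation (4.18) can be illustrated numerically; the carrier densities and mobilities below are assumed, roughly silicon-like values, not figures from the text:

```python
# Conductivity sigma = q*(n*mu_n + p*mu_p), eq. (4.18), for an
# intrinsic-like sample with assumed densities and mobilities.
q = 1.602e-19               # electron charge, C
n = p = 1.5e16              # carrier densities, m^-3 (assumed)
mu_n, mu_p = 0.135, 0.048   # mobilities, m^2/(V s) (assumed)

sigma = q * (n * mu_n + p * mu_p)  # conductivity, S/m
print(sigma)
```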
Generally, in extrinsic semiconductors only one of the charge carriers, either electrons
or holes, is significant because of the very large difference between the two carrier
densities. Therefore, σ ≈ q n μn for an n-type semiconductor and σ ≈ q p μp for a
p-type semiconductor.
Carrier diffusion
Another important current component can exist if there is a spatial variation
of carrier concentration in the semiconductor material: the carriers tend to move
from a region of high concentration to a region of low concentration. This current
component is called the diffusion current.
The average rate of electron flow per unit area, F1, of electrons crossing the plane at x = 0
from the left is
F1 = ½ n(−l)·l/τc = ½ n(−l)vth (4.20)
since l/τc = vth, where τc is the mean free time.
In a similar manner, the average rate of electron flow per unit area, F2, of electrons
at x = l crossing the plane at x = 0 from the right is
F2 = ½ n(l)vth (4.21)
The net flow is
F = F1 − F2
F = ½ vth[n(−l) − n(l)] (4.22)
Approximating the densities at x = ±l by the first two terms of a Taylor series expansion,
F = ½ vth[(n(0) − l dn/dx) − (n(0) + l dn/dx)]
F = −vth l dn/dx = −Dn dn/dx (4.23)
Since each electron carries a charge −q, the carrier flow gives rise to the current
Jn = −qF = qDn dn/dx (4.24)
The above equation for the diffusion current reveals the following facts:
1. The diffusion current is proportional to the spatial derivative of the electron
density; that is, Jn ∝ dn/dx.
2. For an electron density that increases with x, the gradient is positive, and electrons
diffuse in the negative x-direction. The current is then positive and flows in the
direction opposite to that of the electrons, as shown in fig. 4.3.
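The diffusion-current expression Jn = q Dn dn/dx of Eq. (4.24) can be checked numerically. The carrier profile and the diffusivity below are assumed illustrative values:

```python
# Diffusion current density J_n = q * D_n * dn/dx, Eq. (4.24).
# The exponential carrier profile and D_n are assumed for illustration.
import math

q = 1.6e-19       # electron charge, C
D_n = 35e-4       # electron diffusivity, m^2/s (~35 cm^2/s, typical for Si)

def n_profile(x):
    """Assumed electron density decaying over 1 micron, per m^3."""
    return 1e21 * math.exp(-x / 1e-6)

# Numerical derivative dn/dx at x = 0 (central difference)
h = 1e-9
dndx = (n_profile(h) - n_profile(-h)) / (2 * h)

J_n = q * D_n * dndx
print(f"dn/dx = {dndx:.3e} /m^4, J_n = {J_n:.3e} A/m^2")
```

For this decaying profile the gradient is negative, so the electron diffusion current is negative, consistent with fact 2 above.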
Module 4F. Einstein relation
The Einstein relation connects two important constants, namely the diffusivity and the
mobility, which characterize carrier transport by diffusion and by drift in a semiconductor.
It can be obtained using the theorem for the equipartition of energy in the
one-dimensional case. That is,
(1/2) mn vth² = (1/2) kT (4.25)
Writing the diffusion current in terms of the mobility,
Jn = q Dn dn/dx = q μn (kT/q) dn/dx (4.26)
Therefore,
Dn = (kT/q) μn (4.27)
This is known as Einstein's relation. A similar relation holds between Dp and μp.
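A small numerical sketch of the Einstein relation (4.27); the mobility is an assumed typical value, and kT/q evaluates to about 0.026 V at 300 K:

```python
# Einstein relation D_n = (kT/q) * mu_n, Eq. (4.27).
# The mobility is an assumed, roughly-silicon value.
k = 1.38e-23      # Boltzmann constant, J/K
q = 1.6e-19       # electron charge, C
T = 300.0         # temperature, K
mu_n = 0.135      # electron mobility, m^2/(V.s) (assumed)

thermal_voltage = k * T / q          # kT/q, ~0.026 V at 300 K
D_n = thermal_voltage * mu_n         # diffusivity, m^2/s
print(f"kT/q = {thermal_voltage:.4f} V, D_n = {D_n:.2e} m^2/s")
```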
Now, consider the overall effect when drift, diffusion and recombination occur
simultaneously in a semiconductor material. The governing equation is called the
continuity equation.
Fig. 4.4: A slice of the semiconductor between x and x + dx.
The number of electrons flowing into the slice at x per unit time = Jn(x) A / (-q)
The number of electrons flowing out of the slice at x + dx per unit time = Jn(x + dx) A / (-q)
The rate of change of the electron number in the slice is therefore
(dn/dt) A dx = Jn(x) A / (-q) - Jn(x + dx) A / (-q) + (Gn - Rn) A dx (4.28)
where A is the cross-sectional area and A dx is the volume of the slice. Gn and Rn are the
generation and recombination rates respectively.
Expanding Jn(x + dx) in a Taylor series,
Jn(x + dx) = Jn(x) + (∂Jn/∂x) dx + . . .
Thus the basic continuity equation for electrons is
∂n/∂t = (1/q) ∂Jn/∂x + (Gn - Rn) (4.29)
A similar continuity equation can be derived for holes:
∂p/∂t = - (1/q) ∂Jp/∂x + (Gp - Rp) (4.30)
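In the simplest case of a uniform sample with no current gradient, the continuity equation (4.29) reduces to dn/dt = Gn - Rn. The sketch below integrates this reduced form for an assumed recombination law Rn = (n - n0)/τ and compares the result with the exponential solution:

```python
# Continuity equation (4.29) in the spatially uniform case:
# dn/dt = G_n - R_n, with an assumed recombination law R_n = (n - n0)/tau.
# All numbers are illustrative, not taken from the text.
import math

n0 = 1e21        # equilibrium electron density, /m^3
tau = 1e-6       # recombination lifetime, s
G = 0.0          # generation switched off at t = 0

n = 2e21         # excess carriers injected at t = 0
dt = 1e-9        # time step, s
t = 0.0
for _ in range(1000):                  # integrate out to t = one lifetime
    n += dt * (G - (n - n0) / tau)     # forward-Euler step of dn/dt
    t += dt

exact = n0 + (2e21 - n0) * math.exp(-t / tau)
print(f"numeric n = {n:.4e}, exact n = {exact:.4e}")
```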
Module 4H. Hall Effect
E. H. Hall discovered the Hall effect in 1879. According to the Hall effect, when a magnetic
field is applied perpendicular to a current-carrying conductor, a potential difference is
developed between the points on the opposite sides of the conductor. The voltage so
developed is called the Hall voltage. The Hall effect gives information about the sign of the
charge carriers in an electric conductor.
Consider a uniform thick metal strip placed with its length along X-axis. Let i be the
current passing through the conductor along the X-axis and a magnetic field B be
established along Y-axis. As a charged particle experiences a sideways force in a
magnetic field, the force (F) experienced by charge carriers in the conductor will be
perpendicular to the plane XY i.e. along Z-axis. The direction of the force is given by
Fleming’s left hand rule. So, if the charge carriers are electrons, then they experience a
force in positive Z-direction and get accumulated on the upper surface (face MNRS) of
the metal strip. Hence, the upper face is charged negatively while the lower face is charged
positively. Thus a transverse potential difference is developed. This emf (electromotive
force) is known as the Hall emf. However, for positive charges like holes the sign of the emf is
reversed. The Hall emf, which can be measured using a potentiometer, thus helps in
determining the nature of the charge carriers. In this way it is shown that electrons are the
charge carriers in metals, while holes conduct the current in p-type semiconductors.
The displacement of charges due to the force they experience, gives rise to a transverse
electric field known as Hall electric field EH which opposes the sideway drift of the
electric charges. When equilibrium is reached, the forces due to electric and magnetic
fields on the charges will be balanced.
If q is the charge and vd its drift speed in the magnetic field B, then the magnetic deflecting
force is
FB = q (vd × B) (4.31)
As the net force acting on the charge is zero at equilibrium,
q (vd × B) + q EH = 0
or EH = - vd × B
i.e. EH = vd B (4.33)
Since the current density is j = n q vd, the drift speed is vd = j / (n q), so
EH = ( j / (n q) ) B (4.35)
If d is the width of the metal strip and VH the Hall voltage, then
EH = VH / d (4.36)
EH can be calculated by measuring the Hall voltage VH and the width of the metal strip.
If A is the area of cross-section of the strip, then the current density j can be calculated by
measuring the current i in the strip as
j = i / A (4.37)
Thus the value of 1/(n q) can be calculated by substituting the values of EH, j and B in
equation (4.35).
The ratio of the Hall electric field EH to the product of the current density j and the
magnetic induction B is called the Hall coefficient (RH):
RH = 1/(n q) = EH / (j B)
Unit of RH = volt·m / (ampere·weber/m²) = m³/coulomb
The sign of conducting charges can be determined by measuring the Hall Voltage.
The number of charge carriers per unit volume can be determined from the magnitude
of RH.
The nature of a semiconductor can be determined from the sign of the Hall coefficient, i.e.,
for an n-type semiconductor the Hall coefficient is negative, while it is positive for a p-type
semiconductor.
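The Hall-coefficient relations above can be turned into a short calculation that recovers the carrier density from measured quantities; all the measurement values here are assumed for illustration:

```python
# Carrier density from the Hall coefficient: R_H = 1/(n*q) = E_H/(j*B).
# Every measured quantity below is an assumed illustrative value.
q = 1.6e-19       # elementary charge, C
V_H = 2.0e-6      # measured Hall voltage, V (assumed)
d = 1.0e-2        # strip width, m (assumed)
i = 1.0           # current through the strip, A (assumed)
A = 1.0e-6        # cross-sectional area, m^2 (assumed)
B = 0.5           # magnetic flux density, T (assumed)

E_H = V_H / d               # Hall field, Eq. (4.36)
j = i / A                   # current density, Eq. (4.37)
R_H = E_H / (j * B)         # Hall coefficient, m^3/C
n = 1.0 / (R_H * q)         # carrier density, /m^3
print(f"R_H = {R_H:.3e} m^3/C, n = {n:.3e} /m^3")
```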
Hall effect can be used as the basis for the design of a magnetic flux density meter
as the Hall voltage is proportional to the magnetic flux density B for a given current.
Hall effect can also be used to determine the power flow in an electromagnetic wave.
As this process continues, the remaining molten mixture becomes increasingly rich
in indium. When all the germanium has been redeposited, the remaining material appears as
an indium button which freezes on the outer surface. This button then serves as a suitable
base for soldering the leads.
Consider a pn-junction. The free charge carriers in both p-type and n-type are
represented by + (hole) and – (electron) respectively. The circled charges (Fig. 4.6)
represent ionized impurities which do not move. In p-type, the acceptors are shown
negative. This is because thermal energy causes electrons to move from the valence band to
the empty levels of the acceptor atoms. The acceptor atoms are negatively charged because of
the presence of the extra electrons. Similarly, on the n side the donor impurity atoms are
positively charged because they have supplied electrons to the conduction band. On the
p side of the junction, there will be concentration of free holes, while on n side, electrons
will be in high concentration.
These charge carriers will try to spread out due to thermal agitations. Such type of
spreading out process due to scattering and random velocity is known as diffusion. Thus
holes diffuse over to the n side while electrons diffuse over to the p side of the pn-junction.
When a charge carrier penetrates through the junction it finds itself surrounded by charges
of the opposite type, i.e., holes that penetrate into the n-type region are surrounded by
electrons and vice versa. Such charges will tend to recombine with each other.
As the free electrons move across the junction from the n type to the p type, positive donor
ions are uncovered, i.e., they are robbed of their free electrons. Hence, a positive charge is
built up on the n side of the junction. At the same time, these free electrons cross the
junction and uncover the negative acceptor ions by filling in the holes. Therefore, a net
negative charge is established on the p side of the junction. This process continues till a
sufficient number of donors and acceptors is uncovered, after which further diffusion is
prevented. It is because now the positive charge on the n side repels holes from crossing
from p to n, and the negative charge on the p side repels free electrons from entering from
the n type into the p type. Thus a barrier is set up against further movement of charge
carriers, i.e., holes and electrons. This is called the potential barrier or junction barrier V0,
which is nearly 0.1 to 0.3 volt. This potential barrier gives rise to an electric field, which
prevents the respective majority carriers from crossing the barrier region. It is to be noted
that outside this barrier, on each side of the junction, the regions are still neutral; only
inside the barrier is there a positive charge on the n side and a negative charge on the
p side. This region is called the depletion region or depletion layer, so called because the
mobile free electrons and holes have been depleted, i.e., emptied, in this region.
(Fig. 4.7: forward bias. Fig. 4.8: reverse bias.)
Forward biasing
An external voltage is applied to the pn-junction such that it cancels the potential
barrier, thus permitting current flow across the junction. Such type of biasing is called
forward biasing.
Reverse Biasing
An external voltage is applied to the pn-junction such that it increases the
potential barrier and allows no current to flow across the junction. Such type of biasing is
called reverse biasing.
To connect pn-junction in reverse bias, the p-side of the junction is connected to
-ve and n-side is connected to +ve terminals of the battery B. This establishes an electric
field in the same direction as that of the potential barrier. This strengthens the resultant
field and the barrier height is increased. The increased potential barrier prevents the flow
of charge carriers across the junction. Thus a high resistance path is introduced for the
entire circuit and hence no current flows across the junction.
Module J. Width of Depletion Layer of the pn-junction
The width of the depletion layer depends on the
concentration of the impurities in the two regions. Let x1
and x2 be the widths of the space-charge region on the p
and n sides respectively. The effective width of the
depletion layer can easily be calculated using
Poisson's equation, which states that the second
derivative of the potential with respect to the distance
(d²V/dx²) is proportional to the charge density (ρ):
d²V/dx² = - ρ / ε (4.38)
In the p region the charge density is
ρ = - e Na (4.39)
where Na is the density of acceptor atoms and the negative sign indicates that the acceptor
atoms are negatively ionized. Hence
d²V/dx² = e Na / ε (4.40)
Integrating,
dV/dx = (e Na / ε) x + A (4.41)
Integrating once again,
V = (e Na / ε) x²/2 + A x + B (4.42)
where A and B are arbitrary constants which are determined by the boundary conditions.
As potentials are measured with respect to the potential at the boundary between the p
and n sides, V = 0 at x = 0, which gives B = 0. (4.43)
At x = - x1, V = constant, i.e., dV/dx = 0, so
0 = (e Na / ε)(- x1) + A
or A = (e Na / ε) x1 (4.44)
Substituting the values of A and B in equation (4.42),
V = (e Na / ε) x²/2 + (e Na x1 / ε) x (4.45)
At x = - x1, if V = - V1, then
- V1 = e Na x1² / (2 ε) - e Na x1² / ε
- V1 = (e Na x1² / ε) (1/2 - 1)
- V1 = - e Na x1² / (2 ε) or V1 = e Na x1² / (2 ε) (4.46)
Similarly, applying Poisson's equation to the n region, the potential V2 at x = x2, where
the depletion region ends, is
V2 = e Nd x2² / (2 ε) (4.47)
where Nd is the density of the donor atoms in the n region of the pn-junction.
The total barrier potential is
VB = V2 - (- V1) = V1 + V2
VB = e Na x1² / (2 ε) + e Nd x2² / (2 ε)
VB = (e / 2 ε) [ Na x1² + Nd x2² ] (4.48)
For charge neutrality, e Na x1 = e Nd x2
or x2 = Na x1 / Nd (4.49)
Substituting the x2 value from equation (4.49) in equation (4.48),
VB = (e / 2 ε) [ Na x1² + Na² x1² / Nd ]
VB = (e Na x1² / 2 ε) [ 1 + Na / Nd ]
x1 = [ 2 ε VB / ( e Na (1 + Na/Nd) ) ]½ (4.50)
Similarly, using equation (4.49),
x2 = (Na / Nd) [ 2 ε VB / ( e Na (1 + Na/Nd) ) ]½
x2 = [ (Na²/Nd²) · 2 ε VB / ( e Na (1 + Na/Nd) ) ]½
x2 = [ 2 ε VB / ( e (Nd²/Na) (1 + Na/Nd) ) ]½
x2 = [ 2 ε VB / ( e Nd (Nd/Na + 1) ) ]½
x2 = [ 2 ε VB / ( e Nd (1 + Nd/Na) ) ]½ (4.51)
The total width of the depletion region is x = x1 + x2:
x = [ 2 ε VB / ( e Na (1 + Na/Nd) ) ]½ + [ 2 ε VB / ( e Nd (1 + Nd/Na) ) ]½
x = [ 2 ε VB / ( e (Na + Nd) ) ]½ [ (Nd/Na)½ + (Na/Nd)½ ] (4.52)
x = [ 2 ε VB / ( e (Na + Nd) ) ]½ · (Na + Nd) / (Na Nd)½
x = [ 2 ε VB (Na + Nd) / ( e Na Nd ) ]½
This is the width of the depletion region, which can be determined by knowing the potential
barrier VB and the densities of donor and acceptor atoms in the n-type and p-type regions of
the pn-junction.
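A numerical sketch of the depletion-width result above, using assumed doping densities and barrier potential for a silicon-like junction (the permittivity value is likewise assumed):

```python
# Depletion width of a pn-junction:
# x = sqrt(2*eps*V_B*(Na+Nd)/(e*Na*Nd)), with the partial widths x1, x2
# from Eqs. (4.50) and (4.51). All parameter values are assumed.
import math

e = 1.6e-19                    # elementary charge, C
eps = 11.7 * 8.85e-12          # permittivity of silicon, F/m (assumed)
Na = 1e23                      # acceptor density, /m^3 (assumed)
Nd = 1e21                      # donor density, /m^3 (assumed)
V_B = 0.7                      # barrier potential, V (assumed)

x1 = math.sqrt(2 * eps * V_B / (e * Na * (1 + Na / Nd)))   # p-side width
x2 = math.sqrt(2 * eps * V_B / (e * Nd * (1 + Nd / Na)))   # n-side width
x = math.sqrt(2 * eps * V_B * (Na + Nd) / (e * Na * Nd))   # total width

# The heavily doped side contributes the smaller share of the width,
# and x1 + x2 should reproduce the closed-form total x.
print(f"x1 = {x1:.3e} m, x2 = {x2:.3e} m, x = {x:.3e} m")
```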
The increase in current with voltage can be seen from the curve OAB (Fig. 4.10).
The portion OA shows that at first the increase in current is very slow because of barrier
voltage of pn-junction. Once the applied voltage exceeds the barrier voltage, the increase
in current is almost linear (portion AB) of the curve. The pn-junction behaves like a
conductor and the rise in current is very sharp.
Similarly, the circuit is connected in reverse bias by connecting p-type to negative
terminal and n-type to positive terminal of the battery. The potential barrier increases in
this case and practically no current flows through the circuit. The junction resistance
becomes very high. However, due to minority charge carriers, a very small current, in
microamperes (μA), flows through the circuit. This current is called the reverse current.
It is known that a few free electrons in the p-type and holes in the n-type material are
available; these are called minority charge carriers. The applied reverse bias acts as a
forward bias for these minority charge carriers. As the applied voltage is increased
continuously, the kinetic energy of the electrons becomes very high and they knock out
electrons from the semiconductor atoms. At this stage breakdown of the junction takes
place. This is
characterized by sudden rise of reverse current and sudden fall of resistance of barrier
region and may destroy the pn-junction. The applied reverse voltage at which pn-junction
breaks down with sudden rise in reverse current is called breakdown voltage. But care
should be taken to see that the applied reverse bias is always less than breakdown voltage
to prevent the damage of pn-junction.
When the breakdown voltage is reached in this diode, the current increases
rapidly with additional voltage. As a result, a diode in breakdown maintains an almost
constant voltage across itself over a wide current range. Hence, the zener diode is used as a
voltage regulator.
Module M. Varactor (Varicap) Diode
It is known that the transition capacitance CT of a reverse-biased diode depends on
the width of the transition region, according to the equation
CT = ε A / W
where ε is the permittivity, A is the area of the junction, and W is the width of the transition
region.
The width of the transition region (W) in turn depends on the applied reverse
voltage. Therefore, the transition capacitance CT varies with the applied
voltage: the larger the reverse voltage, the greater is the width W of the transition region
and the smaller is the capacitance CT. The variation of CT with the applied reverse voltage for
a typical silicon diode is shown in fig.4.12.
Fig.4.12.
This voltage dependent nature of C T of a reverse biased pn-junction is utilized in
several cases such as
(1) voltage tuning of an L-C resonant circuit
(2) self balancing bridge circuits and
(3) parametric amplifiers
Special diodes are made for these applications, utilizing the voltage-variable property
of CT, and are called varactor diodes, varicaps or voltacaps.
The symbol and the circuit model for the varactor diode are shown in fig. 4.12.
In the circuit, RS and CT represent the ohmic resistance and the transition capacitance
of the varactor diode respectively. For an applied reverse voltage of about 5 volts, the
typical value of CT is 20 pF and the typical value of RS is 8 ohm. The resistance RT shunting
CT is large, greater than 1 megohm, and can be neglected in most cases.
In circuits, for high frequencies, CT must be small, failing which, even in the
reverse-biased condition, the diode permits an appreciable reverse current through CT. This
is undesirable, since in the reverse-bias condition a diode is expected to block the signal
passing through it.
In some semiconductors, such as gallium arsenide (Ga As) and gallium phosphide
(Ga P), the released energy is in the form of light. Now, consider GaAs diode. When
current is passed in the forward direction through the diode, visible light is emitted from
the junction region. This is due to the recombination of minority carriers. Hence, in a GaAs
diode, energy is released in the form of light when an electron jumps from the conduction
band to the valence band. Such a diode is called a light emitting diode (LED).
Fig. 4.13 shows a circuit in which a resistor is connected to LED in series. Current
is passed through LED. The series resistor R limits the current.
The intensity of emitted light from LED increases with the increase of current
passing through it. Typical voltage drop across the LED is about 1.6 V to 2 V. A LED
gives a reasonable amount of light even at low current, say 10 mA. Hence, LED has low
power consumption of 16 to 20 milliwatts.
The following are the merits of LEDs over conventional incandescent and other
types of lamps:
1. Low working voltages and currents
2. Less power consumption
3. No warm up time
4. Very fast action.
5. Emission of monochromatic light
6. Small size and weight
7. Not affected by mechanical vibrations
8. Less fragile than glass.
9. Extremely long life.
The important solar cells are pn- junction solar cell, heterojunction and interface
solar cells. However, a typical pn-junction solar cell is discussed here.
pn - junction solar cell
When the cell is exposed to the solar spectrum, a photon that has an energy less
than the band gap Eg makes no contribution to the cell output. For a photon that has an
energy greater than Eg, the energy in excess of Eg is wasted as heat. To derive the
conversion efficiency, consider the energy band diagram of a pn-junction under solar
radiation shown in Fig. 4.15a.
The equivalent circuit is shown in Fig. 4.15b, where a constant current source is
in parallel with the junction. The source I L results from the excitation of excess carriers
by solar radiation, Is is the diode saturation current and R L is the load resistance, then
the ideal V-I characteristic is given by
I = Is [ exp(qV/kT) - 1 ] - IL (4.53)
Fig.4.15
The curve passes through the fourth quadrant, and therefore power can be extracted from
the device.
Hence, for a given IL, the open-circuit voltage Voc increases with decreasing saturation
current Is. The maximum output power is Pm = Im Vm, where Im and Vm are the current and
voltage at the maximum-power point.
Conversion efficiency η = Im Vm / Pin = FF · IL Voc / Pin
where Pin is the incident power and FF is the fill factor, defined as
FF = Im Vm / (IL Voc)
For maximum efficiency, all three terms in the numerator (FF, IL and Voc) should have
maximum values.
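Setting I = 0 in Eq. (4.53) gives the open-circuit voltage Voc = (kT/q) ln(IL/Is + 1). The sketch below evaluates it; all cell parameters are assumed illustrative values:

```python
# Ideal solar-cell characteristic I = Is*(exp(qV/kT) - 1) - I_L, Eq. (4.53),
# and the open-circuit voltage that follows from I = 0.
# The saturation and light-generated currents are assumed values.
import math

kT_over_q = 0.0259     # thermal voltage at 300 K, V
I_s = 1e-9             # saturation current, A (assumed)
I_L = 0.1              # light-generated current, A (assumed)

def current(V):
    """Cell current at terminal voltage V, Eq. (4.53)."""
    return I_s * (math.exp(V / kT_over_q) - 1.0) - I_L

# Open-circuit voltage: I = 0  =>  V_oc = (kT/q) * ln(I_L/I_s + 1)
V_oc = kT_over_q * math.log(I_L / I_s + 1.0)
print(f"V_oc = {V_oc:.3f} V, I(V_oc) = {current(V_oc):.2e} A")
```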
Amorphous silicon (α-Si) is also a material for solar cells. Layers a few
microns thick are deposited by radio-frequency glow-discharge decomposition of silane
onto metal or glass substrates. The optical-absorption characteristics of α-Si give an
effective bandgap of 1.5 eV. Although the efficiency of α-Si solar cells (~10%) is lower
than that of single-crystal silicon solar cells, their production costs are considerably
lower. Therefore, the α-Si solar cell is one of the major devices for large-scale use of
solar energy.
Solved Problems
Solution:
Given: crystal Ge, impurity = 0.1 % arsenic, ni = 2.37 x 1019 /m³, density of
germanium atoms = 4.41 x 1028 /m³; electron and hole densities = ?
The donor density is ND = 0.001 x 4.41 x 1028 = 4.41 x 1025 /m³, so the electron density is
n ≈ ND = 4.41 x 1025 /m³.
The hole density is pn = ni² / ND = 5.62 x 1038 / (4.41 x 1025) = 1.27 x 1013 /m³.
Solution:
Given: me* = mh* = m, T = 300 K, Eg(A) = 0.36 eV and Eg(B) = 0.72 eV.
ni(A) / ni(B) = e^(-0.36/2kT) / e^(-0.72/2kT) = e^((0.72 - 0.36)/2kT) = e^(0.36/0.052) ≈ 1015
INFORMATION REQUIRED FOR QUIZ
Solid substances are classified as conductors, semiconductors and insulators
The resistivity of a semiconductor is less than that of insulator but more than that of a
conductor
In a solid, many atoms are brought together, so the split energy sub-levels are
superimposed on one another; the levels are then almost continuous and form an
energy band
The energy bands are separated by some gaps which are not permitted energy levels.
These gaps are called Forbidden energy gaps
Under the application of electric field, electrons move towards positive terminal while
holes move towards negative terminal.
The operating temperature of silicon is 190 to 200 °C
The diffusion current is proportional to the spatial derivative of the electron density
Module 5
Lasers
Module 5A. Introduction
The word LASER is an acronym for Light Amplification by Stimulated Emission
of Radiation.
Absorption
Initially an atom is in the ground state, i.e., all electrons possess the lowest possible
energy. When energy is supplied, i.e., when the atom is excited, an electron absorbs energy
and jumps to a higher energy level, say from E1 to E2. The frequency ν of the absorbed
radiation satisfies
E2 – E1 = hν or ν = (E2 – E1) / h
Here, B12 is constant of proportionality called as Einstein coefficient for absorption of
radiation.
Spontaneous emission
An atom remains in an excited state only for about 10⁻⁸ s; it then, of its own accord,
jumps to a lower energy level, emitting radiation. This process is called spontaneous
emission.
Stimulated emission
Module 5B. Einstein’s Coefficients
Consider an assembly of atoms in thermal equilibrium at temperature T in a radiation
field of energy density U(ν), where ν is the frequency of the radiation. The number of
atoms absorbing radiation per unit time is N1 B12 U(ν). Similarly, the number of atoms
emitting radiation per unit time (both spontaneous and stimulated) is
N2 A21 + N2 B21 U(ν). In equilibrium the two rates are equal, so
N1 B12 U(ν) = N2 A21 + N2 B21 U(ν)
U(ν) = N2 A21 / ( N1 B12 - N2 B21 )
Dividing the numerator and denominator by N2 B21,
U(ν) = (A21 / B21) · 1 / [ (N1/N2)(B12/B21) - 1 ] (5.7)
According to Boltzmann statistics, N1 ∝ e^(-E1/kT) and N2 ∝ e^(-E2/kT), so
N2 / N1 = e^(-hν/kT)
or N1 / N2 = e^(hν/kT) (5.8)
Substituting equation (5.8) into (5.7),
U(ν) = (A21 / B21) · 1 / [ e^(hν/kT) (B12/B21) - 1 ] (5.9)
Comparing this with Planck's formula for radiation,
U(ν) = (8πhν³ / c³) · 1 / ( e^(hν/kT) - 1 )
where c is the velocity of light, we get
A21 / B21 = 8πhν³ / c³
and B12 / B21 = 1, or B12 = B21.
That is, the probability of stimulated emission is the same as that of induced absorption.
Also, A21 / B21 ∝ ν³.
Probability of spontaneous emission dominates over induced emission more and
more as the energy difference between the states increases.
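From U(ν) above, the ratio of spontaneous to stimulated emission rates in thermal equilibrium is A21/(B21 U(ν)) = e^(hν/kT) - 1. The sketch evaluates this ratio for an optical and a microwave frequency at 300 K, showing why spontaneous emission dominates at optical frequencies (the two sample frequencies are assumed):

```python
# Ratio of spontaneous to stimulated emission rates in equilibrium:
# A21*N2 / (B21*U(nu)*N2) = exp(h*nu/kT) - 1, from the relations above.
import math

h = 6.63e-34      # Planck's constant, J.s
k = 1.38e-23      # Boltzmann constant, J/K
T = 300.0         # temperature, K

def ratio(nu):
    """Spontaneous/stimulated emission ratio at frequency nu."""
    return math.exp(h * nu / (k * T)) - 1.0

r_visible = ratio(5e14)     # ~600 nm visible light (assumed sample)
r_microwave = ratio(1e10)   # 10 GHz microwave (assumed sample)
print(f"visible: {r_visible:.2e}, microwave: {r_microwave:.2e}")
```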
Under ordinary conditions the number of atoms in the lower energy level is greater than
that in the higher energy level. But if by some means more atoms are made
available in higher energy level, then the situation is called population inversion. The
process involved in achieving this situation is called pumping.
Working
The transition 1 is optical pumping. The excited ions give up a part of their energy to the
crystal lattice by collision and decay to the metastable state E2. The transition 2 is a
radiationless transition. The metastable state has a relatively longer lifetime (~10⁻³ s) than
the usual lifetime (10⁻⁸ s). Thus the number of ions in state E2 goes on increasing while,
due to pumping, the number of ions in the ground state E1 goes on decreasing. In this way
population inversion is established between the ground and metastable states.
The output beam has wavelength λ = 6943 Å. The duration of the output flash is
about 300 μs. It is very intense (10,000 watt).
Construction
He – Ne laser consists of
1. A working substance in the form of a mixture of He and Ne gases in the ratio 7 : 1
at a total pressure of 1 torr (i.e. 1 mm of Hg).
2. A resonant cavity formed by a quartz tube of 50 cm length and 5 mm diameter, highly
polished and silvered at one end and partially silvered at the other, and
3. An exciting source for creating a discharge in the tube, using a radio-frequency
high-voltage generator (like a Tesla coil).
When electromagnetic energy is injected into the tube by means of the radio-frequency
high-voltage source, He atoms get excited to a metastable state. They in
turn collide with unexcited Ne atoms, and resonant energy transfer takes place so that Ne
atoms get excited to a specific energy level. After the transfer of energy, the He atoms
return to the ground state. The laser action takes place only in the Ne atoms, while He
serves only the purpose of enhancing the excitation process.
When population inversion occurs in neon atoms, they return to lower energy
level by emitting photons. The photons emitted parallel to axis of tube bounce back and
forth between polished faces and stimulate emission of same wavelength from other
excited Ne atoms. Thus photons get amplified and a powerful, coherent, parallel and
highly directional laser beam emerges from partially reflecting end face.
Module 5F. Semiconductor Laser
A semiconductor can also produce lasing action with high efficiency. A
semiconductor laser is obtained by forming a junction between p-type and n- type
materials in the same host lattice. The doping is done very heavily. The pn-junction is
forward biased. Recombination of electrons and holes is the basic mechanism responsible
for emission of light. The number of photons emitted depends on the value of forward
bias. As the current across pn-junction is gradually increased from zero, spontaneous
emission starts, which changes over to laser beam. This beam can be built up to a very
high value by polishing the faces of semiconductor pn-junction laser. The frequency of
this laser beam is
ν = Eg / h
where Eg is the energy band gap.
Solved Problems
Example 1: Light of wavelength 6730 Å is emitted by a He-Ne laser with an output power
of 2.3 mW. Calculate the number of photons emitted per second by the laser.
(Planck's constant h = 6.63 x 10⁻³⁴ J s)
Solution:
Planck's constant h = 6.63 x 10⁻³⁴ J s
Power P = n h ν, where n is the number of photons emitted per second, so
n = P / (hν) = P λ / (h c) = (2.3 x 10⁻³ x 6730 x 10⁻¹⁰) / (6.63 x 10⁻³⁴ x 3 x 10⁸)
≈ 7.8 x 10¹⁵ photons per second.
Example 2: Evaluate the wave length of radiation given out by a laser with
E2 – E1 = 3 eV. ( h = 6.63 x 10-34 JS, c = 3 x 108 m/s)
Solution:
Given h = 6.63 x 10⁻³⁴ J s, c = 3 x 10⁸ m/s.
As E = hν and E = E2 – E1, with c = νλ or ν = c/λ,
E2 – E1 = hν = h (c/λ)
λ = h c / (E2 – E1) = (6.63 x 10⁻³⁴ x 3 x 10⁸) / (3 x 1.6 x 10⁻¹⁹)
= 4.14 x 10⁻⁷ m
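The calculation of Example 1, the photon emission rate n = Pλ/(hc), can be written out in a few lines of code using the numbers given in that problem:

```python
# Photon emission rate of a laser: P = n_ph * h * nu, so
# n_ph = P/(h*nu) = P*lambda/(h*c). Numbers are those of Example 1.
h = 6.63e-34        # Planck's constant, J.s
c = 3e8             # speed of light, m/s
P = 2.3e-3          # laser output power, W
lam = 6730e-10      # wavelength, m (6730 angstrom)

n_ph = P * lam / (h * c)     # photons emitted per second
print(f"photons per second = {n_ph:.2e}")
```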
INFORMATION REQUIRED FOR QUIZ
Spontaneous emission is the process in which an excited atom remains in the higher level
for about 10⁻⁸ s and then, of its own accord, jumps to a lower energy level, emitting
radiation
Population inversion is the situation, in which more number of atoms are made
available in higher energy level than that in lower energy level.
The pumping action in Ruby laser is provided by helical xenon flash lamp
Ruby laser does not generate continuous laser beam. It is a pulsed laser
A semiconductor can produce lasing action with high efficiency. GaAs laser is the
first semiconductor laser
Lasers are used to investigate basic laws of interaction of atoms and molecules with
electromagnetic waves
Lasers are widely used in optical communication systems, medicine and biology
IMPORTANT FORMULAE
hν = E2 – E1
ν = (E2 – E1) / h
c = νλ
Module 6
Fibre Optics
Module 6A. Introduction
From time to time scientists have tried to design better communication systems by which
messages can be sent over long distances. A communication system consists of a
transmitter, a transmission channel (wave guide, wire, etc.) and a receiver. A signal gets
attenuated and distorted when transmitted through a wire. With the development of the
laser, a reliable and powerful source of coherent radiation became available, so it is natural
to use light for communication purposes. Optical frequencies are extremely large
(~10¹⁵ Hz) compared to radio waves (~10⁶ Hz) and microwaves (~10¹⁰ Hz). Therefore,
light acting as a carrier wave is capable of carrying far more information than radio waves
or microwaves. Indeed, at the present time the demand for information traffic is so dense
that only light waves can cope with it. Light, however, rapidly dissipates in traveling
through the open atmosphere. Hence some guiding channel is needed, just as a wire is
needed for an electric current. The optical fibre provides the necessary wave guide for
light.
Let light be incident on the fibre making an angle i with its axis, and let r be the angle made
by the light inside the fibre with the axis.
Then, from figure 12.1, θ = 90° – r.
If μ1 is the refractive index of the fibre and θc = sin⁻¹(1/μ1) is the critical angle for the
medium, then for θ > θc the ray gets totally internally reflected. Thus the ray undergoes
repeated internal reflections until it emerges at the other end, even if the fibre is bent.
Thus there is no loss of energy due to refraction. Electrical signals, after being converted
to light, can thus be transmitted using fibre bundles.
In case of optical fibres, it is essential that there must be very little absorption of
light as it travels through long distances inside the fibres. This is achieved by purification
and special preparations of optical fibre materials.
For total internal reflection at the core-cladding interface, the angle of incidence at the
interface, 90° – r, must exceed the critical angle, i.e., cos r ≥ μ2/μ1. Hence
sin² r ≤ 1 - μ2²/μ1²
sin² r ≤ ( μ1² - μ2² ) / μ1² (6.3)
From Snell's law,
μ1 = sin i / sin r or sin r = sin i / μ1
The condition for total internal reflection can therefore be expressed as
sin r ≤ ( μ1² - μ2² )½ / μ1 (6.4)
From equations (6.3) and (6.4), the maximum angle of incidence im satisfies
sin im = ( μ1² - μ2² )½
sin im is known as the Numerical Aperture (NA) of the fibre, which measures the
light-gathering power of the fibre:
NA = ( μ1² - μ2² )½
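The NA relations can be evaluated directly; the indices used below are the ones from the solved Example 1 later in this module (core 1.45, cladding 1.40):

```python
# Numerical aperture and acceptance angle of a step-index fibre:
# NA = sqrt(mu1^2 - mu2^2), i_m = arcsin(NA).
import math

mu1 = 1.45          # core refractive index (from the solved example)
mu2 = 1.40          # cladding refractive index (from the solved example)

NA = math.sqrt(mu1**2 - mu2**2)
i_m = math.degrees(math.asin(NA))     # acceptance angle, degrees
print(f"NA = {NA:.4f}, acceptance angle = {i_m:.1f} deg")
```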
Fig.6.3. Dependence of path of pulse on the angle of incidence
At the input end the pulses may be well separated while entering the optical fibre.
But because of broadening of pulses the neighbouring pulses may overlap at the output
end. They can not be distinguished as discrete pulses. Thus from unresolved or poorly
resolved pulses, no information can be retrieved. The smaller the pulse dispersion, the
greater will be the information transmission capacity of the fibre system. For a ray making
an angle θ with the axis, the distance AB is traversed in time
τ = (AC + CB) / (c/μ1) = μ1 (AB) / (c cos θ)
where c/μ1 is the speed of light in a medium of refractive index μ1. Here, c is the velocity
of light in free space.
That is, the time taken is a function of θ, the angle made by the ray with the axis, which
leads to pulse dispersion.
If we assume that all rays lying between θ = 0 and θ = θc are present, then the times taken
by the rays corresponding to θ = 0 and θ = θc = cos⁻¹(μ2/μ1) would be given by
τmin = μ1 L / c
τmax = μ1² L / (μ2 c)
Δτ = τmax - τmin = (μ1 L / c) ( μ1/μ2 - 1 )
An input pulse therefore gets widened as it travels along the fibre (Fig. 6.4: pulse
dispersion in an optical fibre).
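The step-index dispersion formula can be evaluated numerically; the fibre length and indices below are assumed illustrative values:

```python
# Pulse dispersion in a step-index fibre:
# delta_tau = (mu1*L/c) * (mu1/mu2 - 1). Indices and length are assumed.
mu1 = 1.46         # core index (assumed)
mu2 = 1.45         # cladding index (assumed)
L = 1.0e3          # fibre length, m (1 km, assumed)
c = 3e8            # speed of light, m/s

tau_min = mu1 * L / c                 # axial ray
tau_max = mu1**2 * L / (mu2 * c)      # most oblique guided ray
delta_tau = tau_max - tau_min
print(f"delta_tau = {delta_tau:.3e} s per km")
```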
Fig. 6.6. Parabolic-index fibre
For the limiting guided ray,
cos θ = sin θc = μ2 / μ1
so the ray invariant is
β̃ = μ1 cos θ = μ2
All rays with μ2 < β̃ < μ1 will undergo total internal reflection at the core-cladding
interface. For β̃ < μ2 the ray will be incident at an angle less than the critical angle, will
be refracted away, and will not be guided through the fibre.
The refractive-index profile of a parabolic-index fibre is
μ²(r) = μ1² [ 1 - 2Δ (r/a)² ] for 0 < r < a (core)
μ²(r) = μ1² [ 1 - 2Δ ] = μ2² for r > a (cladding)
Each ray which gets guided through the fibre is characterized by a definite time that it
requires for propagation through the fibre:
τ = (L / 2c) ( β̃ + μ1² / β̃ ) where L is the length of the fibre.
The time taken is minimum for the axial ray, for which β̃ = μ1. Thus,
τmin = L / (c/μ1) = μ1 L / c
For β̃ = μ2,
τmax = (L / 2c) ( μ2 + μ1² / μ2 )
Δτ = τmax - τmin = μ2 L / 2c + μ1² L / (2c μ2) - μ1 L / c
Δτ = (L / 2c μ2) ( μ2² + μ1² - 2 μ1 μ2 ) = (L / 2c μ2) ( μ1 - μ2 )²
Δτ = (μ2 L / 2c) Δ² where Δ = (μ1 - μ2) / μ2
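The parabolic-index result Δτ = (μ2 L/2c) Δ² can be compared numerically with the step-index formula for the same fibre, showing the large reduction in dispersion; the indices and length are assumed illustrative values:

```python
# Dispersion in a parabolic-index fibre, delta_tau = (mu2*L/(2c)) * Delta^2
# with Delta = (mu1 - mu2)/mu2, compared with the step-index result
# (mu1*L/c)*(mu1/mu2 - 1) for the same (assumed) fibre.
mu1 = 1.46     # core index (assumed)
mu2 = 1.45     # cladding index (assumed)
L = 1.0e3      # fibre length, m (assumed)
c = 3e8        # speed of light, m/s

Delta = (mu1 - mu2) / mu2
dt_parabolic = mu2 * L / (2 * c) * Delta**2
dt_step = mu1 * L / c * (mu1 / mu2 - 1)
print(f"parabolic: {dt_parabolic:.2e} s, step: {dt_step:.2e} s, "
      f"improvement factor {dt_step / dt_parabolic:.0f}")
```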
Module 6F. Optical Fibres in Communication And Sensing Applications
Optical fibres are extensively used in communication systems due to their inherent merits,
such as high information-carrying capacity and low cost per unit length, since the raw
material required, i.e., glass (silica), is available in plenty on the earth. As optical fibres
are insulators, electric current does not flow through them. Once the optic (light) wave is
trapped in the fibre, none of it leaks out during transmission, so there is no interference
with signals in nearby fibres. Also, due to their smaller size and light weight, they are
extensively used in aeronautical applications. They do not pick up or propagate
electromagnetic pulses caused by other means, and no short circuit is to be expected. As
fibres do not react with water or chemicals, corrosion is not severe. Optical fibres
withstand high temperatures. It is very difficult for an intruder to detect the signal that is
transmitted; thus the fibres offer high privacy and security for the messages to be
transmitted. This is the reason why fibre communication plays an important role in
defence services.
Fibre optics finds wide application in communication due to its large band width
by which large number of channels are handled efficiently.
Digital data transmission is carried by optical fibres. They are also employed in
cable television, space vehicles, ships and submarine cables.
Single fibers of small diameter have been used to study the wave guide behaviour of
optical fibers, while special fibers having a neodymium-doped core glass have been used
to generate or amplify light by stimulated emission i.e. the fiber laser.
Multifibers, consisting of a number of optically distinct clad fibers fused into a single
strand, retain the same mechanical properties as a single fiber and may be either rigid or
flexible. They are used to transmit light or images where flexibility is not required, and are
applied in high-speed photography and special aerial reconnaissance systems. Using
fiber-optic plates, recording on film can be done from a cathode-ray tube. The photometric
efficiency of such a plate is very high compared to a lens, nearly 40 times greater. The
fiber-optic plate can also be used to flatten the field for photographic purposes.
Solved Problems
Example 1: Calculate the numerical aperture and the acceptance angle for an optical fibre
given that refractive indices of the core and the cladding are 1.45 and 1.40
respectively.
Solution:
n1 = refractive index of the core = 1.45; n2 = refractive index of the cladding = 1.40
NA = ( n1² – n2² )½ = ( 1.45² – 1.40² )½ = ( 0.1425 )½ = 0.377
Acceptance angle im = sin⁻¹(0.377) ≈ 22.2°
Example 2: Given that the numerical aperture to be 0.2441 and refractive index of the
core to be 1.50, calculate the refractive index of the cladding and also the
acceptance angle.
Solution:
NA = ( n1² – n2² )½, so n2² = n1² – NA² = 1.50² – 0.2441² = 2.1904
n2 = 1.48
Acceptance angle im = sin⁻¹(0.2441) ≈ 14.1°
INFORMATION REQUIRED FOR QUIZ
An optical fibre is a hair thin cylindrical fibre of glass or any other transparent
dielectric
Optical fibres are cladded with the material of lower refractive index
Numerical aperture of a fibre measures the light-gathering power of the fibre:
NA = ( μ²core - μ²cladding )½
In graded index fibre, the refractive index gradually decreases from centre towards
core-cladding interface.
IMPORTANT FORMULAE
θc = sin⁻¹(1/μ1)
NA = ( μ1² - μ2² )½
τ = μ1 L / (c cos θ)
In a step-index fibre,
Δτ = (μ1 L / c) ( μ1/μ2 - 1 )
In a parabolic-index fibre,
Δτ = (μ2 L / 2c) Δ² where Δ = (μ1 - μ2) / μ2